SIX SIGMA AND BEYOND
The Implementation Process
SIX SIGMA AND BEYOND
A series by D.H. Stamatis

Volume I      Foundations of Excellent Performance
Volume II     Problem Solving and Basic Mathematics
Volume III    Statistics and Probability
Volume IV     Statistical Process Control
Volume V      Design of Experiments
Volume VI     Design for Six Sigma
Volume VII    The Implementation Process
D. H. Stamatis
SIX SIGMA AND BEYOND
The Implementation Process

ST. LUCIE PRESS
A CRC Press Company
Boca Raton    London    New York    Washington, D.C.
Library of Congress Cataloging-in-Publication Data

Stamatis, D. H., 1947-
    Six sigma and beyond : foundations of excellent performance / Dean H. Stamatis.
        p. cm. -- (Six Sigma and beyond series)
    Includes bibliographical references.
    ISBN 1-57444-314-3
    1. Quality control--Statistical methods. 2. Production management--Statistical
    methods. 3. Industrial management. I. Title. II. Series.
    TS156 .S73 2001
    658.5′62--dc21        2001041635
This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.

Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher.

The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC for such copying. Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe.
Visit the CRC Press Web site at www.crcpress.com

© 2003 by CRC Press LLC
St. Lucie Press is an imprint of CRC Press LLC

No claim to original U.S. Government works
International Standard Book Number 1-57444-314-3
Library of Congress Card Number 2001041635
Printed in the United States of America  1 2 3 4 5 6 7 8 9 0
Printed on acid-free paper

Volume VII: The Implementation Process  ISBN 1-57444-316-X
To John and Helen Chalapis (my teachers, mentors, friends and Koumbari)
Preface

In the first six volumes of this series, we followed a traditional writing style to explain, modify, and elaborate points, concepts, and issues as they pertained to the Six Sigma methodology. This volume is quite different and may be considered somewhat unorthodox. There are at least two reasons for this drastic shift in writing style. The first is that the material has already been presented in a very detailed fashion in the individual volumes; therefore, there is no need to repeat it. Second, this volume is an implementation volume, which means that the information is geared toward helping the reader formalize his own training, whatever that may be. We are very cognizant of the fact that a variety of organizations exist, each seeking its own application of the Six Sigma methodology. That is why we have developed the material in such a way as to help the reader address his own needs.

The first break in the material is Part I, in which we provide material essential to the reader in developing his own training. All human learning is about skills, knowledge, and attitudes (SKAs). Since we believe that these three attributes are the drivers for learning, we have spent several chapters making sure that SKAs are identified and planned in the training. We believe that with the basic tools described in these chapters anyone can create a system for training adults that is much faster, has high user validity, and is exceptionally adaptable. (This is very important because part of the implementation process requires that black belts cascade the training to green belts.)

The second break is Part II, in which we give the reader a prescriptive approach to training. We start by identifying the executives, champions, master black belts, black belts, and green belts and providing a general overview of the Six Sigma methodology. Within each of the groupings we also identify the objectives that each category should be responsible for and then gradually present the training outline for three options: 1) transactional, 2) technical, and 3) manufacturing. The outline is somewhat thematic in nature and in some places repeats what is contained in earlier chapters due to overlap in the knowledge requirements of the various levels of leadership. The reader is encouraged to return to the main volumes to extract more material and examples. (Although we would encourage the reader to generate personalized examples from his own processes.)

In some cases, the outline merely identifies a topic without further explanation, for example, FMEA, TRIZ, QFD, control charts, capability, capability indices, DOE, Taguchi, and others. The reason for such laconic statements is that we have gone to great lengths to explain these terms elsewhere in earlier volumes. On the other hand, there are situations in which we find it necessary to elaborate, explain, or reemphasize certain issues, even though we have already covered them. This is because these items are very important in the process of Six Sigma.
The third significant break is Part III, in which we discuss the training for DFSS and certification. This part also contains an epilog. The approach is the same as that described above.

In addition, the reader will notice that the objectives are identified in such a way that transactional, technical, and manufacturing executives, champions, master black belts, black belts, and green belts may benefit from the information. They are all grouped together in their respective categories. (For example, the objectives for the black belt are one entity covering the transactional, technical, and manufacturing areas.) The difference is in the selection process for each, which will depend on the background of the individual group and the organization's needs. In the case of the actual outlines, we have tried to make that distinction. However, the reader will notice that there is a great overlap in the content. This is not incidental; it is on purpose, because all groups must have virtually the same understanding of what Six Sigma is all about. The difference is in the detail of that knowledge.

Furthermore, as we already mentioned, the reader will notice that the outlines for each training group are short, in the sense that they are not very elaborate. Again, this is by design. We have tried to provide the structure of the content, and we hope that the reader will turn to volumes one through six to obtain detailed information as needed. We also have tried not to give any examples or simulations in the outline because we hope that the reader will generate his own examples as they relate to his organization. (If you need to generate examples, you may want to use some from the individual volumes of this series.)

Specifically, this volume contains the following chapters:

Chapter 1     Understanding the Learner and Instruction
Chapter 2     Front-End Analysis
Chapter 3     Design of Instruction
Chapter 4     Development of Material and Evaluation
Chapter 5     Delivery of Material and Evaluation
Chapter 6     Contract Training
Chapter 7     Six Sigma for Executives
Chapter 8     Six Sigma for Champions
Chapter 9     Six Sigma for Master Black Belts
Chapter 10    Six Sigma for Black Belts
Chapter 11    Six Sigma for Green Belts
Chapter 12    Six Sigma for General Orientation
Chapter 13    DFSS Training
Chapter 14    Six Sigma Certification
Acknowledgments

We have come to the end of this series on Six Sigma and Beyond, and I am indebted to so many individuals who have helped directly or indirectly along the way. A series of volumes of this magnitude is necessarily based on a wide variety of original sources. While I have made original contributions in some specific areas of analysis, and certainly in the conceptual framework of the topic, the bulk of the material (i.e., SPC, DOE, Taguchi, project management, reliability, statistics and probability, value analysis, and so many other topics) is based on or expanded from the contributions of others. I have very carefully shown the sources of these materials at the points they are discussed. I hope I have made no omissions.

I am indebted to The Six Sigma Academy, Biometrika, the Institute of Mathematical Statistics, CRC Press, Tennessee Associates, Marketing News, McGraw-Hill, John Wiley & Sons, Prentice Hall, Ford Motor Company, Thomson Learning, Houghton Mifflin Company, the American Supplier Institute, Mr. D. R. Bothe, and Dr. E. Buffa for granting me permission to use their material throughout these volumes. Special thanks to the people at CRC for helping me throughout this project in making the material presentable. You all are great!

Thanks also to the hundreds of seminar participants and graduate students at Central Michigan University who over the years have helped in defining some of my thoughts and clarifying others. These two sources have indeed been the laboratory for many of my thoughts and approaches. Based on their contributions I have modified and changed quite a few items for the better. I am indeed grateful.

My special thanks, however, are reserved for my family and especially my wife. Through her support and encouragement I was able to complete this project without any reservation or difficulty. Thank you.
About the Author

D. H. Stamatis, Ph.D., ASQC Fellow, CQE, CMfgE, is president of Contemporary Consultants in Southgate, Michigan. He received his B.S. and B.A. degrees in marketing from Wayne State University, his master's degree from Central Michigan University, and his Ph.D. degree in instructional technology and business/statistics from Wayne State University.

Dr. Stamatis is a certified quality engineer through the American Society for Quality Control, a certified manufacturing engineer through the Society of Manufacturing Engineers, and a graduate of BSI's ISO 9000 lead assessor training program. He is a specialist in management consulting, organizational development, and quality science and has taught these subjects at Central Michigan University, the University of Michigan, and the Florida Institute of Technology.

With more than 30 years of experience in management, quality training, and consulting, Dr. Stamatis has served and consulted for numerous industries in the private and public sectors. His consulting extends across the United States, Southeast Asia, Japan, China, India, and Europe.

Dr. Stamatis has written more than 60 articles and presented many speeches at national and international conferences on quality. He is a contributing author in several books and the sole author of 20 books. In addition, he has performed more than 100 automotive-related audits and 25 preassessment ISO 9000 audits and has helped several companies attain certification. He is an active member of the Detroit Engineering Society, the American Society for Training and Development, the American Marketing Association, and the American Research Association, and a fellow of the American Society for Quality Control.
Tables

Table 1.1    Instructional events as they relate to the five types of learned capability
Table 1.2    Typical delivery systems and related information
Table 1.3    Standard verbs to describe learning capabilities
Table 1.4    Desirable sequence characteristics associated with five types of learning outcome
Table 1.5    Decision cycle
Table 1.6    Different routes to organizational payoff
Table 1.7    Kirkpatrick's evaluation with several examples

Table 2.1    Data collection techniques
Table 2.2    Contributing factors to problems
Table 2.3    Front-end analysis report information
Table 2.4    Front-end analysis formative evaluation checklist
Table 2.5    Information about essential tasks
Table 2.6    Task analysis formative evaluation checklist

Table 3.1    Example of content outline – changing a tire (terminal objective)
Table 3.2    Example of instructional plan
Table 3.3    Types of instructional media
Table 3.4    Learning principles
Table 3.5    Design of formative evaluation checklist

Table 4.1    Development principles
Table 4.2    Example of rough draft of text – changing a tire (terminal objective)
Table 4.3    Rough draft evaluation form
Table 4.4    Development of materials – formative evaluation checklist
Table 4.5    Evaluation: pilot testing – formative evaluation checklist

Table 5.1    A typical delivery plan
Table 5.2    Delivery of materials – formative evaluation checklist
Table 5.3    On-the-job application – formative evaluation checklist
Table 5.4    Post-instructional data collection tools
Table 5.5    Self-evaluation measurement tool
Table 5.6    Research design action plan
Table 5.7    Evaluation: post-instruction – formative evaluation checklist

Table 6.1    Criteria for evaluating products
Table 6.2    Design and development principles
Table 6.3    Development principles
Table 6.4    Forms of rough drafts
Table 6.5    Typical audience's response questionnaire
Table 6.6    Learner/supervisor post-instructional agreement
Table 6.7    Research design action plan
Frequent Abbreviations in Six Sigma Methodology

ANOVA        Analysis of variance
COPQ         Cost of poor quality
COQ          Cost of quality
Cp           Short-term process capability
Cpk          Long-term process capability
CT           Critical to (matrix)
CTC          Critical to customer
CTD          Critical to delivery
CTP          Critical to process
CTQ          Critical to quality
CTS          Critical to satisfaction
CTX          Critical to process
CTY          Critical to product
D            Observed defects
df           Degrees of freedom
DOE          Design of experiments
DPO          Defects per opportunity
DPU          Defects per unit
DVP          Design validation (verification) plan
EVOP         Evolutionary operation
EVP          Engineering validation plan
EWMA         Exponentially weighted moving average
FMA          Failure mode analysis
FMEA         Failure mode and effect analysis
GR&R         Gage repeatability and reproducibility
J            Units scrapped
KPIV         Key process input variable
KPOV         Key process output variable
LCL          Lower control limit
LSL          Lower specification limit
m            Opportunities per unit
MCP          Manufacturing control process
m.tot        Opportunities submitted
MTBF         Mean time between failures
OA           Orthogonal array
P-diagram    Parameter diagram
PLEX         Planning experiment
PPM          Parts per million
PTAR         Plan-Train-Apply-Review
PVP          Process validation plan
QFD          Quality function deployment
R            Units repaired
RSS          Root sum of squares
S            Units passed
SIPOC        Supplier-Input-Process-Operation-Customer
SOP          Standard operating procedures
SPC          Statistical process control
SPM          Statistical process monitoring
SS           Sum of squares
SSBB         Six Sigma black belt
SSC          Six Sigma champion
SSGB         Six Sigma green belt
SSMBB        Six Sigma master black belt
TDPO         Total defects per unit
TOP          Total opportunities
U            Units submitted
UCL          Upper control limit
USL          Upper specification limit
WIP          Work in process
Y.A          Annual rate of improvement
Y.final      Final throughput
Y.ft         First time yield
Y.M          Monthly rate of improvement
Y.m          Yield per opportunity
Y.normal     Normalized yield
Y.rt         Rolled-throughput yield
Z.lt         Long-term sigma
Z.shift      Shift factor
Z.st         Short-term sigma
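Several of the metrics above are arithmetically related, and it is worth seeing the chain once before the training chapters use it. The sketch below works from raw defect counts to sigma values for a hypothetical process, assuming the standard Six Sigma definitions (first-time yield via the Poisson approximation, Y.ft ≈ e^(-DPU), and the conventional 1.5 sigma shift between long- and short-term Z). The counts and variable names are illustrative only, not drawn from this text.

```python
# Minimal sketch of the standard Six Sigma metric chain. The input counts
# are hypothetical, and the 1.5-sigma shift is the conventional assumption,
# not a measured value.
from math import exp
from statistics import NormalDist

D, U, m = 30, 1000, 5       # observed defects, units submitted, opportunities per unit

dpu = D / U                 # DPU: defects per unit
dpo = D / (U * m)           # DPO: defects per opportunity
dpmo = dpo * 1_000_000      # defects per million opportunities (often quoted as PPM)

y_ft = exp(-dpu)            # Y.ft: first-time yield (Poisson approximation)

z_lt = NormalDist().inv_cdf(1 - dpo)   # Z.lt: long-term sigma implied by DPO
z_st = z_lt + 1.5                      # Z.st: short-term sigma via Z.shift = 1.5

print(f"DPU={dpu:.3f}  DPO={dpo:.4f}  DPMO={dpmo:.0f}")
print(f"Y.ft={y_ft:.3f}  Z.lt={z_lt:.2f}  Z.st={z_st:.2f}")
```

Under these same assumptions, the familiar benchmark holds: 3.4 defects per million opportunities long term implies Z.lt of about 4.5, which the 1.5 shift converts to the short-term "six sigma" level.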
Table of Contents Part I Understanding Adult Training and Instructional Design ................................................... 1 Introduction..............................................................................................................3 What Is Diffusion? ....................................................................................................3 Characteristics of Innovations ...................................................................................4 The Process of Six Sigma Diffusion in the Organization ........................................5 Reference ...................................................................................................................6 Chapter 1
Understanding the Learner and Instruction .........................................7
Expectations for Participants.....................................................................................7 Prepare for Successful Learning ...............................................................................7 Prepare for Each Training Course.............................................................................8 Assume an Active Role in the Learning Environment .............................................9 Understanding Adult Learners.................................................................................10 To Start With, We Must Recognize That Adults Are Interested In.................10 Principles of Instructional Design...........................................................................14 Stages or Phases of Design .....................................................................................14 Conditions of Learning ....................................................................................15 Desirable Sequence Characteristics Associated with Five Types of Learning Outcome...............................................................................................20 References................................................................................................................24 Chapter 2
Front-End Analysis ............................................................................25
Introduction..............................................................................................................25 Problem-Solving Front-End Analysis .....................................................................26 Task Analysis ...........................................................................................................31 Steps in Task Analysis.............................................................................................33 References................................................................................................................39 Selected Bibliography..............................................................................................39 Chapter 3
Design of Instruction .........................................................................41
Preparation ...............................................................................................................41 Steps in Design of Instruction.................................................................................41 References................................................................................................................48 Selected Bibliography..............................................................................................48
SL316X FMFrame Page 18 Wednesday, October 2, 2002 8:24 AM
Chapter 4
Development of Material and Evaluation..........................................51
Steps in Development of Materials .........................................................................51 Planning ............................................................................................................51 Implementation ........................................................................................................52 Evaluation: Pilot Testing .........................................................................................55 Steps in Pilot Testing...............................................................................................57 Planning ............................................................................................................57 Implementation ........................................................................................................59 References................................................................................................................61 Selected Bibliography..............................................................................................61 Chapter 5
Delivery of Material and Evaluation .................................................63
Steps in Delivery of Materials ................................................................................63 Planning ............................................................................................................63 Preparation.................................................................................................63 Implementation ........................................................................................................66 On-the-Job Application............................................................................................69 Steps in On-the-Job Application .............................................................................69 Planning ............................................................................................................69 Preparation.................................................................................................69 Implementation ........................................................................................................71 Before Training........................................................................................................71 After Training ..........................................................................................................72 Evaluation: Post-Instruction ....................................................................................73 Steps in Post-Instructional Evaluation ....................................................................74 Planning ............................................................................................................74 Implementation ........................................................................................................77 References................................................................................................................79 Selected Bibliography..............................................................................................79 Chapter 6
Contract Training ...............................................................................81
Front-End Analysis ..................................................................................................81 Task Analysis ...........................................................................................................82 Design of Instruction ...............................................................................................83 Design of Job Aids ..................................................................................................84 Development of Materials .......................................................................................85 Evaluation: Pilot Testing .........................................................................................88 Delivery of Materials...............................................................................................88 On-the-Job Application............................................................................................90 Evaluation: Post-Instruction ....................................................................................90 References................................................................................................................92 Selected Bibliography..............................................................................................92
SL316X FMFrame Page 19 Wednesday, October 2, 2002 8:24 AM
Part II Training for the DMAIC Model................................ 93 Chapter 7
Six Sigma for Executives...................................................................95
Instructional Objectives — Executives....................................................................95 Recognize Customer Focus..............................................................................95 Business Metrics...............................................................................................95 Six Sigma Fundamentals..................................................................................96 Define Nature of Variables ...............................................................................98 Opportunities for Defects .................................................................................98 CTX Tree ..........................................................................................................98 Process Mapping ..............................................................................................98 Process Baselines..............................................................................................99 Six Sigma Projects ...........................................................................................99 Six Sigma Deployment.....................................................................................99 Measure.............................................................................................................99 Scales of Measure .....................................................................................99 Data Collection.................................................................................................99 Measurement Error....................................................................................99 Statistical Distributions ...........................................................................100 Static Statistics ........................................................................................100 Dynamic Statistics...................................................................................100 Analyze Six Sigma Statistics ..................................................................100 Process Metrics .......................................................................................101 Diagnostic Tools......................................................................................101 Simulation Tools......................................................................................101 Statistical Hypotheses .............................................................................101 Continuous Decision Tools ............................................................................101 Discrete Decision Tools..................................................................................101 Improve Experiment Design Tools ................................................................101 Robust Design Tools ......................................................................................102 Empirical Modeling Tools..............................................................................102 Tolerance Tools...............................................................................................102 Risk Analysis Tools ........................................................................................102 DFSS Principles .............................................................................................102 Control Precontrol Tools ................................................................................102 Continuous SPC 
Tools....................................................................................102 Discrete SPC Tools.........................................................................................102 Outline of Actual Executive Training Content — 1 Day .....................................102 Maximize Customer Value .............................................................................103 Minimize Process Costs .................................................................................103 Six Sigma Leadership............................................................................................103 The Six Sigma DMAIC Model......................................................................103
SL316X FMFrame Page 20 Wednesday, October 2, 2002 8:24 AM
How Six Sigma Fits .......................................................................................103 Leadership Prerequisites.................................................................................104 Deployment Infrastructure..............................................................................104 Sustaining the Gains.......................................................................................104 Project Review Guidelines .............................................................................104 Alternative Six Sigma Executive Training — 2 Days ..........................................105 Measurement...................................................................................................105 Maximizing the Customer Supplier Relationship..........................................106 The Classical vs. the Six Sigma Perspective of Yield...................................106 Traditional Yield View....................................................................................106 The Two Types of Defect Models..................................................................106 Process Characterization ................................................................................106 The Focus of Six Sigma — Customer Satisfaction and Organizational Profitability .....................................................................................................106 Definition of a Problem..................................................................................107 Roles and Responsibilities .............................................................................107 Roles of a Champion......................................................................................107 Roles of the Master Black Belt......................................................................107 Roles of the Black Belt ..................................................................................108 There Are Five Actions That Have Proven Critical to Continued Six Sigma Breakthrough ..........................................................109 Six Sigma Breakthrough ................................................................................109 Define..............................................................................................................110 Purpose ...........................................................................................................110 Questions to Be Answered .............................................................................110 A Typical Checklist for the Define Phase .....................................................110 Tools................................................................................................................111 Measure...........................................................................................................111 Purpose ...........................................................................................................111 Questions to Be Answered .............................................................................111 Typical Checklist for the Measure Phase ......................................................112 Tools................................................................................................................112 Analyze ...........................................................................................................112 Purpose 
...........................................................................................................112 Questions to Be Answered .............................................................................112 Typical Checklist for the Analyze Phase .......................................................113 Tools................................................................................................................113 Improve...........................................................................................................113 Purpose ...........................................................................................................113 Questions to Be Answered .............................................................................113 Typical Checklist for the Improve Phase.......................................................114 Tools................................................................................................................114 Control ............................................................................................................114 Purpose ...........................................................................................................114 Questions to Be Answered .............................................................................114 Typical Checklist for the Control Phase ........................................................115
SL316X FMFrame Page 21 Wednesday, October 2, 2002 8:24 AM
Tools................................................................................................................115 Six Sigma — The Initiative............................................................................115 Process — Systematic Approach to Reducing Defects That Affect What Is Important to the Customer ...............................................................115 Six Sigma... the Practical Sense .............................................................116 Foundation of the Tools .................................................................................116 Getting to Six Sigma......................................................................................116 The Standard Deviation..................................................................................116 Chapter 8
Six Sigma for Champions................................................................117
Curriculum Objectives for Champion Training ....................................................118 Recognize .......................................................................................................118 Customer Focus.......................................................................................118 Business Metrics .....................................................................................118 Six Sigma Fundamentals.........................................................................118 Define..............................................................................................................120 Nature of Variables..................................................................................120 Opportunities for Defects........................................................................120 CTX Tree.................................................................................................121 Process Mapping .....................................................................................121 Process Baselines ....................................................................................121 Six Sigma Projects ..................................................................................121 Six Sigma Deployment ...........................................................................121 Measure...........................................................................................................122 Scales of Measure ...................................................................................122 Data Collection........................................................................................122 Measurement Error..................................................................................122 Statistical Distributions ...........................................................................122 Static Statistics ........................................................................................122 Dynamic Statistics...................................................................................123 Analyze ...........................................................................................................123 Six Sigma Statistics.................................................................................123 Process Metrics .......................................................................................124 Diagnostic Tools......................................................................................124 Simulation Tools......................................................................................124 Statistical Hypotheses .............................................................................125 Continuous Decision Tools .....................................................................125 Discrete Decision Tools ..........................................................................126 Improve...........................................................................................................126 Experiment Design Tools........................................................................126 Robust Design Tools ...............................................................................127 Empirical Modeling Tools.......................................................................127 Tolerance Tools .......................................................................................127 Risk Analysis 
Tools.................................................................................127 DFSS Principles ......................................................................................127
SL316X FMFrame Page 22 Wednesday, October 2, 2002 8:24 AM
Control ............................................................................................................128 Precontrol Tools ......................................................................................128 Continuous SPC Tools ............................................................................128 Discrete SPC Tools .................................................................................128 Six Sigma Project Champion Transactional (General Business and Service — Nonmanufacturing) Training .........................128 Six Sigma Breakthrough Goal .......................................................................129 Six Sigma Goal ..............................................................................................129 Comparison between Three Sigma and Six Sigma Quality..........................129 Short Historical Background..........................................................................129 Overview of the Big Picture ..........................................................................130 Identify Customer...........................................................................................132 The DMAIC Process ......................................................................................133 Detailed Model Explanation ..........................................................................135 Performance Metrics Reporting .....................................................................135 Establish Customer Focus ..............................................................................135 Define Variables: Key Questions Are.............................................................136 The Focus of Six Sigma.................................................................................136 Process Optimization......................................................................................136 Process Baseline: Key Questions Are ............................................................136 Process Mapping ............................................................................................137 Cause and Effect.............................................................................................138 The Approach to C&E Matrix .......................................................................138 Links of C&E Matrix to Other Tools ............................................................138 Basic Statistics................................................................................................138 Converting DPM to a Z Equivalent ...............................................................139 Basic Graphs...................................................................................................139 Analyze ...........................................................................................................140 Improve...........................................................................................................140 Control ............................................................................................................140 Six Sigma Project Champion — Technical Training............................................140 Six Sigma Breakthrough Goal .......................................................................141 Six Sigma Goal ..............................................................................................141 Comparison between Three Sigma and Six Sigma Quality..........................142 Short Historical 
Background..........................................................................142 Overview of the Big Picture ..........................................................................142 Identify Customer...........................................................................................145 The DMAIC Process ......................................................................................146 Detailed Model Explanation ..........................................................................148 Performance Metrics Reporting .....................................................................148 Establish Customer Focus ..............................................................................148 Define Variables: Key Questions Are.............................................................148 The Focus of Six Sigma.................................................................................149 Process Optimization......................................................................................149
SL316X FMFrame Page 23 Wednesday, October 2, 2002 8:24 AM
Process Baseline .............................................................................................149 Process Mapping ............................................................................................150 Cause and Effect.............................................................................................151 The Approach to C&E Matrix .......................................................................151 Links of C&E Matrix to Other Tools ............................................................151 Basic Statistics................................................................................................151 Converting DPM to a Z Equivalent ...............................................................152 Basic Graphs...................................................................................................152 Analyze ...........................................................................................................152 Improve...........................................................................................................153 Control ............................................................................................................153 Six Sigma Project Champion Training — Manufacturing ...................................153 Exploring Our Values ............................................................................................153 Short Overview...............................................................................................153 Six Sigma Manufacturing Champion Training — Getting Started ......................155 Tips on Success for Six Sigma Manufacturing Champion...................................165 Champion Issues....................................................................................................166 Project Report Out.................................................................................................169 Project Presentation Milestone Requirements — Week 1 Training .....................174 Presentation Goals ..........................................................................................174 Presentation Notes ..........................................................................................175 Project Presentation — Week 2.............................................................................175 Presentation Goals ..........................................................................................175 Presentation Notes ..........................................................................................175 Project Presentation – Week 3...............................................................................175 Presentation Goals ..........................................................................................175 Presentation Notes ..........................................................................................176 Project Presentation – Week 4...............................................................................176 Presentation Goals ..........................................................................................176 Presentation Notes ..........................................................................................176 Typical Champion’s Questions for the Project Review........................................177 In the Define Phase ........................................................................................177 Have You 
.................................................................................................177 For Each Individual Project, Have You: .................................................177 In the Measure Phase .....................................................................................177 Typical Questions at This Phase Should Be:..........................................177 In the Analyze Phase......................................................................................178 Typical Questions in This Phase Should Be: .........................................178 In the Improve Phase......................................................................................178 Typical Questions in This Phase Should Be: .........................................178 In the Control Phase.......................................................................................179 Typical Questions in This Phase Should Be: .........................................179 Reference ...............................................................................................................179 Selected Bibliography............................................................................................179
SL316X FMFrame Page 24 Wednesday, October 2, 2002 8:24 AM
Chapter 9
Six Sigma for Master Black Belts...................................................181
Instructional Objectives — Shogun (Master Black Belt) .....................................181 Recognize .......................................................................................................181 Customer Focus.......................................................................................181 Business Metrics .....................................................................................181 Six Sigma Fundamentals.........................................................................182 Define..............................................................................................................184 Nature of Variables..................................................................................184 Opportunities for Defects........................................................................184 CTX Tree.................................................................................................184 Process Mapping .....................................................................................184 Process Baselines ....................................................................................185 Six Sigma Projects ..................................................................................185 Six Sigma Deployment ...........................................................................185 Measure...........................................................................................................186 Scales of Measure ...................................................................................186 Data Collection........................................................................................186 Measurement Error..................................................................................186 Statistical Distributions ...........................................................................186 Static Statistics ........................................................................................187 Dynamic Statistics...................................................................................187 Analyze ...........................................................................................................188 Six Sigma Statistics.................................................................................188 Process Metrics...............................................................................................188 Diagnostic Tools......................................................................................189 Simulation Tools......................................................................................189 Statistical Hypotheses .............................................................................189 Continuous Decision Tools .....................................................................190 Discrete Decision Tools ..........................................................................191 Improve...........................................................................................................192 Experiment Design Tools........................................................................192 Robust Design Tools ...............................................................................194 Empirical Modeling Tools..............................................................................194 Tolerance Tools .......................................................................................194 Risk Analysis 
Tools.................................................................................195 DFSS Principles ......................................................................................195 Control ............................................................................................................195 Precontrol Tools ......................................................................................195 Continuous SPC Tools ............................................................................196 Discrete SPC Tools .................................................................................196 Training...........................................................................................................196 Chapter 10 Six Sigma for Black Belts ...............................................................199 Instructional Objectives — Black Belt..................................................................199
SL316X FMFrame Page 25 Wednesday, October 2, 2002 8:24 AM
Recognize .......................................................................................................199 Customer Focus.......................................................................................199 Business Metrics .....................................................................................200 Six Sigma Fundamentals.........................................................................200 Define..............................................................................................................202 Nature of Variables..................................................................................202 Opportunities for Defects........................................................................202 CTX Tree.................................................................................................202 Process Mapping .....................................................................................203 Process Baselines ....................................................................................203 Six Sigma Projects ..................................................................................203 Six Sigma Deployment ...........................................................................203 Measure...........................................................................................................204 Scales of Measure ...................................................................................204 Data Collection........................................................................................204 Measurement Error..................................................................................204 Statistical Distributions ...........................................................................204 Static Statistics ........................................................................................205 Dynamic Statistics...................................................................................205 Analyze ...........................................................................................................206 Six Sigma Statistics.................................................................................206 Process Metrics .......................................................................................206 Diagnostic Tools......................................................................................207 Simulation Tools......................................................................................207 Statistical Hypotheses .............................................................................207 Continuous Decision Tools .....................................................................208 Discrete Decision Tools ..........................................................................209 Improve...........................................................................................................210 Experiment Design Tools........................................................................210 Robust Design Tools ...............................................................................212 Empirical Modeling Tools..............................................................................212 Tolerance Tools .......................................................................................212 Risk Analysis Tools.................................................................................213 DFSS Principles 
......................................................................................213 Control ............................................................................................................213 Precontrol Tools ......................................................................................213 Continuous SPC Tools ............................................................................214 Discrete SPC Tools .................................................................................214 Content of Black Belt Training — Outline...........................................................214 Transactional Training – 4-Week Training ....................................................215 Week 1 ...................................................................................................................215 Week 2 ...................................................................................................................226 Key Questions from Week 1 ..........................................................................226 Week 3 ...................................................................................................................228 Week 4 ...................................................................................................................233
Technical Training — 4 Weeks .............................................................................237 Week 1 ...................................................................................................................237 Week 2 ...................................................................................................................252 Hypothesis Testing Introduction ....................................................................256 Parameters vs. Statistics .................................................................................256 Formulating Hypotheses.................................................................................257 Week 3 ...................................................................................................................257 Week 4 ...................................................................................................................263 Fractional Factorials .......................................................................................263 Control Plans ..................................................................................................277 Manufacturing Training – 4 Weeks.......................................................................281 Week 1 ...................................................................................................................281 Week 2 ...................................................................................................................296 Hypothesis Testing Introduction ....................................................................299 Week 3 ...................................................................................................................299 DOE Introduction ...........................................................................................300 Week 4 ...................................................................................................................303 Fractional Factorials .......................................................................................304 SPC Flowchart................................................................................................311 Control Plans ..................................................................................................318 Chapter 11 Six Sigma for Green Belts...............................................................323 Instructional Objectives — Green Belt .................................................................323 Recognize .......................................................................................................323 Customer Focus.......................................................................................323 Business Metrics .....................................................................................323 Six Sigma Fundamentals.........................................................................324 Define..............................................................................................................326 Nature of Variables..................................................................................326 Opportunities for Defects........................................................................326 CTX Tree.................................................................................................326 Process Mapping .....................................................................................326 Process Baselines ....................................................................................327 Six Sigma Projects ..................................................................................327 Six Sigma Deployment ...........................................................................327 Measure...........................................................................................................327 Scales of Measure ...................................................................................327 Data Collection........................................................................................328 Measurement Error..................................................................................328 Statistical Distributions ...........................................................................328 Static Statistics ........................................................................................328 Dynamic Statistics...................................................................................329 Analyze ...........................................................................................................329 Six Sigma Statistics.................................................................................329 Process Metrics .......................................................................................330
Diagnostic Tools......................................................................................331 Simulation Tools......................................................................................331 Statistical Hypotheses .............................................................................331 Continuous Decision Tools .....................................................................332 Discrete Decision Tools ..........................................................................333 Improve...........................................................................................................334 Experiment Design Tools........................................................................334 Robust Design Tools ...............................................................................335 Empirical Modeling Tools.......................................................................336 Tolerance Tools .......................................................................................336 Risk Analysis Tools.................................................................................336 DFSS Principles ......................................................................................336 Control ............................................................................................................336 Precontrol Tools ......................................................................................336 Continuous SPC Tools ............................................................................336 Discrete SPC Tools .................................................................................337 Six Sigma Transactional Green Belt Training ......................................................337 The DMAIC Model in Detail ........................................................................341 The Define Phase............................................................................................341 Who Is the Customer?....................................................................................341 Measurement Phase ........................................................................................341 Measurement Systems Analysis .....................................................................343 The Analysis Phase ........................................................................................344 The Improvement Phase.................................................................................345 The Control Phase ..........................................................................................345 Selecting Statistical Techniques..............................................................348 Hypothesis Testing Introduction ....................................................................351 Parameters vs. Statistics .................................................................................351 Introduction to Design of Experiments..........................................................352 Screening Designs ..........................................................................................353 Control Plans ..................................................................................................357 Six Sigma Green Belt Training — Technical .......................................................362 Short Historical Background..........................................................................362 The DMAIC Process ......................................................................................364 The DMAIC Model in Detail ........................................................................364 Define ......................................................................................................364 Measure ...................................................................................................365 Analyze....................................................................................................367 Improve....................................................................................................367 Control.....................................................................................................368 Six Sigma Green Belt Training — Manufacturing ...............................................368 Phases of Process Improvement.....................................................................371 The Define Phase ....................................................................................371 The Measurement Phase .........................................................................372 Measurement Systems Analysis..............................................................373
The Analysis Phase ........................................................................................374 The Improvement Phase..........................................................................375 The Control Phase...................................................................................375 Selecting Statistical Techniques..............................................................378 Hypothesis Testing Introduction ....................................................................381 Parameters vs. Statistics .................................................................................381 Introduction to Design of Experiments..........................................................382 Screening Designs ..........................................................................................383 Control Plans ..................................................................................................387 Reference ...............................................................................................................391 Chapter 12 Six Sigma for General Orientation..................................................393 Instructional Objectives — General ......................................................................393 Recognize .......................................................................................................393 Customer Focus.......................................................................................393 Business Metrics .....................................................................................394 Six Sigma Fundamentals.........................................................................394 Define..............................................................................................................395 Nature of Variables..................................................................................395 Opportunities for Defects........................................................................395 CTX Tree.................................................................................................395 Process Mapping .....................................................................................395 Process Baselines ....................................................................................396 Six Sigma Projects ..................................................................................396 Six Sigma Deployment ...........................................................................396 Measure...........................................................................................................396 Scales of Measure ...................................................................................396 Data Collection........................................................................................396 Measurement Error..................................................................................396 Statistical Distributions ...........................................................................396 Static Statistics ........................................................................................397 Dynamic Statistics...................................................................................397 Analyze ...........................................................................................................397 Six Sigma Statistics.................................................................................397 Process Metrics .......................................................................................397 Diagnostic Tools......................................................................................397 Simulation Tools......................................................................................397 Statistical Hypotheses .............................................................................397 Continuous Decision Tools .....................................................................397 Discrete Decision Tools ..........................................................................398 Improve...........................................................................................................398 Experiment Design Tools........................................................................398 Robust Design Tools ...............................................................................398 Empirical Modeling Tools.......................................................................398 Tolerance Tools .......................................................................................398
Risk Analysis Tools.................................................................................398 DFSS Principles ......................................................................................398 Control ............................................................................................................398 Precontrol Tools ......................................................................................398 Continuous SPC Tools ............................................................................399 Discrete SPC Tools .................................................................................399 Outline of Content..........................................................................................399 Process Improvement .....................................................................................399 Define..............................................................................................................400 Measure...........................................................................................................400 Measurement ...........................................................................................400 Variation..........................................................................................................401 Sampling..................................................................................................401 Simple Calculations and Conversions ....................................................401 Analyze ...........................................................................................................402 Data Analysis...........................................................................................402 Cause-and-Effect Analysis ......................................................................402 Root Causes Verification .........................................................................402 Determine the Opportunity .....................................................................402 Improve....................................................................................................402
Part III Training for the DCOV Model ............................... 405 Chapter 13 DFSS Training..................................................................................407 The Actual Training for DFSS ..............................................................................407 Executive DFSS Training ......................................................................................408 DFSS Champion Training .....................................................................................416 DFSS – 2-Day Program ........................................................................................416 DFSS Champion Training Outline — 4 Days ......................................................417 Project Member and BB DFSS Training ..............................................................421 Week 1 ............................................................................................................422 DCOV Model in Detail ..................................................................................429 The Define Phase ....................................................................................429 The Characterize Phase...........................................................................431 Ideal Function and P-Diagram .......................................................................433 Identifying Technical Metrics ........................................................................434 Week 2 ............................................................................................................438 The Optimize Phase ................................................................................438 Design for Producibility .................................................................................446 Deliverables/Checklist for the Optimize Phase .............................................446 The Verify Phase ............................................................................................447 Step 1: Update/Develop Test Plan Details.....................................................447
Step 2: Conduct Test ......................................................................................449 Step 3: Analyze/Assess Results .....................................................................450 Step 4: Does the Design Pass Requirements? ...............................................450 Step 5: Develop Failure Resolution Plan.......................................................451 Step 6: Record Actions on Design Verification Plan and Report (DVP&R) ........................................................................................................452 Step 7: Complete DVP&R .............................................................................453 Selected Bibliography............................................................................................454 Chapter 14 Six Sigma Certification....................................................................455 The Need for Certification ....................................................................................457 General Comments Regarding Certification as It Relates to Six Sigma .............459 Conclusion .............................................................................................................461 References..............................................................................................................463 Epilog ....................................................................................................................465 Glossary ................................................................................................................467 Selected Bibliography..........................................................................................535 Index for Volume VII ..........................................................................................539 Index for Volume I...............................................................................................547 Index for Volume II .............................................................................................565 Index for Volume III............................................................................................571 Index for Volume IV............................................................................................579 Index for Volume V..............................................................................................591 Index for Volume VI ............................................................................................605
Part I Understanding Adult Training and Instructional Design
Introduction

In 400 B.C., the Greek playwright Sophocles said, "You must learn by doing the thing; though you think you know it, you have no certainty until you try." So it is with the Six Sigma methodology. Many claim to know it, and others claim they have done it. The fact of the matter is, however, that inconsistencies exist in Six Sigma implementation, training, and results. This is partly because, as of this writing, there is still no recognized body of knowledge (BOK); the reader should be aware that the BOKs published by the American Society for Quality (ASQ) and others, including the author's own, have not met with universal acceptance, particularly where certification is concerned. In addition, the Six Sigma methodology is to a certain extent enshrouded in mystique. It is hoped that this volume will help dispel that mystique and present the implementation and training in a format that many can use. We start by focusing on diffusion.
WHAT IS DIFFUSION? Diffusion is the process by which an innovation is communicated through certain channels over time among members of a social system. (In our case, the social system is the organization). It is a special type of communication in that the messages are concerned with new ideas. Communication is a process in which participants create and share information with one another in order to reach a mutual understanding. (In our case, the mutual understanding is twofold: a) customer satisfaction and b) organizational profitability, however defined.) This definition implies that communication is a process of convergence (or divergence) as two or more individuals exchange information in order to move toward each other (or apart) in the meanings that they ascribe to certain events. (In our case, it is hoped that the issue of convergence will be the predominant factor for improvement.) We think of communication as a two-way process of convergence, rather than as a one-way, linear act in which one individual seeks to transfer a message to another. Such a simple conception of human communication may accurately describe certain communication acts or events involved in diffusion, such as when a change agent seeks to persuade a client to adopt an innovation. But when we look at what came before such an event and at what follows, we often realize that this sort of communication is only one part of a total process in which information is exchanged between the two individuals. For example, the client may come to the change agent with a problem or need, and the innovation is recommended as a possible solution. And if we look at the change agent–client interaction in a broader context, we may see that their interaction continues through several cycles and is indeed a process of information exchange. (In our case, the agent is generally the sponsor/champion, and the interaction is between master black belts, black belts and green belts.)
Thus, diffusion is a special type of communication in which the messages are concerned with a new idea. (In our case, the new idea is the Six Sigma methodology.) It is this newness of the idea in the message content of communication that gives diffusion its special character. The newness means that some degree of uncertainty is involved. Uncertainty is the degree to which a number of alternatives are perceived with respect to the occurrence of an event and the relative probability of these alternatives; it implies a lack of predictability, structure, and information. In fact, information represents one of the main means of reducing uncertainty. That is why, when dealing with Six Sigma at any level and in any capacity, there must be open communication. Information is the difference in matter–energy that affects uncertainty in a situation where a choice exists among alternatives (Rogers and Kincaid, 1981, p. 64). (Information reduces uncertainty. Communication is the exchange of information. Furthermore, communication defuses anxiety and fear and, above all, encourages participation and ownership.) Diffusion is a kind of social change, defined as the process by which alteration occurs in the structure and function of a social system. When new ideas are invented, diffused, and adopted or rejected, leading to certain consequences, social change occurs. (In our case, we expect change — a positive change. In fact, we expect a breakthrough change in both customer satisfaction and organizational profitability.) Of course, such change can happen in other ways, too, for example, through a political revolution or through a natural event like a drought or an earthquake.

The reader will notice that we use diffusion and dissemination interchangeably, although some will disagree as to their meaning, because the distinction is often not very clear in actual practice. The general convention is to use the word diffusion to include both the planned and the spontaneous spread of new ideas. (To be sure, any Six Sigma initiative is indeed planned and very rarely, if ever, spontaneous.) But we do find it useful to distinguish between centralized and decentralized diffusion systems. In a centralized diffusion system, decisions about such matters as when to begin diffusing an innovation, who should evaluate it, and through what channels it should be diffused are made by a small number of officials or technical experts at the head of a change agency. In a decentralized diffusion system, such decisions are more widely shared by the clients and potential adopters; here, horizontal networks among clients are the main mechanism by which innovations spread. In fact, in extremely decentralized diffusion systems, there may not be a change agency; potential adopters are solely responsible for the self-management of the diffusion of innovations. New ideas may grow out of the practical experience of certain individuals in the client system rather than come from formal research and development activities.
CHARACTERISTICS OF INNOVATIONS

Just because something is new does not mean that a) it is better or b) people will accept it. The characteristics of innovations, as perceived by individuals, help to explain their different rates of adoption.
• Relative advantage is the degree to which an innovation is perceived as better than the idea it supersedes. The degree of relative advantage may be measured in economic terms, but social prestige factors, convenience, and satisfaction are also often important components. It does not matter so much whether an innovation has a great deal of "objective" advantage. What does matter is whether an individual perceives the innovation as advantageous. The greater the perceived relative advantage of an innovation, the more rapid its rate of adoption.
• Compatibility is the degree to which an innovation is perceived as being consistent with the existing values, past experiences, and needs of potential adopters. An idea that is incompatible with the prevalent values and norms of a social system will not be adopted as rapidly as an innovation that is compatible. The adoption of an incompatible innovation often requires the prior adoption of a new value system.
• Complexity is the degree to which an innovation is perceived as difficult to understand and use. Some innovations are readily understood by most members of a social system; others are more complicated and will be adopted more slowly. In general, new ideas that are simpler to understand will be adopted more rapidly than innovations that require the adopter to develop new skills and understandings.
• Trialability is the degree to which an innovation may be experimented with on a limited basis. New ideas that can be tried on the installment plan will generally be adopted more quickly than innovations that are not divisible.
• Observability is the degree to which the results of an innovation are visible to others. The easier it is for individuals to see the results of an innovation, the more likely they are to adopt it.
In the case of Six Sigma, all the characteristics are present and indeed qualify the Six Sigma methodology as an innovation. The relative advantage is that for the past several years many large companies have claimed tremendous gains in profitability and advantages over their competitors. In terms of compatibility, to implement Six Sigma your organization must already have in place a quality foundation, such as Total Quality Management, with which Six Sigma does not conflict. As for complexity, Six Sigma is complex enough to require intensive training and a long-term commitment from management before results are apparent. As regards trialability, our experience indicates that companies that immerse themselves in Six Sigma encourage their black belts to identify small, visible projects. Projects are kept small and visible so that the methodology demonstrably delivers what is claimed of it and produces success stories; there is nothing better than your own success story. Finally, in terms of observability, early projects are very carefully screened and monitored for improvement and are used to convince others in the organization.
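Because adoption hinges on these five perceived attributes, a deployment team may find it useful to score them explicitly before launch. The sketch below is purely illustrative and is not part of the diffusion literature or of this series' prescriptions: the 1-to-5 rubric, the equal weighting, and the reverse scoring of complexity are all assumptions of ours.

```python
# A minimal, illustrative sketch: self-assessing the five perceived
# attributes of an innovation before a Six Sigma launch. The 1-5 rubric,
# equal weighting, and reverse scoring of complexity are assumptions,
# not part of this volume's methodology.
ATTRIBUTES = ("relative advantage", "compatibility", "complexity",
              "trialability", "observability")


def adoption_readiness(scores: dict) -> float:
    """Average the five perceived-attribute scores (1 = weak, 5 = strong).

    Complexity is reverse-scored, since higher perceived complexity
    slows adoption.
    """
    total = 0
    for name in ATTRIBUTES:
        s = scores[name]
        total += (6 - s) if name == "complexity" else s
    return total / len(ATTRIBUTES)


# Example: strong track record and a TQM foundation, but seen as complex.
print(adoption_readiness({
    "relative advantage": 5, "compatibility": 4, "complexity": 4,
    "trialability": 3, "observability": 4,
}))  # prints 3.6 on the 1-5 scale
```

A low overall score, or a low score on any single attribute, points to where the communication effort described above should concentrate.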
THE PROCESS OF SIX SIGMA DIFFUSION IN THE ORGANIZATION There is nothing more difficult to plan, more doubtful of success, or more dangerous to manage than the creation of a new organizational order. “Whenever his enemies have occasion to attack the innovator they do so with the passion of partisans, while
the others defend him sluggishly so that the innovator and his party alike are vulnerable." These words were written in 1513 by N. Machiavelli (The Prince), yet they are quite appropriate for today's organizational cultures — especially in the Six Sigma diffusion process. To implement the Six Sigma methodology, we must understand that implementation is itself a process. The process is made up of seven steps:
• Initiation. Management investigates whether or not it should adopt the methodology.
• Agenda setting. Management discusses the relevancy of internal problems and the benefits of the Six Sigma methodology. Another way of looking at this is through a trade-off analysis of the advantages and disadvantages that the Six Sigma methodology can offer the organization beyond the current system.
• Matching. Management agrees to give Six Sigma a try with specific objectives that fit the organization. This is the critical point, because it is here that the decision to adopt takes place.
• Implementation. All the events, actions, and decisions involved in installing the Six Sigma system in the organization are considered.
• Redefining and restructuring. This is also a critical point in the implementation process, because modifications may be made and organizational structures may be changed to accommodate the changes expected as a result of Six Sigma.
• Clarifying. This is the stage in the implementation process where more precise process definitions are needed for improvement. For example, a classic clarifying issue is the notion of understanding the "operational definition" and the "true" SIPOC model.
• Routinizing. This is the stage of a mature organization using the Six Sigma methodology as an ongoing activity throughout the organization for major problems and design issues.

Our focus in this volume is the actual implementation process. Implementation in the context of Six Sigma is an issue of training, which is one of the most important characteristics distinguishing Six Sigma from any other program. What makes it special is both the amount of time required and the financial outlays associated with preparing an organization to tackle problems of improvement. Therefore, in this volume, we are going to discuss how to maximize learning using some of the latest theories and approaches of instructional design; in addition, we will present comprehensive requirements and outlines for appropriate and applicable training for each level of Six Sigma. Finally, we will also discuss the application of specific tools in the DMAIC as well as in the DCOV models.
REFERENCE

Rogers, E. M. and Kincaid, D. L. (1981). Communication Networks: Toward a New Paradigm for Research. Free Press, New York.
1 Understanding the Learner and Instruction

EXPECTATIONS FOR PARTICIPANTS
Education is the single greatest investment you will ever make in implementing the Six Sigma methodology. This education will give you the potential to alter your perceptions, thinking, and behaviors; it may also empower you to choose work and interests that will add meaning to your career as well as to the organization's culture. Like all endeavors paying a high dividend, the personal cost of attaining higher learning is considerable. Every participant must contribute time, resources, and energy. The following recommendations are intended to help you maximize your investment in this lengthy process.
PREPARE FOR SUCCESSFUL LEARNING

Approach learning experiences with an open mind, set challenging goals, and monitor your progress.
• Familiarize yourself with the goals and objectives of your specific program.
• Set challenging goals for your own learning:
  • Create a personalized study plan for your training program and for the individual material of each course in which you enroll.
  • Periodically assess progress using self, instructor, and peer feedback (take advantage of other similar courses that may help in your training, for example, DOE, PowerPoint presentations, statistical software, and so on).
  • Schedule frequent, self-paced study sessions with fellow participants or the instructor — or even the champion or the master black belt. They are there to help you. Use them!

Take a personal approach to learning
Reflect on your own thinking, learning, and prior experiences. Relate outside activities to course material and training activities. Analyze how new information relates to existing knowledge. Clarify your thinking and the assimilation of new knowledge by asking your instructors questions and actively participating in classroom discussions, online chat-rooms, or e-mail discussion lists — if available.
Seek and understand scholarly research
• Use libraries and other information sources, including the organization's library. This is not as bad as it sounds. Sometimes we do have to do some kind of research to find information about our customers, competitors, and the market and, of course, to track down technical information.
• Develop proficiency in the use of library research databases and especially your own organization's databases for things gone wrong, things gone right, warranty, things learned, and so on.
• Acquire the ability to critically assess the quality and validity of the information sources you use.
• Actively integrate scholarly knowledge and research evidence into training discussions and course assignments; after all, one of the objectives in Six Sigma is to introduce new tools, when appropriate and applicable, to solve problems.
• Learn how to compose a research paper or complete a research project, including selection of appropriate topics and resources, a literature review, and the proper citation of references. This can be very helpful if you are assigned to a DFSS project and are trying to identify the "ideal function."
PREPARE FOR EACH TRAINING COURSE

Know the rules
• Review the objectives for the course and any policies regarding class participation, attendance, overdue assignments, and make-up project assignments.
• Ask how much work will be expected of you (e.g., in and out of class, assignments, projects, and so on) and arrange your work/study schedule accordingly.

Know course materials
• Take advantage of periodic classroom reviews of previously covered content.
• Seek additional learning resources to fill any knowledge gaps and expand your understanding of course content.
• Realize that meeting course objectives often entails knowledge of material not directly mentioned by the course instructor or included in course materials, for example, DOE, statistical software skills, and other specific skills that may be required to pursue Six Sigma.
Build effective working relationships with your instructors, other black belts, master black belts, and champion, as well as other participants
• View your instructors as "facilitators," offering guidance and feedback on your personal learning process.
• Seek additional contact and communication with your instructors, other black belts, master black belts, and champion, as well as other participants, to enrich the learning experience.
• Be tolerant of opposing views and treat others with respect and civility.
• Seek support and advice from your instructors, peers, other black belts, master black belts, and mentors.
ASSUME AN ACTIVE ROLE IN THE LEARNING ENVIRONMENT

Bring awareness and a sense of purpose:
• Expect to earn your "grade."
• Attend all class sessions, including all scheduled activities. Try to minimize, if not avoid, all double scheduling during the training hours.
• Maintain compliance with any and all project deadlines.
• Meet expectations regarding project integrity (cost, measurement, capability, and the like are some of the issues on which we all like to cut corners).

Bring knowledge, perspectives, and interest:
• Actively participate in class activities.
• Complete all required reading assignments prior to each class meeting and read suggested material.
• Expect your ideas to be challenged and prepare to support them with facts, research evidence, and expert judgment, whether discussing concepts online, asynchronously, or in the classroom.

Participants and the project assignment:
• Meeting recommended deadlines for completing required assignments and projects.
• Participating in all scheduled virtual or asynchronous classroom discussions or working directly with the instructor to negotiate a suitable alternative.
• Taking advantage of optional learning activities, suggested readings, and opportunities for informal virtual or asynchronous communication with the course faculty and fellow students.
UNDERSTANDING ADULT LEARNERS

To be effective in any educational endeavor, one must understand the learner. However, the adult learner has quite a few idiosyncrasies that are not present with the child learner. Perhaps the most important one is the fact that the adult learner views learning as a means to an immediate end. In other words, the adult learner wants to learn things as they pertain to his work "right now." Learning is more application-driven than theoretical. As a consequence, in this part of the book we are going to provide a very general overview of some of the issues and concerns in understanding the learner, the material, and, above all, the instructional process. For more information on adult learners, see Wlodkowski (1985), Cross (1981), Wonder and Donovan (1984), Knox (1986), and Brookfield (1986).
TO START WITH, WE MUST RECOGNIZE THAT ADULTS ARE INTERESTED IN
• Enhancing proficiency at a given task (work-related)
• Development and learning — recognition that different learners learn at different paces and with different methods (learning style, change events, responding to learner diversity, occasions for new learning)
• Influencing participation by impulsive questioning and answering

The instructor or facilitator, therefore, must make sure that the following are observed at all times, so that learning may be enhanced:
• Respect
• Reasons
• Options
• Making learning relevant to their experiences
Every one of these items may contribute to learning; however, in specific terms, all these may be derived through appropriate and applicable tasks in the following:
• Procedures presentations
• Active learning
• Meaning
• Variety
• Stages of the program
• Affective and cognitive elements
• Interpersonal relationships
• Past and future purposes
• Support and challenge
• Models
• Self-direction for learners
• Feedback
• Flexibility
This is not as easy as it sounds. However, if one were to use the principles of Instructional Design, the instruction becomes very systematic and productive. The idea of Instructional Design is based fundamentally on the SKA model. The SKA model focuses on three areas:
• Skill (Have you…?)
• Knowledge (Do you…?)
• Ability (Can you…?)

Therefore, to bring out the "best" in a participant, the instructor or facilitator must apply managerial skills in the instruction itself. Managerial skills are known as events of instruction (Gagne and Briggs, 1979). The more these events are understood by the instructor or facilitator, the better the instruction, the better the comprehension of the participant, and the more effective the overall training. The events are:
• Gaining attention — reception of patterns of neural impulses
• Informing the learner of the objective — activating a process of executive control
• Stimulating recall of prerequisite learning — accessing working memory
• Presenting the stimulus material — emphasizing features for selective perception
• Providing "learning guidance" — encoding material semantically
• Eliciting performance — activating a response organization
• Providing feedback about performance correctness — establishing reinforcement
• Assessing performance — activating retrieval; making reinforcement possible
• Enhancing retention and transfer — providing cues and strategies for retrieval

An example of how these nine events may be used as part of the instruction is shown in Table 1.1. This table associates the nine events with the five basic types of learned capabilities. For more information on the conditions of learning, see Gagne (1977) and Travers (1982). In the case of Six Sigma training, all these play an important role, since the people who are undergoing the training may have different experiences and certainly different backgrounds as well as education. Because of these differences, the educational/training implications must be focused on three areas:
• Planning for learning
• Managing learning
• Instructing

In the course of Six Sigma training, it will be necessary to consider at least two methods of instruction. The first is the group presentation in which a facilitator or
TABLE 1.1 Instructional events as they relate to the five types of learned capability

Gain attention
  All five capabilities: introduce stimulus change; variations in sensory mode

Informing learner of objectives
  Intellectual skill: provide description and example of the performance to be expected
  Cognitive skill: clarify the general nature of the solution expected
  Information: indicate the kind of verbal question to be answered
  Attitude: provide example of the kind of action choice aimed for
  Motor skill: provide a demonstration of the performance to be expected

Stimulating recall of prerequisites
  Intellectual skill: stimulate recall of subordinate concepts and rules
  Cognitive skill: stimulate recall of task strategies and associated intellectual skills
  Information: stimulate recall of context of organized information
  Attitude: stimulate recall of relevant information, skills, and human model identification
  Motor skill: stimulate recall of executive subroutine and part skills

Presenting the stimulus material
  Intellectual skill: present examples of concept or rule
  Cognitive skill: present novel problems
  Information: present information in propositional form
  Attitude: present human model, demonstrating choice of personal action
  Motor skill: provide external stimuli for performance, including tools or implements

Providing learning guidance
  Intellectual skill: provide verbal cues to proper combining sequence
  Cognitive skill: provide prompts and hints to novel solution
  Information: provide verbal links to a larger meaningful context
  Attitude: provide for observation of model's choice of action, and of reinforcement received by model
  Motor skill: provide practice with feedback of performance achievement

Eliciting the performance
  Intellectual skill: ask learner to apply rule or concept to new examples
  Cognitive skill: ask for problem solution
  Information: ask for information in paraphrase, or in learner's own words
  Attitude: ask learner to indicate choices of action in real or simulated situations
  Motor skill: ask for execution of the performance

Providing feedback
  Intellectual skill: confirm correctness of rule or concept application
  Cognitive skill: confirm originality of problem solution
  Information: confirm correctness of statement of information
  Attitude: provide direct or vicarious reinforcement of action choice
  Motor skill: provide feedback on degree of accuracy and timing of performance

Assessing performance
  Intellectual skill: learner demonstrates application of concept or rule
  Cognitive skill: learner originates a novel solution
  Information: learner restates information in paraphrased form
  Attitude: learner makes desired choice of personal action in real or simulated situation
  Motor skill: learner executes performance of total skill

Enhancing retention and transfer
  Intellectual skill: provide spaced reviews including a variety of examples
  Cognitive skill: provide occasions for a variety of novel problem solutions
  Information: provide verbal links to additional complexes of information
  Attitude: provide additional varied situations for selected choice of action
  Motor skill: learner continues skill practice
an instructor leads the presentation of the material through small or large group activities by way of lectures, discussions, simulations, etc. The second, and somewhat less frequent in Six Sigma, is the individualized approach to instruction. Here the participant may work independently at his own pace. This approach is learner-centered and learner-determined, based on specific situations. Typical delivery systems and related information are shown in Table 1.2.
PRINCIPLES OF INSTRUCTIONAL DESIGN

Now that we have reviewed some of the instruction systems, let us examine the actual instructional design process. For extensive information on instructional design, see Briggs (1977), Richey (1986), Reigeluth (1987, 1983), Seels and Richey (1994), and Dick and Carey (1978).
• Instruction is a human undertaking whose purpose is to help people learn.
• Instruction is a set of events that affect learners in such a way that learning is facilitated.

Therefore, instruction must be planned to accomplish:
• The aiding of learning
• Immediate and long-range goals (we are focusing on transferring knowledge)
• Human development (no one is educationally disadvantaged)
• A systems approach (from analysis of needs or goals through evaluation)
STAGES OR PHASES OF DESIGN

As with anything else, there is a process that one must follow to facilitate learning. That process is called instructional design, and it has ten phases. They are:
• Front-end analysis
• Task analysis
• Product survey
• Design instruction
• Design of job aids
• Development of material
• Evaluation
• Delivery of materials
• On-the-job application
• Evaluation
We are going to address all of them, with the exception of phases 3 and 5. The reason for this is that in both cases the application to Six Sigma is very straightforward and contains no bottlenecks or unusual problems. The product is already
known, and the instructional aids simply consist of handouts of statistical formulas with their application or perhaps special forms. In all cases, the instructional design may also be subdivided into levels, such as system, course, and lesson, with each one having different requirements and instructional characteristics. System level may be interpreted as a curriculum. In Six Sigma training, the system is the entire methodology — from the requirements of the executive, to the champion, to the master black belt, to the black belt, and to the green belt. The requirements under the system are to develop • • • • • • • • •
Analysis of need, goals, and priorities Analysis of resources, constraints, and alternatives to the delivery system Determination of scope and sequence of curriculum and courses Delivery system design Trainer preparation Formative evaluation Pilot and revision Summative evaluation Installation and diffusion
Course level may be interpreted as the specific training of the executive, champion, master black belt, black belt, and green belt. Course-level requirements are
• Determining course structure and sequence
• Analysis of course objectives

Lesson level may be interpreted as the content of each course broken down on a daily basis. Lesson-level requirements are
Defining performance objectives Preparing lesson plans (modules) Developing and selecting materials and media Assessing participant performance
CONDITIONS OF LEARNING
In order for anyone to learn, the instructor and/or facilitator must be aware of the "learning process." Some of the issues here are the following.

The association tradition associates learning with known items. This may be done through a) continuity (building on old knowledge or new knowledge systematically) or b) repetition of facts and items of interest. Repetition does not have to be boring and devoid of context. Rather, it may be conducted as a summary, review, Q and A, or in several other formats.

Trial and error is the most common and yet most inefficient form of instruction. Under this method, we try things as we go and constantly evaluate the outcome.
TABLE 1.2 Typical delivery systems and related information

Group instruction
  Possible media: books and other reading materials; charts, chalkboard, displays; teacher; guest speakers; real objects, models; parts; overheads; movies/videos
  Learner activity: reading; listening; observing demonstrations; manipulating objects; visits; participating in simulation(s)
  Methods, teacher roles: lectures; discussions; demonstrations; oral quizzing; corrects papers; evaluates results; prepares reports; field trips

Individualized instruction
  Possible media: programmed texts; books; modules; audio-visual devices
  Learner activity: home study; exercises and projects; reading; responding; self-pacing; self-checking
  Methods, teacher roles: placement testing; diagnostic testing; monitors progress; remedial instruction

Small group
  Possible media: books; exercises; simulation activities; slide/tape presentations; sound recordings
  Learner activity: reading to each other; performing exercises; performing simulations; discussion; watching presentations; completing team assignments
  Methods, teacher roles: assesses level of participant progress; forms small groups for specific lessons; evaluates exercise setups and results; assesses overall progress; keeps records; introduces new projects to group(s)

Independent study
  Possible media: books; libraries; reading lists; laboratories; learning centers and associated equipment and materials
  Learner activity: reading and independent study; conducting library searches; reviewing lab experiments; writing papers; conferring with instructor
  Methods, teacher roles: advisor performs guidance function; assists in locating and using resources; suggests or assigns tasks; confers with learner upon request or as scheduled; conducts evaluations of progress

Work related
  Possible media: work at specified locations, involving a variety of persons and equipment as media; any or all of the above for the study portion of the program
  Learner activity: any assigned work function, under supervision; any or all of the above for the study portion of the program
  Methods, teacher roles: coordinates work assignment with study portion of program

Home study
  Possible media: books; modules; references
  Learner activity: home study by reading, completion of exercises; communications with instructor
  Methods, teacher roles: assigns materials and exercises and evaluates exercises; may prepare and mail supplementary materials
Our evaluation of achievement is based on positive reinforcement in accordance with the expectations and objectives we have set.

Conditioned response is a very common approach to instructing for "learning," as it appears simple and harmless. But it is neither. It presupposes magical powers on the part of the instructor or facilitator to a) interpret voluntary and involuntary responses from the participants and b) figure out the learner's "insight." This implies that the instructor or facilitator knows the participant's optimal learning style. The possibilities are the holistic (Gestalt learning) approach and the participant's prior learning ability and benchmark. While both approaches are acceptable, the instructor or facilitator must be cognizant of both and use them as necessary. Remember, different people learn differently for many reasons. The second item of concern for conditioned response is the presumption that the participant learns through verbal associates (memorization). This may be a very serious problem (in fact, a trap) in the Six Sigma methodology, especially since many formulas must be learned. We recommend that the instructor not rely on memorization but employ repetition exercises (a worked example of one such exercise follows the list below). This is a much better approach, and in the long run the participant is better equipped to transfer the learned information outside of the classroom environment.

Miscellaneous: the primary concern here is motivation. The instructor's (external condition) as well as the participant's (internal condition) motivation has a lot to do with learning. For example, if the "learning event" is the central focus of our experience, then the external factors will be a) continuity, or the temporal arrangement of conditions, b) repetition, and c) reinforcement, or the arrangement of contingencies. Note that none of these factors is learner-dependent. In fact, each one is dependent on the instructor or facilitator. In contrast, internal factors are dependent on the learner and represent a) factual formation, i.e., they may be presented or recalled from prior learning, b) intellectual skills, in that they are recalled from prior learning, and c) strategies, i.e., they are self-activated from prior practice and/or experience. Note that none of these is instructor-dependent. The learner associates previous experience and learning with current material. The richer the experiences, the more pleasant and value-added the current material and knowledge.

So why do we bother with the above items? What is their relation to Six Sigma training? It turns out that the above conditions of learning are inherently important in Six Sigma, because the Six Sigma methodology provides some very challenging items for the instructor and facilitator in the areas of "learning capability." They are:
1. Intellectual skills (demonstrating symbol use)
   • Discrimination (distinguish)
   • Concrete concept (spatial relation)
   • Defined concept (using a definition, clarification occurs)
   • Higher-order rule (combination)
2. Cognitive strategy (efficient use of recalling or solving a problem)
3. Verbal information (recall)
4. Motor skill (action)
5. Attitude
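As an illustration of the kind of formula-based repetition exercise recommended above, consider drilling the conversion from raw defect counts to defects per million opportunities (DPMO) and an approximate sigma level. The sketch below is ours, not a prescribed exercise from this series; the invoice numbers are made up, and the conventional 1.5-sigma shift is an assumption the instructor may choose to drop.

```python
# A minimal sketch of a repetition exercise for formula practice:
# converting defect counts to DPMO and an approximate sigma level.
# The example data and the 1.5-sigma shift convention are illustrative
# assumptions, not prescriptions from this volume.
from statistics import NormalDist


def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000


def sigma_level(dpmo_value: float, shift: float = 1.5) -> float:
    """Approximate (shifted) sigma level implied by a DPMO figure."""
    yield_fraction = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + shift


# Drill: 27 defects found in 500 invoices, each with 10 error opportunities.
d = dpmo(defects=27, units=500, opportunities_per_unit=10)
print(f"DPMO = {d:.0f}, approximate sigma level = {sigma_level(d):.2f}")
```

Worked repeatedly with fresh numbers — in summaries, reviews, or Q and A, as suggested earlier — such a drill builds fluency without rote memorization of the formula.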
TABLE 1.3
Standard verbs to describe learning capabilities

Intellectual skills:
• Discrimination. Verb: DISCRIMINATES. Example: discriminates, by matching, the French sounds of "u" and "ou".
• Concrete concept. Verb: IDENTIFIES. Example: identifies, by naming, the root, leaf, and stem of representative plants.
• Defined concept. Verb: CLASSIFIES. Example: classifies, by using a definition, the concept "family".
• Rule. Verb: DEMONSTRATES. Example: demonstrates, by solving verbally stated examples, the addition of positive and negative numbers.
• Higher-order rule (problem solving). Verb: GENERATES. Example: generates, by synthesizing applicable rules, a paragraph describing a person's actions in a situation of fear.
Cognitive strategy. Verb: ORIGINATES. Example: originates a solution to the reduction of air pollution by applying a model of gaseous diffusion.
Information. Verb: STATES. Example: states orally the major issues in the development of the Six Sigma methodology.
Motor skill. Verb: EXECUTES. Example: executes backing a car into a driveway.
Attitude. Verb: CHOOSES. Example: chooses playing golf as a leisure activity.
Table 1.3 provides some very simple examples of standard verbs to describe learning capabilities.

A motivating or enthusiastic instructor plays a major role in the learning ability of the participant. Some characteristics and skills of motivating instructors are:
• They know something beneficial for adults
• They know the subject matter well
• They are prepared to convey their knowledge through an instructional process
• They have a realistic understanding of learners' needs and expectations for what they are offering them to learn
• They have adapted the instruction to the learners' level of experience and skill development
• They continually consider the learners' perspective
• They care about and value what they teach, both for themselves as well as for the learners
• This commitment is expressed in the instruction with appropriate degrees of emotion, animation, and energy:
  • rapid, uplifting, varied vocal delivery
  • dancing, wide-open eyes
  • frequent, demonstrative gestures
  • varied, dramatic body movements
  • varied, emotive facial expressions
  • selection of varied words, especially adjectives
  • ready, animated acceptance of ideas and feelings
  • exuberant overall energy level

The benefits of a motivating instructor or facilitator are demonstrated in the instruction process through the creation of a positive attitude. A motivating instructor:
• Shares something of value with her learners
• Concretely indicates her cooperative intentions to help adults learn
• Reflects, to the degree authentically possible, the language, perspective, and attitudes of her learners
• Gives her rationale when issuing mandatory assignments or training requirements
• Allows for introductions
• Eliminates or minimizes any negative conditions that surround the subject
• Ensures successful learning
• Makes the first experience with the subject as positive as possible
• Positively confronts the possible erroneous beliefs, expectations, and assumptions that may underlie a negative learner attitude
• Associates the learner with other learners who are enthusiastic about the subject
• Encourages the learner
• Promotes the learner's personal control of the learning context
• Helps learners to attribute their success to their ability and effort
• Helps learners to understand that effort and persistence can overcome any obstacles when learning tasks are suitable to their ability
• Makes the learning goal as clear as possible
• Makes evaluation criteria as clear as possible
• Uses models learners can relate to when demonstrating expected learning
• Announces the expected amount of time needed for study and practice for successful learning
• Uses goal-setting methods
• Uses contracting methods
DESIRABLE SEQUENCE CHARACTERISTICS ASSOCIATED WITH FIVE TYPES OF LEARNING OUTCOME

We have been discussing the learner and some of the issues and concerns of the instructional process. In Table 1.4, we summarize some of the desirable sequence characteristics, so that the instruction may be fruitful and appreciated by the participant. In conjunction with the desirable sequence, there is also a decision cycle of training that must be developed. In the case of Six Sigma, the decision is pretty straightforward, but let us summarize some key points of the general process.
TABLE 1.4
Desirable sequence characteristics associated with five types of learning outcome

Motor skills. Major principle of sequencing: provide intensive practice on skills of critical importance and practice on the total skill. Related sequence factor: first, learn the executive routine (rule).

Verbal information. Major principle of sequencing: for major subtopics, order of presentation is not important; individual facts should be preceded or accompanied by meaningful context. Related sequence factor: prior learning of necessary intellectual skills involved in reading, listening, etc. is usually assumed.

Intellectual skills. Major principle of sequencing: presentation of the learning situation for each new skill should be preceded by prior mastery of subordinate skills. Related sequence factor: information relevant to the learning of each new skill should be previously learned or presented in the instructions.

Attitudes. Major principle of sequencing: establishment of respect for the source as an initial step; choice situations should be preceded by mastery of any skills involved in these choices. Related sequence factor: information relevant to choice behavior should be previously learned or presented in the instructions.

Cognitive strategies. Major principle of sequencing: problem situations should contain previously acquired intellectual skills. Related sequence factor: information relevant to the solution of problems should be previously learned or presented in the instructions.
Table 1.5 shows the decision cycle, and Table 1.6 shows the different routes of payoff to the organization. These two tables are shown here so that the reader may appreciate the complexity of deciding what training is proper and what the payoff is to the organization. Under Six Sigma specifically, the decision is generally made by top executives in the organization, and the payoff, it is hoped, will be demonstrated in increased customer satisfaction and profitability for the organization.

Perhaps one of the most contested topics in training over the last several years has been the effectiveness of training. That is, as training progresses and draws to a close, the question arises of whether the training is meeting or has met its objectives, whether it was beneficial, and to what degree. There are two basic evaluations. The first is the formative evaluation, an ongoing evaluation conducted during the development of the training to ensure that everything fulfills the objectives. The second is the summative evaluation, which is performed at the end of the training and focuses on whether or not the objectives were met.

Obviously, there are many ways to evaluate training, but the classic approach is Kirkpatrick's Hierarchy of Evaluation model. What Kirkpatrick did was separate the various outputs of training and evaluate them separately. Level 1 is the weakest, for it evaluates based on perceptions of "likes" and "dislikes"; in other words, it focuses on learner reactions. Level 2 focuses on learning, and level
TABLE 1.5 Decision cycle The logical steps
Some key decisions
Goals for HRD that will be worthwhile to the organization are established
Is there a worthwhile problem or opportunity to be addressed? Is the problem worth solving or addressing? What organizational benefits could HRD produce? Can HRD help? Is HRD the best solution? Who should receive HRD? What SKA are needed? What learning processes will best produce needed SKA? Is a design already available? Can an effective design be created? Is it likely to work? What is really happening? Has the design been installed as planned? Is it working? What problems are occurring? What changes should be made? Who has and has not acquired SKA? What else was learned? Are SKA sufficient to enable on-the-job usage? Have HRD effects lasted? Who is using new SKA? Which SKA are and are not being used? How are SKA being used? How well are SKA being used? What benefits are occurring? What benefits are not occurring? Are any problems occurring because of new SKA use or nonuse? Should HRD be continued? Should less be done? More? Are revisions needed? Was it worth it?
A workable program design is created
A program design is implemented and made to work
Recipients exit with new SKA; enough HRD has taken place Recipients use new SKA on the job or in personal life; reactions to HRD are sustained
Usage of SKA benefits the organization; original HRD needs are sufficiently diminished
3 focuses on job application. Level 4, on the other hand, is the strongest and most difficult to perform; it is also the most valuable. Evaluation at this level is based on the objectives of the training in relation to results. It focuses on the following questions:
• Should I conduct a level 4 evaluation? (An issue of cost and effectiveness.)
• Is a level 4 study feasible? (How would I go about validating and correlating the data of training and implemented benefit?)
• Which design should I use? (Specifically, what do I want to measure?)
• What will the training cost?
• How will I analyze the data?
• How will I report the results?
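On the cost and payoff questions in particular, a simple worked example helps; the dollar figures below are hypothetical, not taken from the text. The usual level 4 return-on-investment calculation is

$$\mathrm{ROI} = \frac{\text{net benefits} - \text{training costs}}{\text{training costs}} \times 100\%$$

so a training wave that costs $150,000 and yields $480,000 in verified project savings returns (480,000 - 150,000) / 150,000 x 100% = 220%.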
TABLE 1.6
Different routes to organizational payoff

• Safety training. New SKA or reactions: awareness of and skill in following safety procedures. Behavior change: greater adherence to procedures. Benefits to organization: reduced injuries and lost time.
• Conflict resolution. New SKA or reactions: skill and knowledge in methods. Behavior change: use of techniques when called for. Benefits to organization: reduced conflict in the workplace; increased productivity, morale, and commitment to the organization.
• Six Sigma training (black belts, green belts). New SKA or reactions: appropriate level of skill and knowledge in tools and methodology. Behavior change: use of techniques when called for. Benefits to organization: improved customer satisfaction and profitability.
• FMEA. New SKA or reactions: skill and knowledge in the construction and analysis of the FMEA. Behavior change: use of the FMEA as a preventive tool to improve design and process. Benefits to organization: reduced design and process defects.
• Mistake proofing. New SKA or reactions: skill and knowledge in the method of mistake proofing. Behavior change: use of mistake-proofing approaches to eliminate defects. Benefits to organization: reduction of waste through appropriate mistake-proofing devices and controls.
• Project management. New SKA or reactions: skill and knowledge in the theory and application of project management. Behavior change: use of project management methodology to improve budgets, delivery, and scheduling. Benefits to organization: reduced problems with scheduling, budget, and delivery of specific projects and/or products.
A summary of that classification is shown in Table 1.7. Kirkpatrick's evaluation is also known as the four-level evaluation model. In the case of Six Sigma, the outputs would be:

Level 1: Were the participants satisfied with the training (material, instructor, environment, expectations, etc.)?
Level 2: Can the participants demonstrate knowledge of what they learned? (In the Six Sigma methodology, this is measured by progress toward the objective.)
Level 3: Are the skills of the Six Sigma methodology used beyond the specific assigned projects?
Level 4: Are customer satisfaction and profitability better after the training? (For most training, this level is the most difficult to measure. For Six Sigma, however, it should be very easy, since this correlation was the driving force of the project from the beginning.)
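As a minimal sketch of how these four outputs might be tracked for a single training wave (all names and figures below are hypothetical, not from the text), a simple record per wave is enough:

    # A minimal sketch of an evaluation record covering the four Kirkpatrick
    # levels for one Six Sigma training wave. All entries are hypothetical.

    wave = {
        "wave": "Black Belt Wave 1",
        "level_1_reaction": "End-of-course survey: 4.3/5 average satisfaction",
        "level_2_learning": "18 of 20 participants demonstrated the methodology",
        "level_3_behavior": "14 participants applying tools beyond their project",
        "level_4_results": None,  # project savings not yet verified
    }

    # Flag any level still lacking evidence before reporting to the champions.
    missing = [k for k, v in wave.items() if k.startswith("level") and v is None]
    print("Evidence still needed for:", missing or "none")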
TABLE 1.7
Kirkpatrick's evaluation with several examples

Level 4: Results (community or organizational impact). Job training: Does output rise? Nutrition education: Do hospital admissions fall? Adult literacy: Does public library usage increase?
Level 3: Behavior (transference of skills). Job training: Are skills used in work? Nutrition education: Do food purchasing habits change? Adult literacy: Do learners read at home?
Level 2: Learning (demonstration of learning). Job training: Do learners demonstrate their acquisition of skills? Nutrition education: Do participants show knowledge of a good diet? Adult literacy: Do learners show mastery of reading and writing skills?
Level 1: Reaction (general evaluation). Job training: Do learners express their satisfaction with the overall program? Nutrition education: Do participants express their satisfaction with the program? Adult literacy: Do participants of the training express their satisfaction with the program?
REFERENCES

Briggs, L. J. (Ed.) (1977). Instructional design: principles and applications. Englewood Cliffs, NJ: Educational Technology Publications.
Brookfield, S. D. (1986). Understanding and facilitating adult learning. San Francisco: Jossey-Bass Publishers.
Cross, P. (1981). Adults as learners. San Francisco: Jossey-Bass Publishers.
Dick, W. and L. Carey (1978). The systematic design of instruction. Glenview, IL: Scott, Foresman and Company.
Gagne, R. M. (1977). The conditions of learning. 3rd ed. New York: Holt, Rinehart and Winston.
Gagne, R. M. and L. J. Briggs (1979). Principles of instructional design. 2nd ed. New York: Holt, Rinehart and Winston.
Knox, A. B. (1986). Helping adults learn. San Francisco: Jossey-Bass Publishers.
Reigeluth, C. M. (Ed.) (1983). Instructional design theories and models: an overview of their current status. Hillsdale, NJ: Lawrence Erlbaum Associates.
Reigeluth, C. M. (Ed.) (1987). Instructional theories in action: lessons illustrating selected theories and models. Hillsdale, NJ: Lawrence Erlbaum Associates.
Richey, R. (1986). The theoretical and conceptual bases of instructional design. London: Kogan Page.
Seels, B. B. and R. Richey (1994). Instructional technology: the definition and domains of the field. Washington, D.C.: Association for Educational Communications and Technology.
Travers, R. M. W. (1982). Essentials of learning: the new cognitive learning for students of education. 5th ed. New York: Macmillan Publishing Co.
Wlodkowski, R. J. (1985). Enhancing adult motivation to learn. San Francisco: Jossey-Bass Publishers.
Wonder, J. and P. Donovan (1984). Whole brain thinking: working from both sides of the brain to achieve peak job performance. New York: William Morrow and Company.
2 Front-End Analysis

INTRODUCTION
The purpose of front-end analysis (FEA) is to find the most effective way to stimulate needed individual or organizational change. FEA is the first step in the instructional systems design (ISD) process, because it is critically important to become clearly aware of three major things:
• WHAT problem to solve (in the case of a problem-solving FEA)
• WHAT new goals or directions to set (in the case of a planning FEA)
• HOW to achieve each of these most effectively

Traditionally, ISD has been applied to problems or gaps in performance that need solving. Performance gaps arise from differences between actual and desired performance, and the intent is always to close the gap between the two. FEA helps to locate any such gaps. Your goal should be to reach desired performance levels: first identify where you are, then identify where you need to go, and finally decide how to get there. FEA addresses all these issues.

FEA does not, however, assume the gap or problem is related to "training" or the solution related to "instruction." In fact, when using FEA you may find the problem is unrelated to instruction! Only when your problem can be solved using instruction or job aids will you design and develop instructional materials. FEA will clearly identify such cases for you, saving you the costs of developing unnecessary instruction.

Generally, there are two different types of FEA. The first, and most common, is the problem-solving FEA. It deals with finding performance gaps and their causes and solutions. It is a focused, short-term problem-solving approach used to isolate and address gaps that are the source of organizational problems. As such, it is very similar to the early stages of the Global Problem Solving Process (see Volume 2 of this series).

FEA can also be used to identify and plan for completely new organizational goals and directions, in contrast to merely fixing existing systems. The planning FEA takes a systems approach to bringing about organizational change and in that way is similar to process improvement. It focuses on improvement of basically sound systems, rather than on short-term problem resolution. As such, the steps in the planning FEA duplicate the first steps in process improvement: 1) identify the opportunity and 2) define the scope (including stakeholders).
In the case of Six Sigma, we are more interested in the second approach, since we are about to embark on a completely new organizational directive. Thus, we are interested in determining our needs early so as to plan for future opportunities. The need can be addressed by using either the problem-solving FEA steps that follow or the process improvement methodology.

Since you are at the very beginning of the ISD process for the Six Sigma methodology, no formal preparation is required at this point. However, you do need to make a commitment to following systematic procedures. This is a definite departure from the more unstructured approach serving many instructional programs. Prepare yourself by eliminating any assumptions you may have about the problem, and use FEA to find out whether instruction is the answer. In FEA, a commitment to following systematic procedures means not assuming a training problem but attempting to verify what type of problem really exists.
PROBLEM-SOLVING FRONT-END ANALYSIS

The steps for conducting a problem-solving FEA are: 1) identify the problem, 2) identify potential and actual causes, 3) identify potential solutions, 4) choose the best solutions, and 5) report your findings.

1) Identify the problem: in a problem-solving FEA, you are trying to locate and remedy gaps between actual and desired performance. Collect data on gaps using the techniques described in Table 2.1. Be sure to gather information from a variety of perspectives (e.g., job incumbents, supervisors, customers, etc.) to limit collection of biased information. In addition, examine current operations and current performance levels and define desired performance levels. The difference between the two is your gap or "problem" area. (NOTE: desired performance levels should be based on similar, "best-in-class" in-house operations, or benchmarked against similar operations outside the company.)

In defining the problem, aim to be as specific as possible. Focus on the who, what, where, and how often of the problem, and assign dollar values. For example, instead of stating "parts are being rejected too often" as the problem, be more specific:

PROBLEM: Rejection of parts from Line 5. This problem has occurred daily over the past 6 weeks, costing an estimated $7000 per week.

Notice that a dollar value has been assigned to the problem. In addition, you know where the problem is, what is happening, and how often. This allows you to assess whether the problem is worth further analysis. When defining the problem, also consider the following:
• Have you thoroughly identified all gaps in performance?
• What are the specific differences between actual and desired performance?
TABLE 2.1
Data collection techniques

• ADVISORY GROUPS: Subject matter experts brought together to discuss various issues.
• BRAINSTORMING: Small group discussions formed to generate ideas about a particular topic. Rules of discussion include openness to each other's ideas, encouragement of far-fetched ideas, the more ideas the better, and zero negative evaluations.
• CRITICAL INCIDENT REPORTS: Reports covering events that led to a particular event or problem. These reports offer facts, as opposed to opinions, about what happened.
• DELPHI METHOD: A way of gathering information through a type of mail survey. Participants express opinions about a problem or opportunity. Opinions are collated to form a majority opinion list, which is redistributed through a series of mailings for reprioritization.
• FOCUS GROUPS: Individuals brought together to discuss a particular issue. The purpose is to discover attitudes, ideas, possible barriers, etc.
• INTERVIEWS (GROUP): Face-to-face question-and-answer discussions among a group of individuals. Group interviews cost less than individual interviews but allow for less depth in examining opinions and attitudes.
• INTERVIEWS (INDIVIDUAL): Face-to-face question-and-answer discussions using preset questions. Individual interviews allow for in-depth examination of opinions and attitudes but are costly.
• NOMINAL GROUP METHOD: Comparable to the Delphi method but in a group setting. Individuals write down and share in turn their opinions about problems, their causes, and solutions. The group rank orders opinions according to validity, etc.
• OBSERVATION: A way to examine on-the-job behaviors using a preset checklist. The observer must be given direction on who, what, and how to observe. Observations can provide a wealth of information about what is actually occurring on the job.
• QUESTIONNAIRES: A series of questions sent to a number of individuals seeking information on opinions, attitudes, facts, etc. about problems, causes, and potential solutions. Questionnaires take time to develop yet can reach many people in a short period of time. They cost less than interviews or observation but may not delve deeply into any one area. All questions must relate to the information being sought.
• Have you attempted to break down complex gaps? • Have you identified the jobs, operations, employees, etc. involved in the problem? • Have you identified problem locations? Have you determined when the problem occurs? • Do you know the problem’s impact? • Did you gather enough information from enough people to give you insight into the problem? Do you understand the culture in which the problem exists?
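To make the "assign dollar values" guidance concrete, a quick calculation shows what the Line 5 example is worth on an annual basis. The weekly figure comes from the problem statement above; the number of operating weeks is an assumption for illustration only.

    # Rough annualized cost of the Line 5 rejection problem described above.
    weekly_cost = 7_000    # estimated $ per week, from the problem statement
    weeks_per_year = 50    # assumed operating weeks per year

    annual_cost = weekly_cost * weeks_per_year
    print(f"Annualized cost if left unsolved: ${annual_cost:,}")  # $350,000

A gap of this size easily justifies the effort of a full FEA.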
TABLE 2.2
Contributing factors to problems

TECHNICAL/WORK ENVIRONMENT. Examine: tools, equipment, material, work distractions; workload distribution; temperature, illumination, ventilation; general environment; technical inputs (engineering, systems, etc.).
ORGANIZATION. Examine: standards, policies, practices, values; use of "systems" thinking; relationships (social, political, economic, employee, customer, supplier, etc.).
INDIVIDUAL. Examine: interpersonal skills (teamwork, handling personality conflicts and communication problems, flexibility, cooperation, agreement with organizational goals, etc.); skill and knowledge (knowledge of basic facts, concepts, principles, strategies, etc.).
SUPERVISION. Examine: interpersonal skills; skill and knowledge; standards, policies, procedures; management skills (objective setting, team building, leadership, time and stress management, etc.); support skills (recognition, feedback, reinforcement, modeling, mentoring, motivation).
In addition, determine if the problem is random or continuous. For example, if the problem occurs regularly, it may continue due to some cause in the organizational system; this is a continuous problem. In such cases, an FEA becomes a valuable problem-solving tool. If the problem is a one-time event, such as with a random problem, it would not be worth conducting an entire FEA.

2) Identify potential and actual causes: in this step, you need to identify the cause of the problem. What are the contributing factors to the problem? Consider technical, organizational, individual, and supervisory performance (see Table 2.2 for a list of possible contributing factors). Gather this information using the same techniques outlined above, using a variety of techniques and sources to avoid bias. (This step is similar to the who, what, why, where step in the global problem-solving process.) After listing several contributing factors, narrow the list to the most likely causes, and ask for assistance from others who understand the problem and probable causes.

For example, suppose a supervisor from one of your manufacturing plants presents you with a problem: there are too many parts being rejected. You need to define the problem further; then you can begin to identify potential and actual causes.
• Identify the problem: as you attempt to define the problem more precisely, you find the rejection problem is located on Line 5. You also find the problem has occurred over the last 6 weeks, on a daily basis; thus, it is not a random problem but a continuous one. You also find the cost of the rejection problem to be an estimated $7000 per week. Because the problem is continuous and costing large sums, you decide it is worthy of further investigation.
• Identify potential causes: at this point, you can either guess at the potential causes, ask for opinions, or begin an in-depth investigation. You choose to look in depth at what has happened and begin interviewing all those involved with the problem. You find the following:
  • The in-plant handling of raw material parts appears inadequate. Raw materials are haphazardly tossed into bins.
  • The raw material supplier does not seem to be meeting the product blueprints given to them by Purchasing.
  • The manufacturing process does not seem to support the product blueprints.
• Identify actual causes: because you have gathered "opinions" vs. facts, you now need to confirm which opinions are accurate. This will lead you to the actual causes. First, a random sample of raw materials is examined before entering Line 5; you find the raw materials are passable. Second, you find the supplier's product meets the product blueprints given to them by Purchasing; supplier products are passable. HOWEVER, reports show differences between the manufacturing process and what is required by the product blueprints. The manufacturing process must support the product blueprints if a passable product is expected; for example, machines must be tooled to meet product blueprints. You find this is not happening. You have located the actual cause.

You may need to search further, however, for "less immediate" causes. Searching further into the actual cause, you find the department creating the product blueprints (Product Engineering) and the department creating the manufacturing process blueprints (Process Engineering) do not meet or communicate on a regular basis. Thus, there is no assurance that the manufacturing process will support the product blueprints. You also find that neither department operates on a "team basis"; they are not used to working together with other departments on a consistent, proactive basis.

3) Identify potential solutions: after defining the problem and finding related causes, you now need to solve the problem. Gather potential solutions through the same data collection procedures outlined earlier (in practice, you may collect data on all these questions at the same time). Often, solutions will flow directly from knowledge of causes. For example, continuing with the earlier problem, how would you solve it? Essentially, you have two different types of causes requiring two different solutions:
• The immediate cause needing a solution is the difference between the product blueprints and the manufacturing process. Product blueprints and manufacturing processes must be compatible. You find this can be solved either by making product or manufacturing changes or by redesigning the part from scratch. You would need to perform a cost-benefit analysis on each potential solution before making a decision.
• The less immediate cause needing a solution is the communication problem between the departments (yet this must be solved if future problems are to be eliminated). Two solutions are available. First, biweekly meetings between department heads can be established. Second, each department can partake in a "team-building" instructional program; such a program would foster open communication among departments. Again, you would need to perform a cost-benefit analysis on each potential solution before making a decision.

4) Choose the best solutions: how can you solve the problem? Can the problem be solved using education or training, or is something else needed? When choosing the best solution or solutions, you need to determine the suitability of each. Are the solutions realistic and affordable? Do they match the problem? Which is best? Consider the following factors when evaluating the feasibility of the solutions using a typical cost-benefit analysis:
• Cost of the problem
• Cost of the solution
• How well the solution will solve the problem
• Whether customers and stakeholders will accept and support the solution
• Whether the solution is acceptable to the organization's culture
• Whether the cost and time required fit the available resources
• Whether there are barriers to implementing the solution, including delineation of each barrier
• Whether the solution is consistent with long-term objectives and continuous improvement
• Potential return on investment

By comparing potential solutions against these criteria, you will be using cost-benefit analysis to link your solution to strategic business goals. Frequently, instruction will not be the optimal solution. In our example, the immediate cause did not require instruction, yet the less immediate cause could require education or training. FEA can enhance the professional image of education and training: when instructional solutions are used only where supported by data, the results will be far more positive. Too often, instructional programs are "thrown" at problems when the best solution lies in another area.

Based on your cost-benefit analysis, you can then choose the best possible solution. You may find a solution is not feasible because of limited funds, timing, potential barriers, etc. When this happens, you may need to modify your chosen solutions to fit your constraints.
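To make the feasibility comparison concrete, here is a minimal sketch of ranking the candidate solutions by simple payback. The option names echo the example above, but every dollar figure is hypothetical.

    # A minimal cost-benefit sketch for ranking candidate solutions by payback.
    # All dollar figures are hypothetical; a real analysis would use validated
    # costs and savings from the organization.

    # (solution, one-time cost, expected weekly savings once implemented)
    candidates = [
        ("Retool line to match product blueprints", 60_000, 7_000),
        ("Biweekly cross-department meetings", 5_000, 2_000),
        ("Team-building instructional program", 25_000, 4_000),
    ]

    for name, cost, weekly_savings in candidates:
        payback_weeks = cost / weekly_savings
        print(f"{name}: payback in {payback_weeks:.1f} weeks")

A payback calculation like this is only one of the criteria above; acceptance, culture, and long-term fit still have to be weighed alongside it.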
TABLE 2.3
Front-end analysis report information

• INTRODUCTION: Describe how and why the FEA process began. Describe procedures used to locate and confirm gaps, causes, and solutions.
• FINDINGS: Discuss gaps identified, causes, operating consequences, personnel, jobs, and costs involved.
• POSSIBLE SOLUTIONS: Describe solutions requiring no action, action involving instruction or job aids, and action not involving instruction or job aids. Compare solutions and discuss problems associated with each solution.
• RECOMMENDED SOLUTIONS: Detail the chosen solution and how to measure or determine success. Give the rationale for the choice. Identify the population, jobs, and costs involved. Discuss the relationship to organizational objectives and benefits.
• CONTINUOUS IMPROVEMENT: Describe how the solutions will support continuous improvement objectives.
• PROJECT SCOPE AND SCHEDULE: Describe the scope of the project. Identify constraints, required resources, and estimated schedule. Identify customer and client relationships. Suggest measures or definitions of "success."
• APPENDICES: Back-up correspondence, budgets, data gathering tools, raw data, outside sources, etc.
The worst scenario would be having to develop new solutions because of constraints found in the cost-benefit analysis.

5) Report your findings: report the information from your FEA in a report that includes the information shown in Table 2.3. Proceed to the next step in the ISD process, task analysis, only if your analysis has shown instruction to be an appropriate solution. Otherwise, explore other interventions, such as organizational restructuring, organizational development, etc.

After you have completed your FEA, evaluate the quality of your efforts by using a formative evaluation. A typical checklist for such an activity is shown in Table 2.4.
TASK ANALYSIS

Task analysis is performed when trying to determine what you want out of, and what you want to put into, an instructional program. Task analysis data become the foundation for the entire ISD process.
TABLE 2.4
Front-end analysis formative evaluation checklist

ASK YOURSELF THESE QUESTIONS (answer Y or N). Each of the following questions is addressed under major headings in this phase. Any "NO" answer should serve as an alarm that your FEA process needs improvement!
1. Does your situation call for a problem-solving FEA? (If so, you should have followed the steps outlined in this phase.)
2. Does your situation call for a planning FEA? (If so, you should have followed process improvement methodology.)
3. Have you used a variety of information-gathering techniques and sources to identify your performance gaps or problems?
4. Have you defined the problem as specifically as possible, focusing on observable, measurable outcomes with assignable dollar values when possible?
5. Have you established whether the problem is random or systematic?
6. Have you gathered information to identify potential and actual causes of performance gaps?
7. Have you generated a comprehensive list of potential solutions to the problem?
8. Have you systematically evaluated potential solutions, using a cost-benefit approach, to select the most appropriate ones?
9. Have you summarized your FEA information in a report?
Task analysis will identify everything someone would need to expertly perform a particular job, skill, or function. For example, consider the job of changing a tire. Task analysis would identify all the steps involved in changing a tire. This would include who performs what and when, using what tools, and under what conditions. Task analysis data becomes vital when deciding what other performers should know if they, too, are to become “experts.” Essentially, this is how instructional content is formed. In the case of Six Sigma methodology, this is a very critical stage in the process because, depending on what level the instruction is for, the requirements will change quite drastically. This will be discussed in greater detail in Part II of this volume. Only when your FEA has indicated a need for instruction, and only after management has approved of the instruction, is task analysis started. Task analysis is a highly structured process and can often be time-consuming and expensive. To move from FEA to task analysis make sure the appropriate preparation has taken place. A good rule of thumb is to review the following parts of the FEA. • Confirm that instruction is an appropriate, cost-effective solution to the identified problem. In the majority of cases, instruction alone will not solve an organizational problem. For example, instruction may not address motivational or organizational issues. This may require not only design and development of education and training but also design and development of
organizational development programs. In our case, we will address issues and concerns that deal with the training portion of Six Sigma diffusion in the organization.
• Review data gathered from all sources about the nature of the problem. This includes contributing factors, causes, and solutions. You will use this information when developing objectives and content, choosing delivery methods, and measuring whether or not learning has occurred. (Keep in mind that the requirements for executives, champions, MBBs, BBs, and GBs are not the same; they must be treated differently.)
STEPS IN TASK ANALYSIS

Traditionally, the steps for conducting any task analysis have been to: 1) analyze your audience, 2) collect task data, 3) develop instructional objectives, 4) classify objectives by storage medium, and 5) develop assessment instruments. The same steps are applicable in the Six Sigma methodology.

1) Analyze your audience: analyzing your audience gives you the information you need to tailor an instructional program to a particular audience. A major goal of creating any instructional program is to make sure the audience can understand, accept, and feel comfortable with the learning experience. For example, reading level, previous experience, and skill and knowledge level will all affect an audience's reactions and degree of learning. You can control for this by becoming as familiar as possible with your audience and planning accordingly. If you know your audience, you can provide content, materials, examples, and instructional experiences with which your audience can closely identify.

You can also use audience information to set design standards and baseline program requirements. For example, if the majority of your audience reads at a ninth grade level, you can design your program and set entrance requirements to that level. Thereafter, those who are not up to a ninth grade level would need a prerequisite class, and those beyond it might need to take a higher-level course. On the other hand, if you expect all your participants to be graduate engineers or statisticians, the requirements would clearly change drastically, not only in the prerequisites but also in the instructional characterization of the material.

Gather audience information from personnel records, surveys, etc. (This is very important in figuring out the content of the overview and Green Belt training.) Focus on group, rather than individual, characteristics, maintaining the privacy of individuals. Pay particular attention to these characteristics:
• Demographics: age, gender, culture (such as ethnicity and socioeconomic background), homogeneity of group
• Capacity: intellect, physical development
• Competence: prior skills and training, experiential background, reading ability, languages spoken, current skill and knowledge level (relative to the instruction program), level within the organization
• Attitudes: values (toward training, subject), self-concept (academic, personal, professional)
• Motivation: goals, interests, perseverance

Gather all task analysis information on two audiences: primary and secondary. The primary audience consists of those going through the instruction or using the job aid. The secondary audience includes anyone whose support is necessary for successful performance by the primary audience. (This is also significant because the results of this analysis should dictate, among other requirements, who is going to be trained as a Black Belt or a Green Belt.) Support from the secondary audience (Green Belts, in this case) is vital to achieving transfer of learning and organizational results (e.g., productivity). The best-designed instruction or job aids alone will not guarantee transfer or changes in the bottom line; however, these can be enhanced through secondary audience support.

Generally, the secondary audience requires some instruction about what the primary audience has learned. They need to understand the value and benefits of the instruction, both to the primary and to the secondary audience. This is why it is strongly recommended that the cascading training to the Green Belts be done by the Black Belts. The support or secondary audience usually includes the employees' supervisors as well as anyone whose work is related to or influenced by the primary audience's performance. For example, if supervisors are being trained to manage in a more participative manner, their employees must be equipped to take on more responsible roles. In the case of the Six Sigma methodology, it would be ludicrous to assign a DOE responsibility to an operator who has no idea what DOE is or what to do.

2) Collect task data: two important benefits arise from collecting task data:
• All the tasks required to expertly perform a particular skill, job, or function are identified
• A sequence of instruction is determined

A task is a series of sequenced actions leading to a desired outcome. Outcomes include broad instructional goals like giving a presentation, building a car, or changing a tire. Within ISD, outcomes are referred to as terminal objectives. Terminal objectives come directly from the front-end analysis; essentially, they are the desired behaviors needed to solve the problem. The question you need to address is: what does an individual need to do in order to reach the desired outcome? (For example, what does an individual need to do to successfully change a tire? Or what does the Black Belt need to know to approach a project, solve it, and present the results to management?)
To answer these types of questions, you need to locate all the tasks leading to your desired outcome, that is, to one broad terminal objective (e.g., changing a tire, or the necessary knowledge of a Black Belt). Tasks include major tasks, which are refined and broken into subtasks, sub-subtasks, etc. In the example of changing a tire, the following hierarchy may be developed, with major tasks, subtasks, and sub-subtasks in sequence:
• Terminal objective: change tire
• Major task: secure car
• Subtasks: set transmission; set parking brake; block wheel
• Sub-subtasks: is the car automatic? If yes, put it in park, then move to the subtask of setting the transmission; if the transmission is manual, put it in gear (first gear) and proceed to setting the transmission in the subtask.

Once you locate and sequence all these tasks, this information can be used to create your instruction.

Where do you go for task data? Sources for collecting task data vary. Use workplace sources and processes such as these:
• Interviews with accomplished performers, subject matter experts, etc.
• Administrative checklists and flowcharts
• Locally constructed job aids
• Manufacturer suggestions and documents
• Observation of tasks being performed
• Process sheets
• Quality deployment sheets
• Research literature (periodicals, etc.)
• Surveys
• Tests
• Facilitating or focus groups using brainstorming, the Delphi method, the nominal group method, etc.
• Critical incident reports

When collecting task data, consider using Table 2.5 as a guide. If you gather all this information on each task step, you will end up with a thorough knowledge base about each task. You will use this information to specify conditions and standards for your instructional objectives (discussed shortly). In addition, this information will become invaluable during later ISD stages (i.e., design and development). Remember, you need to use "variety" when collecting task data. This means gathering information from as many different sources and processes as possible. This will help minimize the risks associated with misinterpretations or individual biases.
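As a minimal sketch of how such a hierarchy can be captured for later sequencing (the structure and field names are our own illustrative choices, not from the text), a nested record works well:

    # A minimal sketch of the tire-changing task hierarchy as nested data.
    # The representation is an illustrative choice, not a standard.

    task_hierarchy = {
        "terminal_objective": "Change tire",
        "major_tasks": [
            {
                "name": "Secure car",
                "subtasks": [
                    {"name": "Set transmission",
                     "sub_subtasks": ["If automatic, put gearshift in park",
                                      "If manual, put gearshift in first gear"]},
                    {"name": "Set parking brake", "sub_subtasks": []},
                    {"name": "Block wheel diagonally opposite", "sub_subtasks": []},
                ],
            },
            # ... remaining major tasks (locate equipment, change tire) omitted
        ],
    }

    def walk(tasks, depth=0):
        """Print the hierarchy in instructional sequence."""
        for t in tasks:
            print("  " * depth + t["name"])
            for s in t.get("sub_subtasks", []):
                print("  " * (depth + 1) + s)
            walk(t.get("subtasks", []), depth + 1)

    walk(task_hierarchy["major_tasks"])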
TABLE 2.5
Information about essential tasks

• PREREQUISITE SKILLS AND KNOWLEDGE: What previously learned skills and knowledge must be present in order for the learner to understand the instruction?
• TASK IMPORTANCE: How critical is the task to operations? What happens if the task is not performed?
• INITIATION: When is the task performed? What is the trigger event? Look for cues, signals, and indications for action or reaction.
• CONCLUSION: What is the concluding step or event in the task performance?
• SUCCESSFUL COMPLETION: How is successful completion defined? Look for cues, signals, and indications that the action taken is correct and adequate.
• CONSEQUENCES OF UNSUCCESSFUL COMPLETION: What will happen if improper performance occurs? Are the potential effects expensive or harmful to operations?
• FOLLOW-UP TASKS: Are there related tasks that need to be performed after this particular task step?
• OTHERS INVOLVED: Are other task performers involved? Is a team effort required? Who is the leader?
• TOOLS, EQUIPMENT, SUPPLIES, ETC.: What tools or commodities are used or manipulated for successful performance?
• SAFETY CONSIDERATIONS: Does the task pose any risks to life, limb, equipment, or supplies?
• REFERENCE MATERIAL: Is reference material needed during task performance?
Develop an instructional objective for each terminal objective, major task step, subtask step (if needed), and so on. Each instructional objective has three components (Mager, 1984; 1984a):
• The desired, observable task to be performed
• The standards by which the task accomplishment will be measured or evaluated for successful achievement
• The conditions or circumstances under which the task is performed

For example, an instructional objective for loosening wheel nuts when changing a tire is shown below:
• TASK: the user shall loosen the wheel nuts and raise the flat tire above the ground,
• STANDARDS: in ten minutes, without assistance, using appropriate safety procedures, without personal injury or damage to the vehicle,
• CONDITION: given a vehicle with a flat tire, jack and handle, wheel lug nut wrench, gloves, block, and operator's manual, under any road conditions.

Do you see how this information would help in making your instruction specific? You set the stage for the best way of learning or teaching a specific task. How you write your instructional objectives depends on the type of behavior you want the learner to demonstrate. For example, three
common types of desired learner behavior include cognitive (knowledge), affective (attitude), and psychomotor (performance). In the Six Sigma methodology, we have all of them; however, the predominant one is the cognitive. When writing an instructional objective asking for cognitive (knowledge) skills, use action verbs such as recall, identify, classify, analyze, and explain. For affective (attitude) behavior, use verbs such as choose, select, and approve. For psychomotor (performance) behavior, use verbs such as loosen, locate, secure, change, and move. In the example above, we wanted the learner to perform a behavior, versus just recalling how it was done; thus, the learner was asked to loosen the wheel nut, a behavior task.

4) Classify objectives by storage medium: often, you will find your instructional objectives do not require that a learner attend an instructional program. Sometimes, all the learner needs is a job aid. Therefore, it is best to separate your instructional objectives according to what is called "storage medium." Information can be stored either in the learner's memory or in on-the-job reference materials, such as job aids. In the case of Six Sigma, a typical "memory" item is the history of the Six Sigma methodology, whereas the formula for the normal distribution is a candidate for a job aid storage medium.

In general, only some types of information should be stored in memory. Not everything a person learns is remembered or stored properly. This is in contrast to a computer, which can store and retrieve enormous amounts of data. Job aids work in the same manner: large quantities of information can be more effectively stored in job aids. In addition, job aids often cost far less than instruction. In the tire-changing example, you would probably want the learner to remember safe practices, whereas the location of required tools and the specific steps might be contained in a job aid such as the operator's manual. A job aid is especially appropriate since these details might vary across vehicles.

5) Develop assessment instruments: assessment instruments (tests) measure how much learners know before and after education and training. They are developed during task analysis to assure a match between the assessment and the instructional objectives. Too often, assessments are developed late in the ISD process; when this happens, objectives are frequently "lost," and the assessment ends up measuring something other than what was initially intended. The same assessment instrument is normally used for the before (pre) and after (post) assessment. There are several reasons for using assessment instruments:
• To identify any lack of prerequisite skills or knowledge.
• To identify accomplished performers. Qualified individuals should not consume organizational resources by attending instruction on tasks they can already perform with proficiency.
• To measure gains in knowledge and skill, by comparing pre- and post-assessments. This can be done both for individual learners and for groups.
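Returning to the storage-medium example above: the formula for the normal distribution is exactly the kind of detail better retrieved from a job aid than from memory. The density of the normal distribution with mean $\mu$ and standard deviation $\sigma$ is

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

Few practitioners need to reproduce this from memory; a reference card or software retrieves it reliably every time.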
TABLE 2.6
Task analysis formative evaluation checklist

ASK YOURSELF THESE QUESTIONS (answer Y or N). Each of the following questions is addressed under major headings in this phase. Any "NO" answer should serve as an alarm that your task analysis needs improvement!
1. Have you identified important characteristics of both your primary and secondary audiences?
2. Have you collected essential task information from a variety of sources?
3. Have you developed instructional objectives that state desired task accomplishment, conditions under which task performance occurs, and standards by which performance will be evaluated?
4. Have you classified objectives according to whether they will be addressed by instruction or job aids?
5. Have you developed the necessary assessment instruments to evaluate whether or not your objectives have been met?
The learning outcomes desired from education and training should follow a 90/90 rule. This requires that 90% of the learners learn 90% of the instructional material. The goal of the assessment instrument is to measure such accomplishment. If, through assessment, you find that learning falls short of 90/90, you need to assess why. Perhaps the product needs modification, or you may find learner remediation is required until the learner reaches the 90/90 level.

Another reason for developing assessment instruments during task analysis is to develop expectations for program development. Sponsors and champions need to accept the business objectives, organizational goals, and standards set for instructional outcomes. Assessment instruments need to be valid (contain accurate content; validity over 80% is considered good) and reliable (repeatable; repeatability over 78% is considered good for assessment instruments). Thus, if you want to know whether a given learner can loosen wheel nuts, the test should seek this specific information. A poorly constructed test may measure something completely different from what is needed. This is not fair to the learner, and it will not help you accurately measure what is really going on. You may wish to seek professional assistance in developing assessment instruments.
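As a minimal sketch of how the 90/90 rule described above could be checked against post-assessment scores (the scores below are invented for illustration):

    # Check the 90/90 rule: at least 90% of learners should master at least
    # 90% of the material. Scores are hypothetical post-test percentages.

    scores = [95, 92, 88, 97, 91, 94, 90, 85, 96, 93]

    mastery_threshold = 90    # percent of material mastered
    learner_threshold = 0.90  # fraction of learners who must reach mastery

    at_mastery = sum(s >= mastery_threshold for s in scores) / len(scores)
    meets_90_90 = at_mastery >= learner_threshold

    print(f"{at_mastery:.0%} of learners at mastery; 90/90 met: {meets_90_90}")

With these invented scores, only 80% of learners reach mastery, so the check fails and either the product or learner remediation would need attention.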
In the case of Six Sigma training, we know of no instance where tests are given during or after completion of training. However, this may be because the expected outcome is to deliver "a solved problem" to management; when this happens, it is assumed that the training was successful. It must also be mentioned that there is a movement to certify Black Belts and Master Black Belts through some kind of testing. It is hypocritical to push for such certification since, as has already been mentioned, there is no agreed-upon body of knowledge. The certification, in all cases as of this writing, is useless because there is no agreement as to what a Black Belt or a Master Black Belt should know. Furthermore, not all available training is consistent with a "set" of specific knowledge. For more information, see Chapter 14.

After you have completed your task analysis, evaluate the quality of your efforts by using a formative evaluation checklist such as the one shown in Table 2.6.
REFERENCES

Mager, R. F. (1984). Goal analysis. 2nd ed. Belmont, CA: David S. Lake Publishers.
Mager, R. F. (1984a). Preparing instructional objectives. Rev. 2nd ed. Belmont, CA: David S. Lake Publishers.
3 Design of Instruction
"Winging it" in a training course may be challenging, but do not count on it reaching high levels of success. What you need is a blueprint: a concise action plan. The objective of such an action plan is to assure across-the-board, quality results. Design of instruction is your action plan for building quality products.

Before attempting to design instruction, you need to be aware of how learners retain information. Recall from task analysis that information can be stored in memory, in a job aid, or in a combination of the two. Some information must be stored in memory as opposed to a job aid. Storing information in memory is best facilitated using instruction, where learners attend a class, work through a computer-based instruction program, etc. The development of instruction evolves through a design phase. Design of instruction focuses on finding the best ways to move information into a learner's memory for later recall and use.

How do you go about designing instruction? The first step is developing an instructional plan, comparable to a construction blueprint. It requires data from the front-end and task analyses.
PREPARATION

• Decide whether instruction is appropriate. Review Chapter Two for things to consider when deciding whether to use instruction, job aids, or a combination.
• Assemble front-end and task analysis information. This includes sequenced instructional objectives, task steps, and audience characteristic information.
• Do a product survey. Determine whether a suitable instructional package is already available. If so, you may not need an instructional plan, or you may need one that centers on product revisions.
STEPS IN DESIGN OF INSTRUCTION

The activities described here will take you through designing instruction. Consider at all times what will facilitate learning, retention, and future use of what was learned. There are five basic steps to follow: 1) develop the content outline and course strategy, 2) choose instructional methods, 3) choose instructional media, 4) choose instructional elements, and 5) plan for the remaining ISD phases.
TABLE 3.1
Example of a content outline: changing a tire (terminal objective)

I. Secure car.
   A. Set transmission.
      1. If automatic, put gearshift in park.
      2. If manual, put gearshift in first.
   B. Set parking brake.
   C. Block wheel diagonally opposite.
II. Locate equipment.
   A. Get spare tire.
   B. Get jack.
   C. Get wheel nut wrench.
III. Change tire.
   A. Remove wheel covers.
      1. If poly cast wheel ornaments, follow a different procedure.
   B. Loosen wheel lug nuts.
      1. If anti-theft wheel lug nuts, follow a different procedure.
   C. Find jack notch.
   D. Put jack in jack notch.
   E. Turn handle of jack clockwise until wheel is off ground.
   F. Raise tire completely off ground.
   G. Remove wheel lug nuts.
   H. And so on.
1) Develop content outline and course strategy: what content are you planning to put into your instructional program? Developing a content outline is the first step in formalizing the material you plan to present to your learner. Content outline data come directly from the task analysis. In fact, your task hierarchy can be easily transformed into a content outline. Table 3.1 above is an example of what a content outline might look like for the desired outcome or terminal objective “Changing a tire.” (Objectives and content outlines are presented in Part III for each of the Six Sigma level requirements.) In the development of materials phase, you will see how this outline is expanded into a rough draft.
From the completed content outline, formulate a course strategy. This includes deciding upon a course title, lessons, modules, etc. For example, using the above outline, the course title could be “How to Change a Tire.” Lessons within the course could include 1) securing the car, 2) locating equipment, and 3) changing the tire. Often, major task steps become the title and subject for each lesson. Now your instructional program is beginning to take shape. See Table 3.2 for a way to condense and summarize this information in an instructional plan.
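To make the transformation concrete, here is a minimal sketch in Python of how a task hierarchy might be held as a data structure, rendered as a content outline, and used to derive lesson titles from the major task steps. The nested-tuple structure and names are illustrative assumptions, not a prescribed format.

# A minimal sketch, assuming a task hierarchy held as nested
# (step, substeps) tuples; structure and names are illustrative only.
TASK_HIERARCHY = [
    ("Secure car", [
        ("Set transmission", ["If automatic, put gearshift in park",
                              "If manual, put gearshift in first"]),
        ("Set parking brake", []),
        ("Block wheel diagonally opposite", []),
    ]),
    ("Locate equipment", [
        ("Get spare tire", []),
        ("Get jack", []),
        ("Get wheel nut wrench", []),
    ]),
]

def print_outline(hierarchy):
    """Render the task hierarchy as a numbered content outline."""
    romans = ["I", "II", "III", "IV", "V", "VI", "VII", "VIII"]
    for i, (step, substeps) in enumerate(hierarchy):
        print(f"{romans[i]}. {step}.")
        for j, (sub, details) in enumerate(substeps):
            print(f"   {chr(ord('A') + j)}. {sub}.")
            for k, detail in enumerate(details, start=1):
                print(f"      {k}. {detail}.")

def lesson_titles(hierarchy):
    """Major task steps become the title and subject for each lesson."""
    return [f"Lesson {i}: {step}" for i, (step, _) in enumerate(hierarchy, start=1)]

print_outline(TASK_HIERARCHY)
print(lesson_titles(TASK_HIERARCHY))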
TABLE 3.2
Example of an instructional plan
Instructional Plan
  Course Title:
  Target Audience Description:
  Anticipated Number of Participants:
  Lesson Number:
  Lesson Title:
  Instructional Objectives:
  Methods:
  Media:
  Instructional Elements:
  Other:
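One way to keep an instructional plan consistent across lessons is to hold the Table 3.2 fields in a structured record and check it for gaps. The following Python sketch assumes hypothetical field names; it is an illustration, not a required tool.

from dataclasses import dataclass, field

# A minimal sketch of Table 3.2 as a record; the field names are
# illustrative assumptions, not prescribed by the text.
@dataclass
class Lesson:
    number: int
    title: str
    objectives: list = field(default_factory=list)
    methods: list = field(default_factory=list)   # e.g., "lecture", "practice session"
    media: list = field(default_factory=list)     # e.g., "print", "visual aids"
    elements: list = field(default_factory=list)  # e.g., "examples", "reviews"

@dataclass
class InstructionalPlan:
    course_title: str
    target_audience: str
    participants: int
    lessons: list = field(default_factory=list)

    def gaps(self):
        """Flag lessons that still lack an objective, method, or medium."""
        problems = []
        for lesson in self.lessons:
            for attr in ("objectives", "methods", "media"):
                if not getattr(lesson, attr):
                    problems.append(f"Lesson {lesson.number} ({lesson.title}): no {attr}")
        return problems

plan = InstructionalPlan(
    course_title="How to Change a Tire",
    target_audience="New drivers",
    participants=12,
    lessons=[Lesson(1, "Securing the Car", objectives=["Secure car"],
                    methods=["lecture", "practice session"], media=["print"])],
)
print(plan.gaps())  # [] when every lesson is fully specified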
2) Choose instructional methods: how are you going to present your instructional content (from the content outline) to the learner? And what is the best way for your audience to learn what you want them to learn? Once each content outline is drafted, you can begin choosing “instructional methods.” Instructional methods are ways of communicating your message to the learner. When making choices about what instructional methods to use, consider:
Learning principles: these principles are important, proven ways to increase learner attention, retention, understanding, motivation, and transfer of what was learned to the job. Choose a method that promotes interactivity, is appealing to the senses, and promotes acceptance. Refer to Table 3.4 for additional information.
Objectives: each instructional objective will direct your instructional methods decisions. The task, standards, and conditions (instructional objective) determine to some degree your choice of instructional method. For example, the objective “Changing a Tire” may be more fully reached using lecture and practice sessions, in comparison to using role-play or case-study methods.
Audience: audience characteristics, described in task analysis, will influence your choice of instructional method. For example, one audience may initially need one-on-one instruction to build levels of self-confidence, whereas another may do best on their own, using self-paced instruction.
Resources: available resources, as well as other constraints, also impact choice of instructional methods. Determine first the most appropriate instructional method after considering objectives and audience. Can you afford this method? If not, develop a list of other instructional method choices. Prioritize this list and evaluate the cost of each. You will need to compromise between the most appropriate method and affordability.
Hannum and Hansen (1989, pp. 145–148) discuss the advantages and disadvantages of instructional methods, which include the following:
• Lecture combined with practice sessions
• Group discussion, study groups
• One-on-one instruction
• Self-paced instruction such as programmed learning or computer-assisted instruction
• Simulation or mock-up situations
• Role play
• Case study
• On-the-job training, fieldwork, internship, research
• Field trips
• Structured experience
You may vary your instructional methods within any one program. For example, you would probably want to use group discussion along with lectures, or one-on-one instruction along with on-the-job training. In addition, remember to choose instructional methods for both your primary and secondary audiences. Always try to condense and summarize your course of action for the lesson in an instructional plan. Based on learning principles, audience, and resource constraints, your task is to match each instructional objective (from task analysis) with the best instructional method choice. Your goal is always to enhance learning and transfer.
3) Choose instructional media: instructional media are “supplemental” ways to present your instruction or job aid to the learner. For example, a lecture may make use of print materials, visual aids, physical objects, etc. Your objective as the designer is to find method and media combinations that will enhance learner attention, retention, understanding, motivation, and use of learning. Table 3.3 identifies and describes several media you can use with previously chosen instructional methods, to help you make a decision about instructional media. Decisions about type of media are based on the same factors as when choosing instructional methods. Keep in mind these additional three points:
• There is no perfect medium for all audiences and instructional objectives. For example, reading ability or cultural differences will influence audience response to various media.
• The effectiveness of learning from various media is somewhat unrelated to learner preferences. An example is the lack of learning that occurs with a very popular medium — television!
• Each instructional medium has its own strengths and weaknesses. For example, when comparing computer-based instruction to a lecture method, the cost of developing computer-based instruction is higher.
TABLE 3.3
Types of instructional media
PRINT: Textbooks, workbooks, manuals, programmed texts.
VISUAL AIDS: Charts, diagrams, graphs, illustrations, drawings, photographs, exhibits, projected images, overheads, slides.
AUDIO: Radio, cassettes, reel-to-reel, disc, records.
AUDIO-VISUAL: Filmstrips, television, motion pictures, video.
COMPUTERIZED: Computer-based instruction, computer-supported learning and job aids, computers, interactive video.
PHYSICAL OBJECTS: Tools, equipment, simulated environments.
AUDIENCE RESPONSE SYSTEMS: Used to promote interactivity during instruction and presentations. The audience responds to questions using a keypad.
In comparison, however, the cost of delivering live lectures can be extraordinary considering the costs of preparation, facilitator, travel, and student time. See Table 3.2 for a way to condense and summarize media choices into an instructional plan. Your task is to match each instructional objective (from task analysis) with the best instructional medium choice.
4) Choose instructional elements: as previously emphasized, your goal — when designing any instruction — is to increase learner attention, retention, understanding, motivation, and transfer. Choice of appropriate method and medium will help you reach this goal. The following instructional elements, however, will also impact the success of your instruction or job aid. Develop specifications for what, when, where, and how each of the following should be added to your instructional content. These specifications will be followed during the actual development of materials phase.
• Examples
• Drill and practice sessions
• Activities
• Illustrations
• Charts and diagrams
• Exhibits
• Simulations
• Reviews
• Summaries
• Remediation
• Projects
• Exercises
See Table 3.2 for ways to condense and summarize this information in an instructional plan.
TABLE 3.4
Learning principles
SEQUENCE MATERIAL: Gain learner attention. Inform learner of the objective. Present desired outcome. Demonstrate desired outcome. Ask for performance. Give feedback on performance. Insert questions within materials.
MAKE IT INTERACTIVE AND COLLABORATIVE: Use 70% of time devoted to discussion formats, active practice sessions, and immediate feedback. Use role plays, games, self-discovery exercises, individual and team presentations, and physical activities such as skits or pantomime. Integrate different questioning strategies such as question cards, one-on-one, “ask the wizard,” “pass the hat,” etc. Link participants to one another using group-based activities such as learning games and learning projects.
KEEP IT SIMPLE: Present only one idea at a time at the appropriate level of difficulty. Use logical sequencing.
APPEAL TO THE SENSES: Plan for a natural, comfortable, relaxed, colorful delivery setting, including music, table-top displays, wall hangings, kites, flowers, etc. Use mental imagery exercises, audio tapes, flipcharts, flannel boards, videotapes, computers, physical objects, sketches, skits, colorful transparencies, and so on. Use color to draw attention and multiple delivery systems to add variety and interest. (Note: use red to point out or emphasize the most important points.) Create hands-on learning experiences.
PROMOTE UNDERSTANDING: Use examples, nonexamples, analogies, metaphors, contrasts, comparisons, and imagery. Use frequent previews and reviews. Elaborate on the content. Restate in greater detail or in different ways (pictorially, verbally, written, etc.). Introduce new concepts at the beginning and go over them in detail later. Determine and accommodate different learning styles, speeds, and needs.
PROMOTE REINFORCEMENT: Develop outlines or job aids to reinforce principles and concepts learned. Provide study guides, audiotapes of class material, board games, etc. for post-class follow-up. Use early and frequent self-assessments. Use post-class follow-up and support, such as a buddy system, meetings, newsletters, in-person discussions.
TABLE 3.4 (continued)
Learning principles
PROMOTE ACCEPTANCE: Connect instruction to learners’ personal or professional goals, interests, present job, or experiences. Combine new material with learners’ current knowledge base. Stress learners’ ability to be successful. Eliminate or reduce any known fears. Give learners choices regarding pace, activities, etc., if possible. Use precourse packets consisting of pamphlets, booklets, audiotapes, videotapes, computer programs, books, etc. that describe the program, emphasize learner benefits, include testimonials, create positive visual images of the program, etc. (Specifically, for Six Sigma this will be an opportunity to provide a job aid with the most frequently used statistics and the appropriate formulas.)
PROMOTE PRACTICE: Provide numerous opportunities for learners to practice what they learned, such as exercises and other activities. Provide remediation opportunities. Ask learners to describe, out loud or to each other, what they learned.
Remember, integration of these instructional elements will make your instructional product interactive and interesting and will increase the chances that your learners will learn and use what they have learned. When developing specifications for instructional elements, consider the learning principles outlined in Table 3.4.
5) Plan for remaining ISD phases: plan for the remaining phases, including design of job aids, development of materials, delivery of materials, evaluation (pilot testing), on-the-job application, and evaluation (post-instruction). Headings in each phase show which steps are part of “planning” and which are part of “implementation.”
After you have completed design of instruction, evaluate the quality of your efforts by using the formative evaluation checklist in Table 3.5.
TABLE 3.5
Design of instruction — formative evaluation checklist
ASK YOURSELF THESE QUESTIONS (answer Y or N): Each of the following questions is addressed under major headings in this phase. Any “NO” answer should serve as an alarm that your design of instruction needs improvement!
1. Do you know the difference between the need for a job aid and the need for memorization?
2. Is instruction (vs. a job aid) appropriate for your learners’ needs?
3. Have you assembled information from your front-end and task analyses, such as sequenced instructional objectives, task steps, and audience characteristics, for all of your audiences?
4. Have you used this information to verify your audience needs?
5. Did you perform a product survey to determine whether suitable instruction is already on the market?
6. Have you developed a content outline based on the sequenced objectives and task steps from the task analysis?
7. Have you chosen the appropriate instructional methods given learning principles, instructional objectives, audience, and resources and constraints?
8. Have you chosen appropriate media given your learning principles, instructional objectives, audience, and resources and constraints?
9. Have you planned for instructional elements?
10. Have you planned for development of materials?
11. Have you planned for delivery of materials?
12. Have you planned for pilot testing?
13. Have you planned for on-the-job application of your instruction?
14. Have you planned for post-instructional evaluation?
15. Do you have an instructional plan for all of your audiences?
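The checklist convention used at the end of each phase lends itself to a simple mechanical treatment: record each answer and treat any “NO” as an alarm. A minimal Python sketch, with question texts abbreviated from Table 3.5, might look like this.

# A minimal sketch of the formative evaluation checklist convention:
# any "NO" (False) answer is an alarm. Question texts are abbreviated.
CHECKLIST = {
    "Instruction (vs. a job aid) appropriate for learners' needs?": True,
    "Front-end and task analysis information assembled?": True,
    "Product survey performed?": False,
    "Content outline based on sequenced objectives and task steps?": True,
}

alarms = [question for question, answer in CHECKLIST.items() if not answer]
if alarms:
    print("Design of instruction needs improvement:")
    for question in alarms:
        print(" -", question)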
REFERENCES
Hannum, W. and Hansen, C. (1989). Instructional Systems Development in Large Organizations. Englewood Cliffs, NJ: Educational Technology Publications.
SELECTED BIBLIOGRAPHY
Briggs, L. J. and Wager, W. W. (1981). Handbook of Procedures for the Design of Instruction. Englewood Cliffs, NJ: Educational Technology Publications.
Dick, W. and Carey, L. (1986). The Systematic Design of Instruction. 2nd ed. Glenview, IL: Scott, Foresman, & Co.
Gagne, R. (1985). The Conditions of Learning. 4th ed. New York: Holt, Rinehart, & Winston.
Gagne, R. M. (Ed.) (1987). Instructional Technology: Foundations. Hillsdale, NJ: Lawrence Erlbaum Associates.
Gill, M. J. and Meier, D. (March 1989). Accelerated learning takes off. Training and Development Journal, pp. 63–65.
Knirk, F. G. and Gustafson, K. L. (1986). Instructional Technology. New York: Holt, Rinehart, and Winston.
Mallory, W. J. (1987). Technical Skills. In American Society for Training and Development’s Training & Development Handbook. New York: McGraw-Hill.
4
Development of Material and Evaluation
The success of development of materials depends on the following things:
• Using the design plans created in phases 4 and 5
• Creating a development plan with the various development principles outlined in this phase
• Implementing the development plan
Unfortunately, design plans often do not exist, and when they do, they are frequently overlooked. The purpose of this phase is to dispel the notion that design plans are expendable. Just as a builder needs to follow a blueprint to achieve desired results, so does the ISD expert. Development of materials requires you to follow the design plans previously outlined.
STEPS IN DEVELOPMENT OF MATERIALS
PLANNING
Preparation: before you embark on this phase of instructional design, make sure you have completed the following. First, do a product survey; that is, determine whether a suitable product is already on the market (see phase 3). If there is, you will not need to develop a new product. However, you may need to customize or alter the existing product. In such cases, create all design plans for customizing before beginning to develop materials. Second, gather your design plans from phases 4 and 5. Review the design plans and content outlines created in those phases, and check to make sure each plan matches prior front-end and task analysis results.
1) Plan for development: before beginning to develop your instructional product, you need to make some development decisions. Table 4.1 outlines various development principles. For example, using the plans created in previous phases, you will now decide on specific placement of illustrations, font type and size, page layout, etc. Development plans should include brief samples of how your finished product will look, along with any written specifications. For example, provide a sample of finished text, video, audiotape, examples, scripts, illustrations, etc. The purpose of creating a brief sample is not to develop an entire, finished product but only small parts.
TABLE 4.1
Development principles
PRINT: Place illustrations close to referenced text. Label and caption all illustrations, etc. Keep “cues” (boldface, etc.) to 10% or less of text. Place critical information either first or last in sentences or lists. Use color coding for easy access. Write the procedure name at the top of each page. Indent, bullet, and number steps and substeps. Use three to four sentences per paragraph. Use the same vertical and horizontal spacing throughout. Use lots of blank space.
VISUAL AIDS: Keep visuals short and simple and text large and legible, giving details on a separate handout. Use no more than eight lines per visual and eight words per line. Use short titles, borders, and white space. Use the same fonts throughout, except for titles. Integrate graphics and color.
AUDIO: Use short pauses; change volume, pitch, and pace to make key words or phrases stand out or to maintain attention. Use short phrases and limit unwanted sounds. Make sure music does not compete or distract. Make sure narration is clear and can be heard.
AUDIO/VISUAL: Refer to the audio and visual sections. Divide information into small parts instead of having a full day’s lesson plan in one session. Use neutral fashion and decor. Use bold video graphics for visibility.
COMPUTER RELATED: Program “easy access” into each lesson. Use boxes, color, and highlights to direct attention. Allow learner control of pacing. Allow adequate learner response time. Limit the amount of text on screen. Present one idea per screen, one or two sentences long.
Stakeholders can review the brief samples for making acceptance decisions, and those developing the product will have something more to follow than just a list of written specifications. (NOTE: Design and development decisions often overlap. For example, you may have already made some development decisions during phases 4 and 5. They would have been included in your design plans.)
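Some of the print principles in Table 4.1 are quantitative and can be checked mechanically. The Python sketch below turns two of them into crude automated checks; the 10% cue threshold comes from the table, while the detection heuristics (treating **bold** markup as a “cue” and counting sentences by end punctuation) are assumptions made purely for illustration.

import re

# A minimal sketch of two Table 4.1 print principles as crude lint checks.
def check_print_principles(text: str) -> list:
    findings = []
    # Keep "cues" (boldface, etc.) to 10% or less of text.
    cued = sum(len(m) for m in re.findall(r"\*\*(.+?)\*\*", text))
    if text and cued / len(text) > 0.10:
        findings.append(f"cues are {cued / len(text):.0%} of text (limit 10%)")
    # Use three to four sentences per paragraph.
    for i, para in enumerate(p for p in text.split("\n\n") if p.strip()):
        sentences = len(re.findall(r"[.!?]+", para))
        if not 3 <= sentences <= 4:
            findings.append(f"paragraph {i + 1} has {sentences} sentence(s), aim for 3-4")
    return findings

draft = "Put the jack in the notch. Turn the handle **clockwise**. Raise the wheel."
print(check_print_principles(draft))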
IMPLEMENTATION
2) Obtain format approval: after developing brief samples, but before beginning to develop rough drafts, obtain format approval from stakeholders.
TABLE 4.2
Example of a rough draft of text — changing a tire (terminal objective)
1. Make sure your car will not move or roll. If you have an automatic transaxle, put the gearshift in Park. If you have a manual transaxle, put the gearshift in First. Set the parking brake and block the wheel that is diagonally opposite the tire that you are changing. (Warning: when one front wheel is lifted off the ground, neither the automatic transaxle P (Park) position nor the manual transaxle 1 (First) position will prevent the vehicle from moving and possibly slipping off the jack, even if those positions are properly engaged. To prevent the car from moving while changing a tire, always set the parking brake fully and always block [in both directions] the wheel that is diagonally opposite the wheel being changed.)
2. Get out the spare tire and jack if you haven’t already done so. The jack is located in its own storage compartment on the right side of the trunk.
3. Remove the wheel covers or ornaments with the tapered end of a wheel nut wrench. Insert the handle of the wrench and twist it against the inner wheel cover flange.
4. Loosen the wheel lug nuts by pulling up on the handle of the wrench one half turn counterclockwise. Do not remove the wheel lug nuts until you raise the tire off the ground. For information about removing antitheft lug nuts, see later chapter sections.
5. Find the jack notch next to the door of the tire that you are changing. Put the jack in the notch and turn the handle of the jack clockwise until the wheel is completely off the ground.
Rough-draft development is usually a time-consuming and costly venture. You can save time and money by receiving format approval first and then proceeding with the rough draft. It is similar to an architect developing sketches, from blueprints, for his client. It is a lot easier for the client to understand the sketch, as opposed to visualizing a finished product from blueprints. And after seeing the sketch, the client is in a much better position to make changes before beginning costly development. In ISD, format approval is similar to the architect gaining client approval. Format approval requires developing a brief sample of the finished product first. The brief sample is then given to stakeholders for their review and acceptance.
3) Create rough drafts: after receiving format approval (or revising the format until approval is received), expand your brief samples into complete rough drafts. Rough drafts should mirror the finished product. Once rough drafts are developed, you will be in a position to review and revise prior to creating the finished product. Some features may be too costly to produce in rough form. When this occurs, use prototypes to show what is intended. The particular type of rough draft you create will vary according to your medium. For example, when using print, your rough draft will end up as a written draft of any text materials. Table 6.4 shows assorted forms of rough drafts based on different media. Tables 3.1 and 4.2 show how a content outline (formed in phase 4) turns into a rough draft of text.
4) Review and revise rough drafts: after completing rough drafts, determine technical and editorial accuracy. You need to make sure the rough drafts meet all accuracy requirements before spending any time or money on final product development. For example, you may find flaws in the rough draft that would be too costly to fix after development.
TABLE 4.3
Rough draft evaluation form
Product being evaluated: (rate each item A = acceptable or NA = not acceptable, and note any modifications needed)
1. Can you reach your FEA goals given your content?
2. Does content match objectives and audience?
3. Does content match the content outline?
4. Is content sequenced logically?
5. Is content technically accurate?
6. Is content clear and concise?
7. Are instructional elements (examples, illustrations, etc.) properly placed and technically accurate?
8. Do instructional elements match and clarify content?
9. Is remediation provided and acceptable?
10. Are instructions clear?
11. Is audio/video clear, appropriate, and understandable?
12. Are punctuation, grammar, etc. accurate?
Use a sample of stakeholders, sponsors, subject matter experts, and potential customers to assist in rough draft review. Ask for input using a rough draft evaluation form such as the one in Table 4.3. Begin with one-on-one reviews and progress to a small group for purposes of gaining broader reactions. Set review schedules and feedback deadlines to keep the project on track. The review process may repeat itself numerous times as revisions are made and reevaluated. It is best to specify in advance how many draft revisions will be allowed, to prevent endless reviews and revisions. Remember, each revision is costly. You may find some reviewers will emphasize form over content. It is possible to attain quality in both areas through the use of development principles, and a project manager can act to help achieve such a balance.
5) Produce final materials: in this step, the instruction or job aid becomes a reality. The rough drafts, with all their agreed-upon revisions, become ready for pilot testing. Unless you have specialized expertise in the medium you have chosen, it is advisable to obtain professional assistance during final development (e.g., a video producer, graphic artist, or typesetter). Professionals will be able to supply the most effective techniques and technology to achieve your desired objectives.
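Tallying the Table 4.3 responses across reviewers helps a project manager see which items are driving revisions and keep the agreed revision cap in view. The Python sketch below is illustrative only; the reviewer data, question texts, and revision cap are hypothetical.

from collections import Counter

# A minimal sketch of aggregating rough draft evaluation forms;
# the review data below is hypothetical.
reviews = [
    {"Content matches outline?": ("A", ""),
     "Content technically accurate?": ("NA", "Step 4 torque value is wrong")},
    {"Content matches outline?": ("A", ""),
     "Content technically accurate?": ("NA", "Check lug nut direction")},
]

MAX_REVISIONS = 3  # specified in advance to prevent endless review cycles

def summarize(reviews):
    tally, notes = Counter(), []
    for review in reviews:
        for question, (rating, note) in review.items():
            if rating == "NA":
                tally[question] += 1
                if note:
                    notes.append(f"{question} -> {note}")
    return tally, notes

tally, notes = summarize(reviews)
for question, count in tally.most_common():
    print(f"{count} reviewer(s) rejected: {question}")
for note in notes:
    print("  modification:", note)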
TABLE 4.4
Development of materials — formative evaluation checklist
ASK YOURSELF THESE QUESTIONS (answer Y or N): Each of the following questions is addressed under major headings in this phase. Any “NO” answer should serve as an alarm that your development of materials phase needs improvement!
1. Have you done a product survey to determine if a suitable product already exists?
2. If so, does the product need customizing? (If yes, go to design of instruction/design of job aids.)
3. Have you gathered all your design plans for each audience from phases 4 and 5?
4. Have you obtained format approval for your brief samples?
5. Have you considered development principles when creating samples?
6. Have you developed your rough drafts, incorporating development principles?
7. Has a review team evaluated the rough drafts for technical and editorial accuracy?
8. Have all necessary revisions been made to rough drafts?
9. Are rough drafts ready for final production?
The quality of your final product will reflect the degree of careful planning (or lack thereof!) that has gone into all earlier stages of analysis and design. The time and resources invested earlier should pay off in this development stage. After you have completed development of materials, evaluate the quality of your efforts by using the formative checklist (see Table 4.4).
EVALUATION: PILOT TESTING
Up to this point, all the steps in the ISD model have been related to planning and development. You began with analysis procedures to understand the problem, possible solutions, the audience, etc. You may have selected an off-the-shelf product if your survey indicated an appropriate one was available. Or, you may have designed and developed a new product to fit your needs. Now, you need to begin evaluating the product or products you created or purchased. Using pilot testing, you will evaluate how well earlier ISD steps were completed. (NOTE: pilot testing is similar to the verification process required in team-oriented problem solving. It is also similar to the validation process in process improvement.)
Be aware that pilot testing is only one phase of the ISD evaluation process. The first informal evaluation began by using the formative evaluation checklists provided at the end of each phase. These checklists monitor the quality of your in-process efforts. In contrast, pilot testing looks for early indications of problems after a sample audience has received the instruction or job aids. Pilot testing provides an “early warning system.” It, too, is considered a formative or “in-process” evaluation. Its purpose, however, is to find problems after delivery to a sample audience. Revisions are based on improving the product before delivery to the total target population.
Remember, throughout the pilot testing phase, the main focus is on the learner. The goal with any instructional product is that the learner acquire and practice new behaviors. If this does not occur, the program or the organizational environment (and not the learner) has failed in some way. Unless you pilot your products, however, you will not know if and how they work. For example, you may end up delivering the product for months, at great cost, only to find goals were never reached. Eliminate such risks from the beginning. Take a “proactive approach” and support all efforts to make a quality product using pilot testing.
Because pilot testing is part of a larger evaluation process, you need to understand something about the total evaluation process. The following section provides a brief overview of evaluation levels before proceeding to an in-depth explanation of the steps in pilot testing. Evaluation is the only way to find out whether earlier analysis, design, and development phases have succeeded in meeting your objectives. The following evaluation model is based largely on the work of Donald L. Kirkpatrick (1967). It consists of a process of measuring four outcomes of instruction (referred to as Levels 1 through 4):
• Level 1 Reaction (How did learners feel about the instruction/job aid?) This information generally takes the form of post-instruction questionnaires or interviews. Participants report their impressions of instructor effectiveness, curriculum, materials, facilities, and course content.
• Level 2 Learning (What facts, techniques, skills, or attitudes did learners understand and retain?) This is generally assessed with pre- and post-assessments, testing either learning or performance gains.
• Level 3 Behavior (Did the instructional product change learners’ behavior in a way that impacts on-the-job performance?) This requires measurement of how effectively the skills, etc. have been transferred back to the work environment.
• Level 4 Organizational changes (Did the program have an impact beyond the individual learner?) This measures whether instruction/job aids have been effective in achieving the kind of organizational changes often intended (e.g., improving morale and teamwork, reducing turnover).
Since the four levels are listed in order of increasing complexity and cost, it is not surprising that the frequency of use is highest with Level 1. Unfortunately, Level 4 is rarely evaluated. However, it is important to recognize that there is little relationship between how learners “feel” about a program and what they have learned — or, more importantly, what they will do on the job because of it. Therefore, even though it is often difficult and frustrating, it is critically important to consider and address all four evaluation levels in some manner. This should be done early in the planning process to facilitate later assessment. The first two evaluation levels must be addressed before you can measure Levels 3 and 4. Learning must occur before it can be used on the job. Pilot testing focuses on Levels 1 and 2: audience reactions and knowledge gained. Its purpose is to test
the instruction or job aid on a small, representative audience. The rest of this phase explains the steps for both planning and implementing a pilot test.
STEPS IN PILOT TESTING
PLANNING
Preparation: use the following suggestions when you plan for pilot testing:
• Assemble front-end and task analysis information. You need to be familiar with your sequenced objectives, audience characteristics, etc. Knowledge or performance assessments should have been written during task analysis. Use these assessment instruments in your pilot test.
• Assemble product survey information. Even if a suitable off-the-shelf product has been found, the product will still need pilot testing. Find out whether the product has been tested. If so, what were the testing procedures and results?
• Review your design plans from phases 4 and 5. Refamiliarize yourself with how the product or products should be delivered.
There are three fundamental stages in pilot testing: 1) select the pilot test sample group, 2) assess baseline levels of knowledge or skill, and 3) plan post-delivery assessments (Levels 1 and 2).
1) Select pilot test sample group: the objective of this step is to ensure that your pilot test will be conducted with an audience representative of your total population. For example, if instruction is geared to mid-level managers in finance, the pilot audience should include mid-level managers in finance. This increases the likelihood that pilot test results will mirror those expected from the target population. Ideally, the pilot audience should be randomly drawn from your total population to avoid biasing the sample in any way. If this is not possible, choose an audience that represents a mix of your audience’s characteristics (a sampling sketch follows this list). For example, in your sample include an equal mix of:
• Low, average, and high achievement levels
• Male and female
• Experienced and inexperienced
• Young and mature
• Motivated and unmotivated
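As referenced above, the following Python sketch shows both a simple random draw and a stratified fallback that keeps the mix even across one characteristic. The population data and the “experience” field are hypothetical; real audience characteristics would come from the task analysis.

import random
from collections import defaultdict

# Hypothetical population of 200 learners, half low and half high experience.
population = [{"id": i, "experience": "low" if i % 2 else "high"}
              for i in range(200)]

def random_pilot(pop, n, seed=0):
    """Ideal case: a simple random draw from the total population."""
    return random.Random(seed).sample(pop, n)

def stratified_pilot(pop, n_per_group, key, seed=0):
    """Fallback: an equal mix across one audience characteristic."""
    groups = defaultdict(list)
    for person in pop:
        groups[person[key]].append(person)
    rng = random.Random(seed)
    return [p for members in groups.values()
            for p in rng.sample(members, min(n_per_group, len(members)))]

print(len(random_pilot(population, 20)))                        # 20
print(len(stratified_pilot(population, 10, key="experience")))  # 20 (10 per level)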
2) Assess baseline levels of knowledge or skill: in order to assess knowledge gain, it is necessary to know the learners’ baseline knowledge and skill levels. The traditional way of making this assessment has been through the use of “pretests,” administered prior to instruction. Unfortunately, preassessments have remained largely an ideal rather than a reality. Preassessments can be time-consuming to develop and administer. Additionally, learners tend to be threatened by tests in general. There are, however, ways to overcome such obstacles. For instance, additional development time is not required when assessment instruments have already been developed in task analysis. These assessments or “tests” also serve as a way of clarifying instructional objectives. Additionally, “testing anxiety” can be overcome by removing the major reasons for its existence. Both pre- and post-instruction assessments can be coded with a number known only to the learner; for example, the learners can use the last four digits of their Social Security numbers. Using this method, results will not be linked to identifiable learners. Another method is for learners to score their own instruments, so the process becomes more of a learning experience than a test. Using the terms “pre- and post-assessments,” instead of “tests,” can also help eliminate testing anxiety. However, if your audience demonstrates a strong resentment toward assessments, you may need a different approach. One direction worth pursuing is to integrate instruction more closely with the work environment. Supervisors or other team members are often in a position to know what is needed and how well it is applied afterwards. Involving these individuals may provide you with information about the learners’ baseline knowledge or skills.
3) Plan post-delivery assessments (Levels 1 and 2): recall that the purpose of pilot testing is to measure audience reactions and knowledge gained. Gather this information using questionnaires, tests, interviews, or observations. Your choice of evaluation tools should rest on which approaches are most reliable and valid. For example, unless you develop systematic observation procedures, your observations may be biased and invalid. Consider also resources (time and money) and other practical constraints of doing research in a “field” setting. What is most effective in a laboratory is often not feasible in organizations. You also may need to train evaluators. They need knowledge and skill in using observation methods, administering a questionnaire, conducting an interview, etc. If the evaluator does not measure outcomes properly, you will not know true program results. Following is an overview of things to consider when measuring learner reactions and knowledge gains.
Assessing audience reactions to instruction: this usually includes gathering participant self-reports using questionnaires or interviews. The purpose is to find areas that were bothersome to the audience and had the potential to interfere with learning. If audience reactions reflect any difficulties in learning or resistance to the instructional material, your product may need changing. (However, some negative audience reactions may reflect problems in the organization and not problems with the product.) Examine the comments and decide how to change your product.
Assessing audience reactions to job aids: this follows the same procedures as assessing reactions to instruction, except the questions may vary slightly. For example, questionnaires or interviews might include these questions:
• Did the aid help in performing the job?
• Did the aid make your job easier?
• Did the aid help to solve any on-the-job problems?
• Was the job aid confusing?
• What improvements could be made to the job aid?
• Did the aid include enough information?
• Do you have any other reactions or comments?
Assessing learning gains: traditionally, this has been done by comparing preassessment scores to post-assessment scores. Assessments of learning gains should have been prepared during task analysis. Give any preassessments before the program, with post-assessments immediately following. Measure the difference between pre- and post-scores. Were there any general improvements? If not, improve your product by making changes to the design, development, or delivery. (A simple t-test may do the job.) For example, perhaps too much information was covered in too short a time, the readability level was too high, or the method of instruction was inappropriate. The true benefit of pilot testing is being able to know what needs improvement and making those improvements.
Use post-assessment scores also to certify the extent of knowledge gained, using a “90/90” measure. This means requiring that 90% of the audience learn 90% of the material upon completion of the instruction. If this does not occur, your product may need modification in the design, development, or delivery phases. Or, perhaps the learner is in need of individualized remediation.
Assessing performance gains: this requires evaluation of “hands-on” performance. For example, evaluating someone’s skill in driving a truck requires seeing the person drive, as opposed to just giving her a written assessment. If hands-on performance is required, consider using an observation technique along with the 90/90 measure: 90% of the people should perform the task successfully 90% of the time. If this doesn’t happen, consider making the necessary changes.
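As a worked illustration of the checks just described, the Python sketch below pairs hypothetical pre- and post-assessment percentages per learner, runs the “simple t-test” mentioned above (paired, one-sided for improvement), and applies the 90/90 criterion. The scores and the scipy dependency are assumptions for illustration.

from scipy import stats

# Hypothetical paired pre/post percentages, one pair per learner.
pre  = [42, 55, 38, 61, 47, 50, 44, 58, 40, 52]
post = [88, 92, 81, 95, 90, 93, 79, 96, 91, 94]

# "A simple t-test may do the job": paired, one-sided for improvement.
t_stat, p_value = stats.ttest_rel(post, pre, alternative="greater")
print(f"learning gain: t = {t_stat:.2f}, p = {p_value:.4f}")

# 90/90 measure: 90% of the audience learns 90% of the material.
passing = sum(score >= 90 for score in post)
if passing / len(post) >= 0.90:
    print("90/90 criterion met")
else:
    print(f"only {passing}/{len(post)} reached 90% - consider revising the product")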
IMPLEMENTATION
The following step will help you implement pilot testing:
4) Implement planning steps: once you have selected a representative audience and assessed baseline knowledge or skill, deliver the product. The purpose of pilot testing is to measure learner reactions and knowledge gained.
TABLE 4.5
Evaluation: pilot testing — formative evaluation checklist
ASK YOURSELF THESE QUESTIONS (answer Y or N): Each of the following questions is addressed under major headings in this phase. Any “NO” answer should serve as an alarm that your pilot test needs improvement!
1. Are you familiar with the four levels of evaluation?
2. Do you understand the purpose of pilot testing?
3. Have you assembled information from your front-end and task analyses such as sequenced objectives, audience characteristics, etc. (see phases 2 and 3)?
4. Have you gathered the assessment instruments developed in task analysis?
5. Have you refamiliarized yourself with your design plans from phases 4 and 5?
6. Have you planned for a representative sample group for the pilot?
7. Have you planned for baseline assessments of knowledge or skill using the assessment instruments developed in task analysis?
8. Have you planned for how to measure Levels 1 and 2 after the product is delivered to your sample audience?
9. Has the product been delivered according to the delivery specifications outlined in phase 8?
10. Have you measured for Levels 1 and 2, audience reactions and learning?
11. Have you modified the product, where feasible, according to results from your pilot test?
If the pilot audience is similar to your target audience, and the instruction achieves its objectives, you can assume it will be similarly effective with the entire target audience. Remember to deliver the instruction or job aids according to the delivery plans outlined in phase 8. For example, if delivery is individualized, deliver the instruction in that way. In addition, make the audience aware they are participating in a pilot. Let them know you need and want their feedback. (Step 3 above outlines more fully the type of feedback you may want.) Consider also the need to train those delivering the instructional product. If the product is not delivered properly in the pilot test, your results will not be valid. You will not know whether the product or the delivery method is producing negative results.
Using the post-delivery assessment procedures outlined in step 3, measure for Levels 1 and 2. How has the audience reacted to your program or job aid? Has learning occurred? What modifications do you need to make? Make any necessary modifications and proceed to phase 8.
After you have completed pilot testing, evaluate the quality of your efforts by using the formative evaluation checklist shown in Table 4.5.
REFERENCES
Kirkpatrick, D. L. (1967). “Evaluation of Training.” In Craig, R. (Ed.), Training and Development Handbook. American Society of Training and Development.
SELECTED BIBLIOGRAPHY
Baker, E. L. (1974). “Formative Evaluation of Instruction.” In Popham, W. J. (Ed.), Evaluation in Education. Berkeley, CA: McCutchan Publishing Corporation.
Baker, E. L. and Atkin, M. C. (1973). “Formative Evaluation of Instructional Development.” AV Communication Review, 2, 1, pp. 389–418.
Briggs, L. J. and Wager, W. W. (1981). Handbook of Procedures for the Design of Instruction. 2nd ed. Englewood Cliffs, NJ: Educational Technology Publications.
Converse, J. M. and Presser, S. (1987). Survey Questions: Handcrafting the Standardized Questionnaire. Thousand Oaks, CA: Sage Publications.
Deshler, D. (Ed.) (1984). Evaluation for Program Improvement. San Francisco: Jossey-Bass.
Dick, W. (1977). “Formative Evaluation.” In Briggs, L. J. (Ed.), Instructional Design: Principles and Applications. Englewood Cliffs, NJ: Educational Technology Publications.
Dick, W. (1980). “Formative Evaluation in Instructional Development.” Journal of Instructional Development, 3, pp. 3–6.
Dick, W. and Carey, L. (1986). The Systematic Design of Instruction. 2nd ed. Glenview, IL: Scott, Foresman, and Co.
Fink, A. and Kosecoff, J. (1985). How to Conduct Surveys — A Step by Step Guide. Thousand Oaks, CA: Sage Publications.
Gagne, R. M. (Ed.) (1987). Instructional Technology: Foundations. Hillsdale, NJ: Lawrence Erlbaum Associates.
Herman, J. L. (Ed.) (1987). Program Evaluation Kit. 2nd ed. Thousand Oaks, CA: Sage Publications.
Phillips, J. J. (1983). Handbook of Training Evaluation and Measurement Methods. Houston: Gulf Publishing Company.
Reigeluth, C. M. (Ed.) (1983). Instructional-Design Theories and Models: An Overview of Their Current Status. Hillsdale, NJ: Lawrence Erlbaum Associates.
Ribler, R. I. (1983). Training Development Guide. Reston, VA: Reston Publishing Co.
Richey, R. (1986). The Theoretical and Conceptual Bases of Instructional Design. New York: Nichols Publishing.
Sudman, S. and Bradburn, N. M. (1985). Asking Questions: A Practical Guide to Questionnaire Design. San Francisco: Jossey-Bass.
5
Delivery of Material and Evaluation
At some time, your finished instructional products must reach your learner audience. Delivery of materials is the ISD phase in which you decide how best to attain this goal. You’ll find some delivery decisions have already been made during design of instruction and design of job aids. For example, you may have chosen to deliver instruction through a lecture method, using print and visuals as enhancers. There are additional delivery issues, however, that you also need to plan for and implement. This phase discusses these additional issues.
Remember, delivery of materials can help or hinder acceptance of your instructional product. The audience will decide early if the program or job aid is of any interest or value, based on how it is delivered as well as its content.
STEPS IN DELIVERY OF MATERIALS
PLANNING
Preparation: use the following suggestions when you plan for delivery of materials:
• Review audience characteristics identified in the task analysis. Delivery success is related to the extent you have considered audience characteristics. For example, knowing your audience’s reading and math skill levels, prior learning experiences, etc. will help in determining the best way to deliver your instruction or job aid.
• Review your design plans from previous phases. Some delivery decisions have already been made. Check your design plans for delivery requirements you now need to consider.
For optimum results, the following three steps will help you plan for delivery of materials: 1) analyze the delivery environment, 2) plan for management support, and 3) plan for audience acceptance.
1) Analyze the delivery environment: in task analysis, you were asked to analyze your learner audience. The more you found out about the audience, the better able you were to meet their needs. Now, for the same reasons, you are asked to analyze the environment where the instructional products will be delivered.
The more you know about the delivery environment, the better able you will be to tailor your instruction accordingly. Ask yourself the following questions:
• What is the environment in which the instruction or job aid will be delivered? Will your instruction be delivered in a classroom, office, laboratory, manufacturing plant, or the learner’s home? Assume in your FEA you found a need to improve employee skill levels, and the solution included an instructional program. To successfully implement this solution, you must know something about the delivery environment. Where is your instruction going to be delivered? You may decide delivery should be within the employee’s home environment. You need to know this before you can develop a delivery plan.
• What are the expected “patterns of use”? Is the instruction or job aid going to be used sporadically or on a schedule? During the day or evening hours? During what season or climate, day, week, month, or year, and for how long? You also need this information in order to plan for delivery facilities, equipment, etc. For example, instruction designed for a learner’s use at home (the pattern of use) will need to be easily transportable.
Once you understand the physical delivery environment, you must also understand the social and psychological factors that can influence effective delivery of your program or job aid.
2) Plan for management support: the most effective and enjoyable learning experience is wasted if there is inadequate — or negative — reinforcement for using the instruction or job aid back on the job. You have invested time and energy to facilitate application when you designed and developed your products. Unfortunately, this work can be undermined by conditions in the workplace. You can have some influence on these conditions, though, by considering these factors as part of your delivery plan.
How can you plan for management support? There are several ways to accomplish this. Make greater use of learners’ management and coworkers when assessing knowledge and skill levels. In this way, management becomes involved in the process from the beginning. In addition, simply planning for the “secondary audience” (see Task Analysis) addresses the management support issue. For example, hold “information sessions” for the learners’ management. In these sessions, management learns about the program’s objectives. They gain knowledge about why the program was instituted, why the learners need to attend, and why learners need to use it back on the job. These sessions also serve to resolve any qualms or organizational conflicts on the part of management. In this way, management becomes a part of the program. In many cases, just making sure that management is not “surprised” by the content of the program will assure increased support.
A more difficult problem to address, however, is the existence of organizational systems that fail to reward — or that even penalize — learners for using what was learned. Because such systems may be firmly
embedded in the organization’s culture, change may be difficult. In such cases, discussing such barriers during training allows learners to deal openly with expected difficulties and share possible solutions. This is particularly valuable when a wide range of participant backgrounds is represented in the group. Finally, as training is integrated more effectively with business strategies, such conflicts with “cultural realities” should decrease in frequency. ISD contributes to this goal by beginning with a front-end analysis that highlights from the onset what problems training can and cannot address.
3) Plan for audience acceptance: obviously, management support and effective design and development will increase audience acceptance. There are additional factors, however, specific to delivery that will contribute to greater audience acceptance. Consider these factors now when developing your delivery plan.
Increasing audience acceptance of instruction:
• Consider instructor characteristics. Learners are more receptive to an instructor with whom they can identify. This is more a matter of style than of demographics. The most important instructor characteristics are presentation skills, thorough knowledge of product content, enthusiasm, and empathy for the learners. “Train-the-trainer” preparation and instructor certification are ways to ensure effective instructor skills. In your pilot test, question the audience about the instructor’s delivery. Make sure each instructor is properly trained and placed.
• Consider environmental characteristics. An effective learning environment is free from distractions. This helps the learner focus attention on the instructional content. For example, room lighting, temperature, seating, acoustics, accommodations, and even the lunch menu exert an influence on learner attention. Any equipment should facilitate, not distract from, learning. It should be in excellent operating condition. An additional factor is how easy it was for the learner to get there. This involves taking into consideration such things as starting time, clear directions, and availability and ease of transportation.
Increasing audience acceptance of job aids: the major factor in promoting audience ability to effectively use a job aid is effective design. Audience response can also be strongly influenced by delivery considerations. For example, if the job aid came in the mail without explanation, acceptance would be low, regardless of design. The following examples offer some effective ways to deliver job aids:
• Job aids are given to learners by their supervisor or other instructor with an explanation of why the job aid is important (e.g., software manuals are distributed along with summary sheets of important items).
• Job aids are given to new employees along with the equipment they are expected to operate and instructions. For example, a Minitab manual is given to the learners; however, the Minitab software is already loaded on the computer.
• Job aids are included in the package with equipment to be assembled, with easy-to-read, easy-to-follow instructions.
IMPLEMENTATION
In order to prepare for implementation of delivery of materials, consider the following:
• Review learning principles. While many of these techniques are used in the design of materials, some depend on the skills of the presenter.
• Review instructional materials. Being familiar with the material is of critical importance when carrying out your delivery phase.
• Review pilot test results. Use feedback regarding pacing of instruction, facilitator effectiveness, etc. to improve upon your original delivery techniques.
There are two steps that will help you implement delivery of materials: 4) finalize a delivery plan and 5) apply, evaluate, and revise your delivery plan.
4) Finalize a delivery plan: the delivery plan will result from completion of steps 1 through 3. The delivery plan form (see Table 5.1) may help you develop your plan. After analyzing the delivery environment and planning for management support and audience acceptance, you should have enough information to finalize a delivery plan. Detail specifics about when, where, and how to deliver each program or job aid. Include within your finalized delivery plan the following items:
• Facility and sites. Include specifics about the optimal physical place for delivering the instruction or job aid. This includes any special requirements such as a room or building of a particular size, lighting, acoustics, etc.
• Equipment. Include plans for items such as computers, blackboards, desks, tables, chairs, carrels, audiovisual equipment, projectors, screens, chalkboards, easels, calculators, etc.
• Supplies. Include plans for items such as pencils, pens, paper, visual-aid materials, workbooks, film, slides, videos, televisions, tests, exercises, handouts, etc.
• Schedule. Include a detailed list of the proposed days, months, times, etc. for delivery of the instruction or job aid.
• Instructors. Include the proposed number of facilitators needed to deliver the instruction or job aid, required prerequisite skills, education, certification, experience, etc.
• Miscellaneous. Include items such as required security clearances, special transportation, parking requirements, etc.
TABLE 5.1
A typical delivery plan
Audience characteristics: reading skill level, math skill level, prior learning experience, attitude toward learning, attitude toward subject, handicaps, average age, gender mix, cultural mix, other considerations.
Design considerations: method, media, instructional elements.
Delivery environment:
Patterns of use:
Management support plan:
Audience acceptance plan:
Facility and sites: desired location, room size requirements, lighting requirements, acoustic requirements, arrangement requirements, electrical requirements, handicap requirements, additional facility requirements.
Equipment: computers, blackboards, desks, tables, chairs, carrels, audiovisual equipment, projectors, chalkboards, easels, calculators, flip charts, etc.
Supplies: pencils, pens, paper, visual aids, workbooks, film, slides, videos, televisions, tests, exercises, handouts, etc.
Schedule: proposed days, months, times, etc.
Instructors: number, prerequisite skills, education, certification, experience, and so on.
Other: security clearances, special transportation, parking, etc.
5) Apply, evaluate, and revise your delivery plan: as you deliver your instruction or job aids to a wider audience, it is important to remain attentive to both audience reaction and the learning that is (or is not) occurring. Your pilot testing was based on a small, though representative, sample. It is important now to continue to test the initial conclusions of that research. You will do this in several ways:
• Observe differences in audience reaction and participation during delivery of the instruction at each time and location.
• Solicit informal feedback from participants during the program and at breaks.
• Continue to collect formal feedback (questionnaires, interviews, etc.) from participants.
• Continue to assess the amount of learning occurring, either via pre- and post-assessments or by observation.
TABLE 5.2 Delivery of materials — formative evaluation checklist
ASK YOURSELF THESE QUESTIONS (answer each Y or N): Each of the following questions is addressed under major headings in this phase. Any "NO" answer should serve as an alarm that your delivery phase needs improvement!
1. Have you considered audience characteristics as identified in the task analysis?
2. Have you reviewed your design plans from phases 3 and 4 to ensure consistency with previous delivery decisions?
3. Have you analyzed the delivery environment?
4. Have you planned for ways to increase management support?
5. Have you planned for ways to increase audience acceptance?
6. Have you reviewed design principles for effective presentation techniques to be used?
7. Have you reviewed instructional materials and feedback from the pilot test regarding delivery?
8. Have you finalized your delivery plan considering facility and sites, equipment needed, supplies, schedules, instructors, etc.?
9. Have you applied and continued to evaluate your delivery plan, making necessary revisions?
• Solicit informal feedback from participants during the program and at breaks.
• Continue to collect formal feedback (questionnaires, interviews, etc.) from participants.
• Continue to assess the amount of learning occurring, either via pre- and post-assessments or by observation. (A simple tally of pre/post gains is sketched at the end of this section.)
If you gather information indicating alternative approaches are needed, experiment to see which new approaches are valid and with what audiences. Sometimes you will find you need to tailor specific examples, or even entire procedures, to various segments of your audience. A group of your manufacturing employees, for example, would respond to all-technical examples differently than a group of employees from Finance, Sales, Engineering, or Personnel. The objective is always to consider audience characteristics.
Such ongoing assessment and revision are more difficult to do with job aids delivered separately from instruction. It will require special effort to follow up with users to gauge audience reaction and use. This feedback is critical, however, to ensuring that the job aid is not sitting on a shelf unused or discarded because of lack of user-friendliness!
After you have completed delivery of materials, evaluate the quality of your efforts by using the formative evaluation checklist (see Table 5.2).
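Where pre- and post-assessments are used, the tally itself can stay very simple. The following is a minimal sketch, not part of the original ISD materials, of how a facilitator might summarize matched pre/post scores; all score values are hypothetical.

# Minimal sketch: summarizing hypothetical pre/post assessment scores
# (percent correct) for the same learners in one delivery.
pre_scores = [55, 60, 48, 70, 62, 58]
post_scores = [82, 88, 75, 93, 85, 90]

gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
avg_gain = sum(gains) / len(gains)
print(f"Average gain: {avg_gain:.1f} percentage points")

# Flag learners with small gains; they may need remediation or more practice.
for i, gain in enumerate(gains, start=1):
    if gain < 10:
        print(f"Learner {i} gained only {gain} points; consider remediation")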
ON-THE-JOB APPLICATION What should you do after your instructional product is delivered to your audience? In most cases, nothing is done. Learners go through intensive, week-long programs and are usually left on their own to use or discard what they learned. This results in a waste of organizational resources. In most cases, instruction does not come about because “it’s a nice thing to do.” Instruction is purposeful. The goal is for learners to learn and use what they learn, with subsequent impact on the bottom line. But when instruction is not used, everyone loses. The learner wastes a lot of time and the organization a lot of money. So what can you do after your instructional product is delivered to your audience? Your goal should be to help the learner apply the instruction, or job aid, on the job. The process of transferring that knowledge to the job is called “on-the-job application.” On-the-job application refers to Kirkpatrick’s level 3, on-the-job behavior change. The success of on-the-job application rests on how well previous ISD phases were planned and executed. This includes using various learning and development principles (see phases 4, 5, 6, and 8). In addition, you need to know your secondary audience (see phase 2). Other techniques for increasing application are outlined in this phase. (NOTE: Since learners’ supervisors are in a pivotal position to influence application, their role is spelled out separately from that of instructional professionals. For added convenience, “Tips for Supervisors, MBBs, BBs: How to Help Your Employees Use the SIX SIGMA Methodology Education and Training on the Job” is placed at the end of this chapter. This section can be used as a “standalone” job aid for supervisors.)
STEPS IN ON-THE-JOB APPLICATION PLANNING
Preparation
• Review information from FEA and task analysis. Review FEA cost-benefit analysis information about customer and stakeholder attitudes. These attitudes can influence management and coworker support of instructional solutions. Phase 2 (task analysis) provides additional information about secondary audiences. Use this information to plan for support from these audiences.
• Check design, development, and delivery plans (phases 4, 5, 6, 8) to ensure use of learning and development principles. The more closely the delivery environment simulates actual on-the-job conditions, the greater the chance of application. Also very important are provisions for feedback and remediation.
There are two steps that will help you plan for on-the-job application. They are: 1) plan for application during design, development, and delivery phases and 2) plan for secondary audience support.
1) Plan for application during design, development, and delivery phases: many of the recommendations in this step should be familiar. They were included in the learning and development principles covered in earlier phases. They bear repeating, however, because they are so important to ensuring on-the-job application.
Build in job relevance: a major way to ensure job relevance is to link instructional materials closely with resources available on the job. Do this by building instructional programs around job aids or existing tools, equipment, or written materials used on the job. Designing and using instructional materials that will not be used in the workplace will only confuse the learner (e.g., using an in-class WordPerfect text that is different from the on-the-job WordPerfect reference guide will cause problems; statistical software is a major problem in this area). Also, make use of examples that relate very closely to what is familiar to learners on their jobs.
Build in repetition, practice, feedback, and remediation: application is facilitated when material is repeated enough to be firmly embedded in learners' memories or captured in job aid reference materials. Too often, more material is covered than learners could possibly absorb. The result is that little or nothing is retained, much less applied to the job.
Build in opportunities for practice and feedback by providing plenty of active learning rather than lectures or discussions. Feedback can often be built into job aids by written instructions. For example, a job aid can describe how a machine should sound and function if correctly assembled, operated, etc. Finally, be sure to do enough informal observation or measurement of learner skills during instruction. You want learners to have a chance to gain additional practice and assistance (remediation) in the areas in which they are having a difficult time.
2) Plan for secondary audience support: this planning should occur very early in the ISD process. In the task analysis phase, you analyzed important characteristics of both primary and secondary audiences. In front-end analysis, you made early assessments of expected management and coworker support. The preferred goal is to enhance primary audience learning by building in the necessary secondary audience support. You can do this by planning and conducting preinstructional awareness sessions for supervisors and coworkers or by including them in the actual training for the primary audience. You can also provide tools and job aids that make it easier for the secondary audience to give learners assistance and feedback.
Unfortunately, sometimes there are serious obstacles to application that you are unable to influence. In such cases, it probably makes more sense to adjust the training rather than try to change the work environment. For example, if an organization provides individual, rather than team, merit awards, instruction should be directed at fostering cooperation among workers in the context of individual rewards. In extreme cases, it may be preferable to reduce or eliminate an instructional program that is not supported by management or organizational systems rather than to waste valuable resources. Keep in mind that the role of instruction is to support strategic business goals, not to instigate cultural change; the latter requires much more extensive organizational intervention.
TABLE 5.3 On-the-job application — formative evaluation checklist
ASK YOURSELF THESE QUESTIONS (answer each Y or N): Each of the following questions is addressed under major headings in this phase. Any "NO" answer should serve as an alarm that on-the-job application needs improvement!
1. Have you reviewed information from FEA and task analyses about secondary audiences?
2. Have you checked the design, development, and delivery plans to assure built-in opportunities for learners to practice new knowledge and skills under realistic conditions?
3. Have you linked instructional materials closely with resources available on the job?
4. Have you included enough repetition of material to ensure that it will be retained by learners long enough to apply it?
5. Have you built in plenty of opportunities for practice and feedback?
6. Have you included sufficient access to remediation during instruction?
7. Have you planned for secondary audience support, such as preinstructional awareness sessions and job aids to encourage their involvement?
8. Have you made adjustments to your instructional programs to deal with various obstacles to application?
IMPLEMENTATION
This phase is unique because implementation rests with primary and secondary audiences. Your careful attention to planning will help them succeed. After you have completed on-the-job application, evaluate the quality of your efforts by using the formative evaluation checklist (see Table 5.3).
TIPS FOR MASTER BLACK BELTS, BLACK BELTS, AND SUPERVISORS: HOW TO HELP YOUR SUPPORT TEAM AND WORKERS USE THE SIX SIGMA EDUCATION AND TRAINING ON THE JOB
As a master black belt, black belt, or supervisor, you are in a pivotal position to facilitate use of what your support team and employees learn in the Six Sigma education and training methodology. Following are some suggestions about what you can do before and after instructional programs to ensure that you, the participating employee, and the company get a good return on the investment made in training.
BEFORE TRAINING
Suggest relevant instruction to employees. Focus on training that relates to specific skills you feel could be enhanced. Suggestions can be made in the context of career
discussions or during coaching or appraisal sessions. Since most people are interested in increasing their value to the organization, it can be helpful to put your suggestions in these terms.
Discuss employee-initiated training requests. The emphasis should be on ensuring a return on the company's investment. Demonstrate your support of employees' individual developmental needs by encouraging them to find tie-ins to their responsibilities. It is important not to reject requests for which you do not see an immediate application or to be overly stringent about job-relatedness. In doing this, you may squelch employees' enthusiasm for instruction that could increase their versatility and long-term value to the company.
Agree on instructional goals and application. Using Table 6.6 (Learner-Supervisor Preinstructional Agreement) or some other document, discuss and come to agreement on what the learner should be able to do differently after instruction. This should include some discussion of necessary resources and expected obstacles. The main focus, however, should be on some observable improvement in behavior (usually a skill or demonstrated knowledge) that should occur within a reasonable time frame after instruction. Written summaries of the agreement are preferable so as to avoid misunderstandings and lack of follow-up.
Schedule instruction in a timely fashion. The important guideline here is to make sure employees will have access to necessary resources (e.g., computer access, appropriate and applicable software manuals, or other items) immediately after training. Otherwise, too much forgetting will occur, without opportunities for practice and feedback on how well they are doing.
AFTER TRAINING
Assure opportunities for practice. Incorporate desired performance accomplishments into employees' objectives to reinforce their importance. Make employees responsible for training other workgroup members in what they have learned. Assure access to job aids or equipment needed to perform new behaviors.
Provide feedback. Observe the new behaviors and provide helpful information about how well the employee is applying them to the job. Be sure to concentrate heavily on positive reinforcement to help build learners' self-confidence. Approach areas for improvement by building on what learners are doing well to get even better results. (Many times when you provide only positive feedback, the learner will ask you what he or she could be doing better!) Make your feedback as specific and timely as possible, and do not overload the learner by dealing with more than one issue at a time.
Provide rewards and reinforcement. Often the most timely and valuable reward you can provide is recognizing that the employee is applying new skills to
the job. Show appreciation for their interest in doing so. This can be as simple as a few words of encouragement. Give employees varied or more challenging assignments. This rewards them for acquiring new skills and provides more opportunities for demonstrating them.
Remove obstacles. Make it clear that doing things differently from the "old way" will not work against employees applying new behaviors. Prevent jealousy or competition among coworkers by making sure everyone gets equal access to training. (If equal access is not possible, explain the situation.) Also, have coworkers teach others what they have learned.
Provide refreshers. If possible, demonstrate the skills yourself. On-the-job role models of skills learned in training are very important in helping learners apply new behaviors. Conduct follow-up sessions to review what has been learned, especially if the entire work group has participated in the training. Simply discussing with employees what they learned can serve as a refresher or even open the door to a coaching discussion.
Follow up on preinstructional agreements. Are employees using what they have learned on the job? How have they dealt with obstacles? Have you provided needed resources? Revisit these issues periodically with employees until the new skills are being routinely used.
EVALUATION: POST-INSTRUCTION
After months of designing, developing, and delivering the instructional products, do you know if your audience is using what they learned? Have the instructional programs impacted your organization? Has customer satisfaction increased, production increased, accidents declined, quality improved, or revenues multiplied as a direct result of the instruction or job aids? The purpose of post-instructional evaluation is to measure the extent to which learners are using the instruction or job aids on the job. Its purpose is also to measure how the products have impacted the organization.
Recall from the pilot testing phase the four levels of evaluation. Pilot testing measures levels 1 and 2: learner reactions and how much learning has occurred. Pilot testing results are used to make improvements in the products. In contrast, post-instructional evaluation focuses on levels 3 and 4: on-the-job application of learning and organizational change. Post-instructional evaluation is performed months after the products have been delivered to the entire audience. It is a summative, or end-of-program, evaluation. Based on results obtained, you will decide whether to continue, modify, or eliminate the instructional program or job aids. (NOTE: On-the-job and organizational changes do not always occur immediately after instruction or introduction of job aids. It takes time for people to learn, practice, and master new ways of doing things. Therefore, you need to make sure learners have had time to use what they learned before evaluating. In general, measure change at least 3 months after the program.)
STEPS IN POST-INSTRUCTIONAL EVALUATION PLANNING
Preparation: use the following preparation suggestions when you plan for post-instructional evaluation:
• Review front-end and task analyses for decisions about desired changes. Desired individual and organizational changes should have been outlined in the FEA (described as problems and solutions). Instructional objectives (see task analysis) specify desired individual changes in more detail.
• If you are using an existing product, review previous evaluation studies. Is there evidence that the program can change on-the-job behavior? Have organizational gains resulted from use? If not, ask the supplier to perform a post-instructional evaluation to validate the program's worth. If the product already comes with claims of level 3 and 4 outcomes, you will want to confirm the results using your own population.
• Review any informal evaluation results from on-the-job application. Informal evaluation results may serve as input for developing your evaluation instruments. For example, when informally evaluating level 3 and 4 outcomes during application, you may discover questions leading to valuable data. You can use these questions in your post-evaluation surveys.
The planning portion of post-instructional evaluation consists of the first four steps of a seven-step process: 1) decide how to measure desired changes, 2) decide how to collect data, 3) do a cost-benefit analysis of the evaluation, and 4) obtain necessary commitments. (Steps 5 through 7 are covered under Implementation below.)
1) Decide how to measure desired changes: during front-end and task analyses, decisions were made about desired individual and organizational changes. The FEA described organizational problems and solutions. The task analysis spelled out instructional objectives, detailing what individuals need to do differently on their jobs. The decision now is: how will you measure the desired changes? Some changes, like successfully changing a tire or using WordPerfect, may be fairly easy to measure. You will be able to see the learner change the tire or use WordPerfect. Others, like coaching or participative management skills, will be far more challenging. Linking the use of instruction to organizational changes will be even more difficult.
Measuring change can be accomplished using either objective or subjective tools. Objective measurements come from sources such as systematic observations, records, or reports of factual information. In contrast, subjective data comes from sources such as questionnaires, interviews, or focus groups. Because subjective data consists of individual opinions and judgments, it is more open to bias. For this reason, it is preferable to rely on objective, verifiable data whenever possible.
TABLE 5.4 Post-instructional data collection tools
OBSERVER CHECKLIST: This tool requires an impartial observer to observe the learner on the job for use of the instruction or job aid. The checklist requires decisions on who, where, and what behaviors need observation and what observation methods to use.
SELF REPORT: This tool requires learners to self-evaluate their on-the-job use of the instruction or job aids. Self-reports have the potential disadvantage of being biased, yet this method costs less than an observer checklist.
SUPERVISORY REPORTS: This tool requires that learners evaluate their supervisors' practice, feedback, and support methods. This tool helps pinpoint other reasons for lack of application to the job.
RECORD OF QUESTIONS: This tool requires that learners list any questions they have about on-the-job use of the instruction or job aids. Questions will show areas that may need further instruction before transfer can occur.
COMMENT FORMS: This tool requires that learners comment on relevance of the instruction to the job or skills needed, conflict between the instruction and the organization's culture or policies, accuracy of material, system changes, etc. This can help ascertain why transfer is not occurring.
CRITICAL INCIDENT REPORTS: This tool requires that learners and supervisors report on use of the instruction or job aids. "Incidents" include events when instruction or job aids should have been used but were not. Such reports should decrease as transfer increases.
SUBORDINATE REPORTS: This tool requires that learners' subordinates report on behavior changes they see. These reports may contain biases or misperceptions not as apparent as, for example, when using more objective measurement tools.
PEER REPORTS: This tool requires peers to comment on learner behavior changes they see occurring on the job after instruction. Such subjective reports may contain biases or misperceptions.
CUSTOMER REPORTS: This tool requests customers to evaluate learner behaviors. This information is gathered to determine if, through the eyes of the customer, on-the-job behaviors are consistent with instruction or job aid objectives.
CONCRETE RESULTS: Assessing organizational changes requires looking at concrete results related to the instruction or job aids. "Concrete results" refers to evidence of reduced costs, improved quality, increased profits and production, etc.
Table 5.4 shows various types of data-collection tools. These tools can be used to measure on-the-job behavior changes or organizational changes. Table 5.5 shows an example of a self-evaluation measurement tool to measure on-the-job behavior change.
TABLE 5.5 Self-evaluation measurement tool
POST-INSTRUCTION EVALUATION
Columns: Instructional objectives | I demonstrated this objective by doing: | Initial (may be initialed by the learner and the supervisor)
1.
2.
3.
Areas for continuous improvement:
TABLE 5.6 Research design action plan
Post-instructional evaluation research design action plan:
• Desired change to be measured
• Measurement tools
• Sample source
• Sample size
• Sample location
• Data collection time plan
• Data collection personnel
• Statistical analysis procedures
• Statistical analysis time plan
• Statistical analysis personnel
When deciding what tool to use, consider the costs involved. For example, using existing written records requires less time and money than developing surveys or performing observations. Even using existing records, however, requires clerical and analysis resources, permission to gain access to the data, etc. 2) Decide how to collect data: once you have chosen the measurement tools, develop a research design outlining which tool will be used, when, and in what way (i.e., through a survey, interview, direct observation, or with a group or on an individual basis). Decide who will receive your tool, how they will receive it, and how they will get the information back to you. Table 5.6 is an example of a format for summarizing the details of
your research design. Professionals can assist you with important research design considerations such as the use of control groups, comparison of pre- and post-groups, statistical methods, etc.
3) Do a cost-benefit analysis of the evaluation: once you have decided how to measure the desired changes and collect the data, estimate how much your evaluation will cost. The extent of resources needed may determine whether the evaluation is feasible or worthwhile. Assign dollar figures to the time, staff, expertise, materials, and overhead required to implement each part of the evaluation. You may need help from your accounting department in this area. Consider the estimated costs in relation to expected benefits. (A rough sketch of this arithmetic appears after step 4 below.) Based on the results of this analysis, is the evaluation justified? Important factors to consider include the scope and urgency of the program. Instructional programs that address relatively minor business problems may not be worth the investment of such an evaluation. On the other hand, ongoing programs that require huge resource commitments will merit the effort required for a formal evaluation study.
4) Obtain necessary commitments: after performing your cost-benefit analysis and deciding to proceed with evaluation, obtain the necessary commitments. Make a formal request to management to begin the evaluation. Outline all budget and time requirements and why the evaluation is necessary. Request permission for funds, access to necessary information, and a commitment to act on study results. (Do not waste organizational resources on training or evaluation without this important commitment.)
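As a rough illustration of the arithmetic involved, the following is a minimal sketch of totaling an evaluation's cost and comparing it against an expected benefit. Every dollar figure and probability here is a hypothetical assumption, not a recommendation.

# Minimal cost-benefit sketch for deciding whether an evaluation is justified.
# All figures are hypothetical assumptions.
evaluation_costs = {
    "staff time": 6000.0,
    "survey development": 1500.0,
    "data collection": 2500.0,
    "statistical analysis": 2000.0,
    "materials and overhead": 1000.0,
}
total_cost = sum(evaluation_costs.values())

# Expected benefit: e.g., avoiding continued delivery of an ineffective program.
annual_program_cost = 150000.0          # hypothetical
prob_evaluation_changes_decision = 0.2  # hypothetical judgment
expected_benefit = annual_program_cost * prob_evaluation_changes_decision

print(f"Total evaluation cost: ${total_cost:,.0f}")
print(f"Expected benefit:      ${expected_benefit:,.0f}")
print("Evaluation justified" if expected_benefit > total_cost else "Reconsider scope")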
IMPLEMENTATION
The following steps are part of the post-instructional evaluation implementation process: 5) develop measurement tools, 6) collect and analyze data, and 7) report results and make necessary improvements.
5) Develop measurement tools: after obtaining management commitment, you are ready to put your plan into action. Develop the measurement tools outlined in your evaluation plan. This includes questionnaires, structured interview questions, or recording forms for the collection of existing records. You can begin developing measurement instruments even before instruction has been delivered. In fact, doing so may help clarify the aims of instruction and the most effective methods to achieve them. You may require professional assistance during this step to assure that measurement tools lend themselves to statistical analysis. While bar graphs and percentages are frequently used to report evaluation results, they do not really provide the level of precision necessary to make important decisions about whether to continue or modify the program.
6) Collect and analyze data: in this step, you will collect the evaluation data according to your research design plan (see Table 5.6). With professional
assistance in interpreting the data you gather, you will have the information necessary to know whether your program has achieved its objectives. (A minimal sketch of one simple pre/post comparison follows at the end of this section.) If desired changes did not occur, you will probably want to know why. Below are some steps to follow in making this determination:
Reassess on-the-job application procedures: pitfalls are common with on-the-job application. Often, the organization does not support post-instructional change. When this occurs, the learner cannot be expected to use what was learned, regardless of how well the product was designed, developed, or delivered. Phase 9 discusses on-the-job application problems in detail. If you find learners are supported on the job, yet change is not occurring, proceed to the next step.
Reassess extent of learning (level 2): re-administer post-tests assessing retention of instructional material. If the audience does not remember the material, there is no possibility of applying it to the job. If assessments show loss of learning, provide remediation in the areas indicated and reassess. If learning is still not demonstrated, reassess the product's design, development, and delivery. If assessment of level 2 indicates the audience has retained the material and learners are supported back on the job, proceed to the next step.
Reassess front-end and task analyses: your instruction or job aids may be geared toward the wrong problem or audience! You may need to perform a new FEA or TA. What exactly is the problem? What is the appropriate solution? What are the specific tasks needing instruction? Who is your target audience?
7) Report results and make necessary improvements: regardless of what results you have obtained, your post-instructional evaluation should be summarized in a report. Work with a professional to translate technical and statistical results into a business report format. Include the following information:
• What program was evaluated, and why
• How the evaluation was conducted, and by whom
• Findings and implications for continuation or modification
• Recommendations based on findings, budget, and time constraints
• Plan of action based on recommendations
As you proceed to implement any necessary revisions, continue to monitor the outcomes. The ISD process requires continuous improvement. Even if you reach a point with your instructional products where desired changes are occurring, you need to continue evaluation. New requirements, audiences, and specifications can change the focus of the current program. You will probably not conduct repeated formal evaluations. However, informal or formative evaluation should continue for the duration of the instructional program or job aid. After you have completed post-instructional evaluation, evaluate the quality of your efforts by using the formative evaluation checklist (see Table 5.7).
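The statistical analysis itself is best left to a professional, but the basic shape of a paired pre/post comparison is straightforward. The following is a minimal sketch, with hypothetical scores for eight learners, of computing a paired t statistic; the resulting value would be compared against a t table.

import statistics

# Hypothetical matched pre/post scores for the same learners (level 2 data).
pre = [60, 55, 70, 64, 58, 66, 61, 59]
post = [78, 70, 85, 80, 72, 83, 75, 77]

diffs = [b - a for a, b in zip(pre, post)]
mean_diff = statistics.mean(diffs)
sd_diff = statistics.stdev(diffs)
n = len(diffs)

# Paired t statistic; compare against a t table with n - 1 degrees of freedom.
t_stat = mean_diff / (sd_diff / n ** 0.5)
print(f"Mean gain: {mean_diff:.1f}, t = {t_stat:.2f}, df = {n - 1}")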
TABLE 5.7 Evaluation: post-instruction — formative evaluation checklist
ASK YOURSELF THESE QUESTIONS (answer each Y or N): Each of the following questions is addressed under major headings in this phase. Any "NO" answer should serve as an alarm that your post-instructional evaluation needs improvement!
1. Have you allowed sufficient time for the on-the-job application phase to occur before conducting a post-instructional evaluation?
2. Have you reviewed your FEA and task analysis for decisions about what individual and organizational changes are desired?
3. If using an existing product, have you reviewed available information on previous post-instructional evaluation studies?
4. Have you reviewed informal evaluation results from the on-the-job application phase for possible use in designing your measurement tools?
5. Have you chosen the appropriate measurement tools based on objectivity, accessibility, and cost?
6. Have you developed a research design for collecting data?
7. Have you estimated the cost of the evaluation?
8. Have you weighed the costs and benefits of conducting a post-instructional evaluation?
9. Have you obtained management commitment before beginning the evaluation?
10. Have you developed your measurement tools?
11. Have you collected your data, measuring levels 3 and 4?
12. Have you analyzed your results?
13. If desired changes have not occurred, have you determined why?
14. Have you reported your post-instructional evaluation results?
15. Have you made necessary improvements and continued to monitor the process?
REFERENCES
Kirkpatrick, D. L. (1967). "Evaluation of Training." In Craig, R. (Ed.), Training and Development Handbook. American Society of Training and Development.
SELECTED BIBLIOGRAPHY
Briggs, L. J. and Wager, W. W. (1981). Handbook of Procedures for the Design of Instruction. 2nd ed. Englewood Cliffs, NJ: Educational Technology Publications.
Knirk, F. G. and Gustafson, K. L. (1986). Instructional Technology. New York: Holt, Rinehart, & Winston.
Perry, S. B. and Robinson, E. J. (1987). Participative Techniques of Group Instruction. Princeton: Training House.
Phillips, J. J. (1983). Handbook of Training Evaluation and Measurement Methods. Houston: Gulf Publishing.
Renner, P. F. (1983). The Instructor's Survival Kit: A Handbook for Teachers of Adults. Vancouver, Canada: Training Associates.
Scientific Research Associates (1989). Instructor Training Curriculum: Delivery Skills. Chicago: Pergamon Press.
Worthen, B. R. and Sanders, J. R. (1987). Educational Evaluation. White Plains, NY: Longman.
6 Contract Training
Quite often, organizations do not have the ability to develop their own instructional material. When that happens, development of the specific training is "hired out." In fact, our experience tells us that most companies have hired out not only the development of the material but also the delivery of all Six Sigma requirements. On these occasions, the organization must be vigilant so that its wishes, concerns, and objectives are met. That is the focus of this chapter. We will try to answer the question of what should be done when the training, including the delivery, is bought from a third-party vendor. We call this process "specification" because it links all the phases of instructional design in the process of buying. This approach, we hope, provides guidance on how to professionalize every aspect of education and training, from front-end analysis of the problem through evaluation of program results. Specifications are an outline of the deliverables that any organization buying training material, especially Six Sigma material, should expect. If additional detail is required to implement these specifications, it should be obtained from the project manager. Each phase in these specifications may be elaborated on in more detailed descriptions that can define any aspect of your requirements. Here, we give some of the most essential requirements for success.
FRONT-END ANALYSIS
Problem-solving front-end analysis: the supplier shall perform a front-end analysis that includes the following outcomes:
• Description of procedures and sources used to collect information for the front-end analysis.
• Description of desired and actual performance levels or problem/gap defined in business-unit terms (e.g., direct labor, inventory, absenteeism, etc.).
• Description of possible causes of the problem based on technical factors (e.g., materials, equipment, environment), organizational factors (e.g., structure, rewards, feedback, compensation), and human factors (e.g., skill and knowledge levels).
• Determination of potential solutions, with supporting data.
• Determination of the best solution based on a business case including description of problem cost; cost of solution; how well the solution will
solve the problem; potential customer or stakeholder support; fit to organization's culture, business objectives, and continuous improvement policy; potential implementation barriers and related costs; and potential return on investment.
• Reporting of information to the organization including a description of how and why the FEA process began, identification of gaps, causes, operating consequences, personnel, jobs and costs involved, possible solutions, recommended solutions, how solutions will support continuous improvement, project scope and schedule, and appendices including back-up correspondence, budgets, data-gathering tools, etc.
Planning front-end analysis: the supplier shall perform a planning front-end analysis that includes the following outcome:
• Compliance with the organization's process improvement procedures.
TASK ANALYSIS
The supplier shall perform a task analysis that includes the following outcomes:
• Analysis of the audiences receiving the instruction or job aid, including primary and secondary audiences. Primary audiences include those going through the instruction or using the job aid. Secondary audiences include anyone whose support is necessary for successful performance by the primary audience, such as supervisors. Audience data, gathered from personnel records, surveys, etc., shall include information on demographics (e.g., age, gender, culture, etc.), capacity levels (e.g., intellect, cognitive style, physical development, etc.), competence levels (e.g., prior skills and training, experiential background, reading ability, languages spoken), and current skill and knowledge levels (relative to the current instructional program).
• Determination of the terminal objectives.
• Description of task steps (including all the sequenced major task steps, subtasks, sub-subtasks, etc.) required to expertly reach the terminal objective. For each major task step, include the necessary information outlined in Table 2.5. Gather task step information from accomplished performers, administrative checklists, flowcharts, interviews, observation, surveys, etc.
• Determination of a task hierarchy, based on the description of task steps.
• Description of instructional objectives for each major task step, subtask step (if needed), etc. Each instructional objective shall include a description of 1) the desired, observable task to be performed, 2) the standards by which the task accomplishment will be measured or evaluated for successful achievement, and 3) the conditions or circumstances under which the task will be performed.
• Classification of each instructional objective by storage medium (i.e., either instruction or job aid). For example, some instructional objectives will not require a learner to attend an instructional program; sometimes all the learner needs is a job aid.
• Development of assessment instruments, to be used for pre- and post-assessments. Items on the assessment instrument must match the instructional objectives.
Product survey: the supplier shall perform a product survey that includes the following outcomes:
• Survey of existing instructional products, locating those with the potential of meeting requirements outlined in the corresponding front-end and task analyses.
• Elimination of unsuitable products, including those products with a poor reputation, unreasonable prices, and inappropriate delivery media.
• Evaluation of remaining suitable products according to Table 6.1.
• A cost-benefit analysis for modifying an existing product or products, including description of all necessary changes, and comparison of costs to modify (by the company or by a supplier) with those to develop a customized program (by the company or by a supplier).
DESIGN OF INSTRUCTION
The supplier shall design instruction that includes the following outcomes:
• Development of a logically sequenced, technically accurate content outline based on the task hierarchy from the corresponding task analysis.
• Development of a course strategy, including course title, description of lessons, modules, etc.
• Description of instructional methods (e.g., lecture, group discussion, one-on-one, self-paced, simulation, role play, case study, on-the-job training, fieldwork, etc.) for each instructional objective. Instructional method decisions should be based on design and development principles (for example, see Table 6.2), instructional objectives, audience characteristics, and available resources.
• Description of instructional media (e.g., print, visual aids, audio, audiovisual, computerized, physical objects, audience response systems, etc.) based on design and development principles, instructional objectives, audience characteristics, and available resources.
• Determination of "A Plan for Development" of the instructional product, considering the method and media chosen, and design and development principles. Development specifications shall include when, where, and how the following instructional elements shall be added to the instructional product: examples, drill and practice sessions, activities, illustrations, charts and diagrams, exhibits, simulations, reviews, summaries, and remediation.
TABLE 6.1 Criteria for evaluating products
PRODUCT BEING EVALUATED: rate each criterion A (Acceptable), M (Modification Needed), or NA (Not Acceptable).
AUDIENCE: The product should meet all requirements for primary and secondary audiences. Examine readability, maturity level, cultural fit, etc.
OBJECTIVES: The product's objectives should be similar to yours. If objectives are not explicitly stated, reconstruct each using the product's content, test questions, exercises, etc.
TASK INFORMATION: If task steps are not explicitly identified, examine the product's content for task information. Make sure the product's task steps match your task analysis (TA) steps.
STORAGE MEDIUM: Your requirements may call for job aids, yet most products are geared to instruction only. Determine the adaptability of the product to your storage medium requirements.
GENERAL ANALYSIS: Review Chapter 4, Development of Materials, for a summary of learning principles. Determine the extent to which the product uses these principles.
ASSESSMENT INSTRUMENTS: Many products will not provide you with assessment instruments. Make sure tests are included and that test items match your objectives.
VALIDATION INFORMATION: Has the product resulted in real, documented performance gains? Does the research sample fit your audience?
USER REFERENCES: What do other companies or individuals within your organization think about the product?
DESIGN OF JOB AIDS
The supplier shall design job aids that include the following outcomes:
• Development of a logically sequenced, technically accurate content outline based on the task hierarchy from the corresponding task analysis.
• Description of instructional media (e.g., print, visual aids, audio, audiovisual, computerized, physical objects, audience response systems, etc.) based on the design and development principles in Table 6.2, instructional objectives, audience characteristics, and available resources.
TABLE 6.2 Design and development principles
SEQUENCE MATERIAL: Gain learner attention. Inform the learner of the objectives. Present the desired outcome. Demonstrate the desired outcome. Ask for performance. Give feedback on performance.
MAKE IT INTERACTIVE: Incorporate questions into materials. Devote 70% of time to discussion formats, active practice sessions, and immediate feedback.
KEEP IT SIMPLE: Present only one idea at a time at the appropriate level of difficulty. Use logical sequencing.
APPEAL TO THE SENSES: Use color to draw attention, mnemonics to help retention, and multiple delivery systems to add variety and interest. Create hands-on learning experiences.
PROMOTE UNDERSTANDING AND REINFORCEMENT: Use examples, nonexamples, and analogies. Elaborate on the content. Restate in greater detail or in different ways. Use overviews, summaries, and reviews. Use imagery, contrasts, and comparisons. Introduce new concepts at the beginning and go over them in detail later. Develop outlines or job aids to reinforce principles and concepts learned.
PROMOTE ACCEPTANCE: Connect instruction to learners' personal or professional goals, interests, experiences, or present job. Combine new material with learners' current knowledge base. Stress learners' ability to be successful. Use lots of visuals. Give learners choices regarding pace, activities, etc., if possible.
PROMOTE PRACTICE: Provide numerous opportunities for learners to practice what they learned. Provide remediation opportunities.
• Plan for development of the job aid based on design and development principles in Table 6.2, including determination of format (e.g., cookbook, flowchart, decision table, etc.), and inclusion of various instructional elements (e.g., illustrations, examples, charts, and diagrams). Determination shall be made as to when, where, and how each instructional element should be added to the job aid content.
DEVELOPMENT OF MATERIALS
The supplier shall develop the instruction or job aid with the following outcomes:
TABLE 6.3 Development principles
PRINT: Place illustrations close to referenced text. Label and caption all illustrations, etc. Keep "cues" (boldface, etc.) to 10% or less of text. Place critical information either first or last in sentences or lists. Use color coding for easy access. Write the procedure name at the top of each page. Indent, bullet, and number steps and substeps. Use three to four sentences per paragraph. Use the same vertical and horizontal spacing throughout. Use lots of white space.
VISUAL AIDS: Keep visuals short and simple and text large and legible, giving details on a separate handout. Use no more than eight lines per visual and eight words per line. Use short titles, borders, and white space. Use the same fonts throughout, except for titles. Integrate graphics and color.
AUDIO: Use short pauses; change volume, pitch, and pace to make key words and phrases stand out or to maintain attention. Use short phrases and limit unwanted sounds. Make sure music does not compete or distract. Make sure narration is clear and can be heard.
AUDIO/VISUAL: Refer to the audio and visual sections above. "Chunk" information instead of presenting a full day's lesson plan in one session. Use neutral fashion and decor. Use bold video graphics for visibility.
COMPUTER-RELATED: Program "easy access" into each lesson. Use boxes, color, and highlights to direct attention. Allow learner control of pacing. Allow adequate learner response time. Limit the amount of text on screen. Present one idea per screen, one or two sentences long.
• Receipt of format approval from appropriate stakeholders after development of a brief sample of the finished product, per the design or job aid plans. Use the development principles in Table 6.3 when constructing the brief sample.
• Expansion of brief formats into complete rough drafts, after receiving format approval. Use prototypes to show what is intended if some features are too costly to produce in rough draft form. Refer to Table 6.4 when creating rough drafts.
• Determination of technical and editorial accuracy of each rough draft, using a sample of stakeholders, sponsors, subject matter experts, and potential customers, including answers to the following questions:
TABLE 6.4 Forms of rough drafts
PRINT
Examples: textbooks, workbooks, manuals, programmed texts, placards, and any materials for participation/practice activities.
Form of rough draft: written draft of any text materials.
VISUAL AIDS
Examples: charts, diagrams, graphs, illustrations, drawings, photographs, exhibits, projected images, overheads, slides, etc.
Form of rough draft: sketches with accompanying text, if any.
AUDIO
Examples: radio, cassettes, reel-to-reel, disc, records.
Form of rough draft: scripts and/or musical score.
AUDIOVISUAL
Examples: filmstrips, television, motion pictures, video, lecture, lab, or other demonstrations.
Form of rough draft: storyboards, scores, and scripts.
COMPUTERIZED MEDIA
Examples: computer-based instruction, computer management of instruction, computer-supported learning aids, interactive video, audience response systems.
Form of rough draft: frames, storyboards, computer programs, simulations.
PHYSICAL OBJECTS
Examples: simulated environments, job aids, etc.
Form of rough draft: simulations or rough models.
PARTICIPATION
Examples: group discussion, role plays, case studies, on-the-job training, field trips, internships, structured environments, one-on-one instruction.
Form of rough draft: outline of procedures, written draft of text.
• Does content match the content outline?
• Is content sequenced logically?
• Is content technically accurate?
• Is content clear, concise, and understandable?
• Does content teach to the test?
• Does content match objectives and audience characteristics?
• Can FEA goals be reached given the content?
• Are instructional elements (examples, illustrations, etc.) properly placed and technically accurate; do they match, support, and clarify content?
• Is remediation provided and acceptable?
• Are instructions clear?
• Is audio/video clear, appropriate, and understandable?
• Are punctuation, grammar, etc., accurate?
• Determination of revisions according to one-on-one and small group reactions.
• Development of the final product following revision suggestions and design and development plans.
EVALUATION: PILOT TESTING
The supplier shall pilot test the instruction or job aid, with the following outcomes:
• Assessment of the instructional products according to Kirkpatrick's (1994, 1996) levels of evaluation, levels 1 and 2, including information on audience reactions (How did learners feel about the instruction/job aid?) and learning (What facts, techniques, skills, or attitudes did learners understand and retain?).
• Selection of a pilot test sample group based on a randomly drawn, representative sample of the total audience population. If a random sample is not feasible, choose an audience that represents an equal mix of the audience's characteristics.
• Assessment of baseline levels of audience knowledge and skill using the assessment instruments developed in task analysis.
• Delivery of the instruction or job aid according to the delivery plan (for example, if delivery is planned as individualized, deliver the instruction in an individual manner). The pilot audience shall be made aware they are participating in a pilot and that their feedback is necessary. The supplier shall provide training for those delivering the instructional product, if necessary.
• Assessment of audience reactions to instruction using an audience response questionnaire (see typical content of a questionnaire in Table 6.5).
• Assessment of audience reactions to job aids using questions such as: Did the aid help in performing the job? Did the aid make your job easier? Did the aid help to solve any on-the-job problems? What improvements could be made to the job aid? Did the aid include enough information?
• Assessment of learning gains using the pre- and post-instruction knowledge and skill-assessment instruments developed in task analysis. Use a "90/90" measure for acceptable pilot testing results, meaning that 90% of the audience has learned 90% of the material upon completion of the instruction, or that 90% of the people successfully perform a newly learned skill 90% of the time. (A quick check of this criterion is sketched after this list.)
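The "90/90" criterion lends itself to a direct check. The following is a minimal sketch, with hypothetical pilot post-test scores, of testing whether 90% of the pilot audience mastered 90% of the material.

# Minimal 90/90 check on hypothetical pilot post-test scores (percent correct).
scores = [95, 92, 90, 97, 91, 94, 85, 96, 93, 90]

mastered = [s for s in scores if s >= 90]       # learned at least 90% of material
share_mastered = len(mastered) / len(scores)

print(f"{share_mastered:.0%} of the audience reached the 90% mastery level")
print("90/90 criterion met" if share_mastered >= 0.90 else "90/90 criterion NOT met")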
DELIVERY OF MATERIALS
The supplier shall deliver the instruction or job aid, with the following outcomes:
TABLE 6.5 Typical audience response questionnaire
Course name: identify the course you are attending.
Instructor name: identify the instructor's name.
Date: write the date of the training.
Directions for the questionnaire: how to answer the questions.
Program content: several questions are asked about the program in general, including but not limited to: objectives, length, material, new information, on-the-job application, and general understanding of the course.
Instructional material: several questions are asked about clarity and usefulness of materials.
Instructional presentation: several questions are asked about the instructor, including but not limited to: pace, time given for exercises, knowledge, response to the participants' needs, presentation style and clarity, enthusiasm, preparation and organization, and whether or not the instructor encourages participation.
General evaluation: several questions are asked about the transfer of knowledge, expectations, future recommendations, appropriate prerequisites, appropriateness of facilities, and the participant's opinion as to whether or not the course met the organizational objectives.
Demographics: generally, this section is reserved for personal information from the participant. Typical requests are: age, education, experience in current position, experience with the company, department of employment, and how you heard about the training.
General comments: this section is reserved for the participant who wants to elaborate on his or her comments. Space is provided for: things you liked, things you disliked, topics on which you would like more information, other comments, and whether or not you need to be contacted by anyone to further discuss your concern(s). If yes, provide an e-mail address or phone number.
(Note: a) Questionnaires do not ask for a participant's name. b) Items are usually phrased on a Likert-type scale (1–4 or 1–6). Try to use an even number of points so that the participant is forced to choose the direction of his or her response; if an odd number of points is offered, the majority of people will choose the middle number, which is not good at all from an evaluation perspective because it does not communicate the participant's true preference. c) Sometimes the responses are adjectives reflecting, for example, agreement or disagreement with the given statement, such as: strongly agree, agree, mildly agree, mildly disagree, disagree, strongly disagree, etc. A simple way to tally such responses is sketched below.)
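For tallying responses on such an even-point scale, something as simple as the following sketch will show whether answers lean toward agreement or disagreement. The responses below are hypothetical values on a 1–6 scale.

from collections import Counter

# Hypothetical responses to one questionnaire item
# (1 = strongly disagree ... 6 = strongly agree; no neutral midpoint).
responses = [5, 6, 4, 5, 3, 6, 5, 4, 2, 5, 6, 4]

counts = Counter(responses)
for point in range(1, 7):
    print(f"{point}: {'*' * counts.get(point, 0)}")

favorable = sum(1 for r in responses if r >= 4) / len(responses)
print(f"Favorable (4-6): {favorable:.0%}")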
• Description of the environment in which the instruction or job aid will be delivered (e.g., classroom, laboratory, etc.).
• Description of expected patterns of use (e.g., hours, climate, season, etc.).
• Determination of specifics about where, when, and how the instruction or job aid will be delivered.
• Development of "A Plan for Management Support."
• Development of "A Plan for Audience Acceptance" of instruction or job aids, considering audience characteristics, instructor, and environmental characteristics.
• Development of "A Finalized Delivery Plan," including description of facility and sites, equipment, supplies, schedule, instructors, and miscellaneous items such as security clearances, special transportation, parking, etc.
• Development of "A Plan for Ongoing Evaluation" for delivery of the instruction or job aid.
ON-THE-JOB APPLICATION
Perhaps the most important element of any training is the issue of transferring knowledge from training to the workplace. In this stage, it is imperative to know in advance how the training will be implemented; this will also help determine the cost-benefit of the training in level 4 evaluations. Therefore, the supplier shall deliver the instruction or job aid with the following outcomes:
• Development of "A Plan for Application to the Job," including analysis of the workplace environment.
• Determination of the relevancy of the instructional product to the job.
• Description of opportunities in instruction for practice, feedback, and remediation.
• Development of a learner/supervisor agreement (see Table 6.6).
(Note: the learner/supervisor agreement, we admit, is hardly ever used, and that is perhaps why so many excellent training programs fail. Its intent is to sensitize both learner and supervisor to the idea that this training is not a free day away from daily tasks but rather an investment in doing the tasks better or in innovative ways. This agreement puts both on notice as to what is expected and what the specific deliverables are.)
EVALUATION: POST-INSTRUCTION
The supplier shall evaluate the instruction or job aid, with the following outcomes:
• Development of "A Plan for Evaluation," including specific learner or organizational changes to be measured, specific data collection instruments, research design (see Table 6.7), and estimated cost of evaluation.
TABLE 6.6 Learner/supervisor post-instructional agreement
• Expected post-instructional behavior
• Resources needed to attain expected behavior
• Supervisory support needed
• Opportunities for performance
• Expected obstacles
• Deadline for expected performance
• Employee comments
• Supervisor comments
• Signatures
TABLE 6.7 Research design action plan
• Desired change to be measured
• Measurement tool(s)
• Sample source
• Sample size
• Sample location
• Data collection time plan
• Data collection personnel
• Statistical analysis procedures
• Statistical analysis time plan
• Statistical analysis personnel
• Cost-benefit analysis of performing the post-instructional evaluation.
• Development of applicable measurement tools.
• A final report including what program was evaluated and why; how the evaluation was conducted and by whom; findings and implications (including statistical analyses); recommendations based on findings, implications, budget, and time constraints; and a plan of action.
• Development of "A Plan for Continuous Improvement."
REFERENCES
Kirkpatrick, D. L. (1994). Evaluating Training Programs: The Four Levels. Berrett-Koehler, San Francisco.
Kirkpatrick, D. L. (January 1996). Revisiting Kirkpatrick's four-level model. Training and Development Journal, pp. 54–59.
SELECTED BIBLIOGRAPHY
Bailey, R. W. (1982). Human Performance Engineering: A Guide for System Designers. Prentice Hall, Englewood Cliffs, NJ.
Brinkerhoff, R. O. (1987). Achieving Results from Training. Jossey-Bass, San Francisco.
Kaufman, R. and F. W. English (1979). Needs Assessment: Concept and Application. Educational Publications, Englewood Cliffs, NJ.
Kirkpatrick, D. L. (November–December 1959/January–February 1960). Techniques for evaluating training programs. Journal of the American Society of Training Directors, pp. 34–36; 54–57.
Knowles, M. S. (1980). The Modern Practice of Adult Education: From Pedagogy to Andragogy. Follett, Chicago.
Stamatis, D. H. (1997). TQM Engineering Handbook. Marcel Dekker, New York.
Part II Training for the DMAIC Model
7
Six Sigma for Executives
The intent of the executive training is to give executives an overview of the Six Sigma methodology. It is geared toward the leadership of the organization who will either approve the program in the organization or review and manage it. As a consequence, the focus is on a very high-level explanation of the methodology and the expectations and hardly any specificity about tools. Some focus is given to the significance of the project; however, even this includes no detail. It is often suggested that simple exercises may be sprinkled throughout the course to make the key points more emphatic. Traditional exercises may be used to define a process, to explain the cost of quality in the organization, to identify the customer, etc. A central issue for this training is the notion of customer satisfaction and organizational profitability. Because organizations and their goals are quite different, we will provide the reader with a suggested outline of the training material for this executive session. It should last 1 day (sometimes 2), and the level of difficulty depends on the participants. The detailed information may be drawn from the first six volumes of this series. A typical executive program may want to focus on the following instructional objectives. The reader will notice that in some categories, there are no objectives. This is because for this stage of training, the material may be overwhelming and quite often unnecessary.
INSTRUCTIONAL OBJECTIVES — EXECUTIVES

RECOGNIZE

CUSTOMER FOCUS
• Provide a definition of the term Customer Satisfaction.
• Understand the need–do interaction and how it relates to customer satisfaction and business success.
• Provide examples of the y and x terms in the expression y = f(x).
• Interpret the expression y = f(x).
BUSINESS METRICS • Define the nature of a performance metric. • Identify the driving need for performance metrics. • Provide a listing of several key performance metrics.
• Identify the fundamental contents of a performance metrics manual.
• Recognize the benefits of a metrics manual.
• Understand the purpose and benefits of improvement curves.
• Explain how a performance metric improvement curve is used.
• Explain what is meant by the phrase Six Sigma Rate of Improvement.
• Explain why a Six Sigma improvement curve can create a level playing field across an organization.
• State some problems (or severe limitations) inherent in the current cost-of-quality theory.
• Identify and define the principal categories associated with quality costs.
• Compute the cost-of-quality (COQ) given the necessary background data (see the sketch following this list).
• Provide a detailed explanation of how a defect can impact the classical COQ categories.
• Explain the benefit of plotting performance metrics on a log scale.
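The COQ computation above can be made concrete with a few lines of arithmetic. The following sketch is ours, not part of the original curriculum, and every figure in it is hypothetical; it simply totals the four classical quality-cost categories and expresses them as a fraction of sales.

```python
# Hypothetical COQ illustration: the four classical quality-cost categories.
quality_costs = {
    "prevention": 120_000,        # training, planning, process design
    "appraisal": 260_000,         # inspection, test, audits
    "internal_failure": 540_000,  # scrap, rework, retest
    "external_failure": 310_000,  # warranty, returns, complaints
}
annual_sales = 25_000_000

coq = sum(quality_costs.values())
print(f"Total cost of quality: ${coq:,}")
print(f"COQ as % of sales: {coq / annual_sales:.1%}")
for category, cost in quality_costs.items():
    print(f"  {category:>16}: {cost / coq:.0%} of COQ")
```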
SIX SIGMA FUNDAMENTALS
• Recognize the need for change and the role of values in a business.
• Recognize the need for measurement and its role in business success.
• Understand the role of questions in the context of management leadership.
• Provide a brief history of Six Sigma and its evolution.
• Understand the need for measuring those things that are critical to the customer, business, and process.
• Define the various facets of Six Sigma and why Six Sigma is important to a business.
• Identify the parts-per-million defect goal of Six Sigma.
• Define the magnitude of difference between three, four, five, and six sigma.
• Recognize that defects arise from variation.
• Define the three primary sources of variation in a product.
• Describe the general methodologies that are required to progress through the hierarchy of quality improvement.
• Define the phases of breakthrough in quality improvement.
• Identify the values of a Six Sigma organization as compared to a four-sigma business.
• Understand the key success factors related to the attainment of Six Sigma.
• Understand why inspection and test are nonvalue-added to a business and serve as a roadblock for achieving Six Sigma.
• Understand the difference between the terms process precision and process accuracy.
• Provide a very general description of how a process capability study is conducted and interpreted.
• Understand the basic elements of a sigma benchmarking chart.
• Interpret a data point plotted on a sigma benchmarking chart.
• Understand the difference between the idea of benchmark, baseline, and entitlement cycle time.
• Describe how every occurrence of a defect requires time to verify, analyze, repair, and reverify. • Understand that work-in-process (WIP) is highly correlated to the rate of defects. • Rationalize the statement: the highest quality producer is the lowest cost producer. • Understand the fundamental nature of quantitative benchmarking on a sigma scale of measure. • Understand that global benchmarking has consistently revealed four sigma as average while best-in-class is near the Six Sigma region. • Draw first-order conclusions when given a global benchmarking chart. • Provide a brief description of the five sigma wall, what it is, why it exists, and how to get over it. • State the general findings that tend to characterize or profile a four sigma organization. • Explain how the sigma scale of measure could be employed for purposes of strategic planning. • Recognize the cycle-time, reliability, and cost implications when interpreting a sigma benchmarking chart. • Understand how a Six Sigma product without a market will fail, while a Six Sigma product in a viable market is virtually certain to succeed. • Provide a qualitative definition and graphical interpretation of the standard deviation. • Understand the driving need for breakthrough improvement vs. continual improvement. • Understand the difference between the idea of benchmark, baseline, and entitlement cycle time. • Provide a brief description for the outcome 1 – Y.rt. • Recognize that the quantity 1 + (1 – Y.rt) represents the number of units that must be produced to extract one good unit from a process. • Describe what is meant by the term mean time before failure (MTBF). • Interpret the temporal failure pattern of a product using the classical bathtub reliability curve. • Interpret an array of sigma benchmarking charts. • Define the three primary sources of variation in a product. • Provide a very general description of how a process capability study is conducted and interpreted. • Explain how process capability impacts the pattern of failure inherent in the infant mortality rate. • Provide a rational definition of the term latent defect and how such defects can impact product reliability. • Explain how defects produced during manufacture influence product reliability, which, in turn, influences customer satisfaction. • Recognize that the sigma scale of measure is at the opportunity level, not at the system level. • Define the two primary components of process breakthrough.
• Provide a brief description of the four phases of process breakthrough (i.e., measure, analyze, improve, control). • Understand the basic nature of statistical process control charts and the role they play during the control phase of breakthrough. • Explain how statistically designed experiments can be used to achieve the major aims of Six Sigma from a quality, cost, and cycle-time point of view. • Understand that the term sigma is a performance metric that applies only at the opportunity level.
DEFINE

NATURE OF VARIABLES
• Explain the nature of a leverage variable and its implications for customer satisfaction and business success. • Explain what a dependent variable is and how this type of variable fits into the Six Sigma breakthrough strategy. • Explain what an independent variable is and how this type of variable fits into the Six Sigma breakthrough strategy.
OPPORTUNITIES FOR DEFECTS
• Provide a rational definition of a defect. • Provide a definition of the term opportunity for defect, recognizing the difference between active and passive opportunities. • Recognize the difference between uniform and random defects. • Compute the defect-per-unit metric given a specific number of defects and units produced.
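As a quick illustration of the defect-per-unit objective, here is a minimal sketch (ours, with made-up counts) that computes DPU, defects per opportunity (DPO), and defects per million opportunities (DPMO):

```python
# Defects-per-unit and related metrics from observed counts (illustrative numbers).
defects = 34          # total defects observed
units = 1_000         # units produced
opportunities = 12    # defect opportunities per unit (active opportunities)

dpu = defects / units
dpo = defects / (units * opportunities)
dpmo = dpo * 1_000_000

print(f"DPU = {dpu:.4f}, DPO = {dpo:.6f}, DPMO = {dpmo:.0f}")
```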
CTX TREE • Define the term critical to satisfaction characteristic (CTS) and its importance to business success. • Define the term critical to quality characteristic (CTQ) and its importance to customer satisfaction. • Define the term critical to process characteristic (CTP) and its importance to product quality.
PROCESS MAPPING • Construct a process map using standard mapping tools and symbols. • Explain how process maps can be linked to the CT Tree to identify problem areas. • Explain how process maps can be used to identify constraints and determine resource needs. • Define the key elements of a process map.
PROCESS BASELINES Nothing specific
SIX SIGMA PROJECTS • Explain why the five key planning questions are so important to project success. • Create a set of criteria for selecting and scoping Six Sigma black belt projects.
SIX SIGMA DEPLOYMENT
• Provide a brief description of the nature of a Six Sigma black belt (SSBB).
• Provide a brief description of the nature of a Six Sigma champion (SSC).
• Describe the roles and responsibilities of a Six Sigma champion.
• Provide a brief description of the key implementation principles and identify the principal deployment success factors.
• List all of the planning criteria for constructing a Six Sigma implementation and deployment plan.
• Construct a generic milestone chart that identifies all of the activities necessary for successfully managing the implementation of Six Sigma.
• Develop a business model that incorporates and exploits the benefits of Six Sigma.
• Describe the role and responsibilities of a Six Sigma black belt.
• Recognize the importance of, and provide a description for, the plan-train-apply-review (PTAR) learning process.
• Provide a brief description of the nature of a Six Sigma master black belt (SSMBB).
• Describe the roles and responsibilities of a Six Sigma master black belt.
• Understand the Six Sigma black belt instructional curriculum.
MEASURE

Scales of Measure
• Identify the four primary scales of measure and provide a brief description of their unique characteristics.
Data Collection
• Provide a specific explanation of what is meant by the term replicate in the context of a statistically designed experiment.

Measurement Error
Nothing specific
Statistical Distributions
• Construct and interpret a histogram for a given set of data.
• Understand what a normal distribution and a typical normal histogram are and how they are used to estimate defect probability.
• Construct a histogram for a set of normally distributed data and locate the data on a normal probability plot.

Static Statistics
• Provide a qualitative definition and graphical interpretation of the variance.
• Compute the sample standard deviation, given a set of data.
• Explain why a sample size of n = 30 is often considered ideal (in the instance of continuous data).
• Compute the mean, standard deviation, and variance for a set of normally distributed data.
• Provide a graphical understanding of the standard deviation and explain why it is so important to Six Sigma work.

Dynamic Statistics
• Explain what phenomenon could account for a differential between the short-term and long-term standard deviations.

ANALYZE

Six Sigma Statistics
• Identify the key limitations of the performance metric Final Yield (i.e., output/input).
• Identify the key limitations of the performance metric First-Time Yield (Y.ft).
• Compute the throughput yield (Y.tp) given an average first-time yield and the number of related defect opportunities (see the sketch following this list).
• Provide a rational explanation of the differences between product yield and process yield.
• Explain why the performance metric Rolled-Throughput Yield (Y.rt) represents the probability of zero defects.
• Compute the probability of zero defects (Y.rt) given a specific number of defects and units produced.
• Understand the impact of process capability and complexity on the probability of zero defects.
• Provide a brief description of how one would implement and deploy the performance metric Rolled-Throughput Yield (Y.rt).
• List at least five separate sources that could offer the data necessary to estimate a sigma capability.
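For the throughput-yield objectives above, one relation commonly used in the Six Sigma literature models the probability of a defect-free unit as a Poisson zero: Y = exp(-dpu). A minimal sketch, with illustrative counts:

```python
import math

# Throughput yield as the probability of zero defects (Poisson model):
# Y.tp = exp(-dpu). Illustrative numbers only.
defects, units = 52, 400
dpu = defects / units                 # defects per unit
y_tp = math.exp(-dpu)                 # P(a unit escapes with zero defects)
print(f"DPU = {dpu:.3f} -> throughput yield = {y_tp:.3f}")
```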
Process Metrics
Nothing specific

Diagnostic Tools
Nothing specific

Simulation Tools
Nothing specific

Statistical Hypotheses
Nothing specific
CONTINUOUS DECISION TOOLS • Provide a general description of the term experimental error and explain how it relates to the term replication. • Provide a general description of one-way analysis of variance and discuss the role of sample size. • List the principal assumptions underlying the use of ANOVA and provide a general understanding of their practical impact should they be violated.
DISCRETE DECISION TOOLS • List and describe the principal sections of a customer satisfaction survey and how they can be used to link the process to the customer. • Provide a brief explanation of the chi-square statistic and the conditions under which it can be applied.
IMPROVE

EXPERIMENT DESIGN TOOLS
• Provide a general description of what a statistically designed experiment is and what such an experiment can be used for.
• Recognize the principal barriers to effective experimentation and outline several tactics that can be employed to overcome such barriers.
• Describe the two primary components of an experimental system and their related subelements.
• Outline a general strategy for conducting a statistically designed experiment and the resources needed to support its execution and analysis.
• State the major limitations associated with the one-factor-at-a-time approach to experimentation and offer a viable alternative.
ROBUST DESIGN TOOLS • Explain what is meant by the term robustness and explain how this understanding translates to experimental design and process tolerancing.
EMPIRICAL MODELING TOOLS Nothing specific
TOLERANCE TOOLS Nothing specific
RISK ANALYSIS TOOLS • Demonstrate how the Six Sigma Risk Assessment methodology can be applied to engineering, manufacturing, transactional, and commercial problems. • List the disadvantages associated with worst-case analysis and compute the probability of worst case given the process capability data.
DFSS PRINCIPLES • Understand the fundamental ideas underlying the notion of manufacturability.
CONTROL

PRECONTROL TOOLS
• Develop a precontrol plan for a given CTQ and explain how such a plan can be implemented.
CONTINUOUS SPC TOOLS • Explain what is meant by the term statistical process control and discuss how it differs from statistical process monitoring.
DISCRETE SPC TOOLS Nothing specific
OUTLINE OF ACTUAL EXECUTIVE TRAINING CONTENT — 1 DAY Based on the above general objectives, it is recommended that the training follow the content format described below. By no means is this the only format possible. In fact, we provide two options. The first is the traditional 1-day orientation, and the second is a 2-day overview with more details. However, we believe that the
content for both options follows a hierarchical sequence, and in this way we have attempted to accommodate the learning process. (The reader will notice that for the executive training, we make no distinction between transactional, technical, or manufacturing in the training because the people responsible (the executives) are one and the same for all three categories. Therefore, the material of the training is the same.) Introductions Agenda Ground rules Exploring Our Values
MAXIMIZE CUSTOMER VALUE The value of delivering outstanding quality consistently.
MINIMIZE PROCESS COSTS
Dramatically reduce waste and inefficiency. In other words, Six Sigma properly applied helps your company achieve operational excellence. Improperly applied, it becomes the "program-of-the-month" that fails to fully engage the commitment of valuable resources.
SIX SIGMA LEADERSHIP
Six Sigma success starts at the top, with managers and leaders who understand that Six Sigma is more than statistical tools and black belts: it is a philosophy of organizational profitability and customer satisfaction.
How Six Sigma can and should be applied in your business environment.
The resources needed to build your Six Sigma infrastructure.
Actions required to achieve short- and long-term Six Sigma success.
THE SIX SIGMA DMAIC MODEL Define Measure Analyze Improve Control
HOW SIX SIGMA FITS What Six Sigma is and is not. How do I know Six Sigma is right for my organization?
LEADERSHIP PREREQUISITES Communicating the vision, strategies, and plans. Developing an operational excellence strategy. Establishing metrics to drive and gauge continuous improvement.
DEPLOYMENT INFRASTRUCTURE Project selection Candidate selection Roles and responsibilities Champions, black belts, green belts, HR, finance Training and project support logistical considerations
SUSTAINING THE GAINS
Creating a learning organization. Establishing a knowledge sharing discipline.
PROJECT REVIEW GUIDELINES
If time permits, it is strongly suggested to review the project guidelines. They are:

Define/Measure
• Identify CTQs
• Ys (KPOVs) and Xs (KPIVs)
• C&E matrix
• C&E diagram
• Data collection plan
• Measurement system analysis
• Pareto
• Histogram/box plot
• Process baseline (performance)
• Process entitlement
• Capability
• FMEA
• COPQ

Analyze
• Benchmarking
• Multivariate study
• Hypothesis testing
• Regression analysis
• 1-sample t test
• 2-sample t test
• Analysis of variance (both means and variances)
• Analysis of means
• Proportion test
• Chi square
• ID key factors
Improve • ID KPIV levels • Choose experimental design • Fractional factorial • Full factorial • Replication • Main effects plot • Interactions plot Control • Control charts • FMEA • Cost review
ALTERNATIVE SIX SIGMA EXECUTIVE TRAINING — 2 DAYS Introductions Agenda Ground rules Exploring our values • Comparing value systems • Behavior and values • Improving business performance by improving quality and consistently meeting customer expectations
MEASUREMENT • Measuring inputs, not just outputs • Reducing defects, by improving process and product, to help achieve business objectives • Measurements get attention • Performance metrics reporting • What do we measure now? • What numbers get the most attention in your area? • What quality measurements do we have? • How do we use these measures? Critical to satisfaction
MAXIMIZING THE CUSTOMER-SUPPLIER RELATIONSHIP
• Deriving value from the need–do interaction • Maximizing the interaction • Supplier strives for performance on cycle time, cost and defects to meet customers’ increasing expectations on delivery, price and quality. • Linking customer needs and what we do • The overall perspective...
THE CLASSICAL VS. THE SIX SIGMA PERSPECTIVE OF YIELD
• Measuring first-pass yield
• Final yield (Yfinal)
• First-time yield (YFT)
• Rolled-throughput yield
• Product A is produced in three consecutive (independent) steps
• Calculating normalized yield
• Normalized yield is the average yield-per-step of a sequential process... (see the sketch following this list)
• Six Sigma breakthrough challenge
• The hidden factory and rolled yield
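The normalized-yield item lends itself to a short worked example. This sketch (ours; the step yields are invented) rolls three independent step yields into Y.rt and then recovers the average yield per step:

```python
# Normalized yield: the average yield per step of a k-step sequential process.
step_yields = [0.98, 0.95, 0.97]   # hypothetical first-time yields per step

y_rt = 1.0
for y in step_yields:
    y_rt *= y                      # rolled-throughput yield (independent steps)

k = len(step_yields)
y_norm = y_rt ** (1 / k)           # normalized (per-step) yield
print(f"Y.rt = {y_rt:.4f}, normalized yield = {y_norm:.4f}")
```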
TRADITIONAL YIELD VIEW

THE TWO TYPES OF DEFECT MODELS
• Uniform defect: the same type of defect appears within a unit of product; e.g., wrong type of steel. • Random defect: the defects are intermittent and unrelated; e.g., flaw in surface finish.
PROCESS CHARACTERIZATION
• Mean: the arithmetic average of a set of values
• Variance: the average of the squared differences between each measurement and the mean (see the sketch following this list)
• Standard deviation: the square root of the variance. As the standard deviation increases, DPM increases.
• Normal distribution: behavior of a process in the long term
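These three statistics are easy to demonstrate with a small sample. A minimal sketch using Python's standard library (the data are hypothetical):

```python
import statistics

# Basic process characterization for a small sample (hypothetical data).
data = [14.2, 14.5, 13.9, 14.1, 14.6, 14.3, 14.0, 14.4]

mean = statistics.fmean(data)
variance = statistics.variance(data)   # sample variance (n - 1 denominator)
stdev = statistics.stdev(data)         # sample standard deviation

print(f"mean = {mean:.3f}, variance = {variance:.4f}, std dev = {stdev:.4f}")
```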
THE FOCUS OF SIX SIGMA — CUSTOMER SATISFACTION AND ORGANIZATIONAL PROFITABILITY
• Y = f(x) • The leverage principle • Three variation reduction strategies
• Six Sigma breakthrough strategy • DMAIC • Define • Measure • Analyze • Improve • Control
DEFINITION OF A PROBLEM

ROLES AND RESPONSIBILITIES
• Roles of an executive • Establish the vision — why are we doing Six Sigma? • Articulate the business strategy — how does Six Sigma support the business strategy? • Provide resources • Remove roadblocks/buffer conflicts
ROLES OF A CHAMPION
• Develop a vision for the organization
• Create and maintain passion
• Develop a model for a perfect organization
• Facilitate the identification and prioritization of projects
• Develop the strategic decisions in the deployment of Six Sigma around timing and sequencing of manufacturing, transactional, and new-product focus
• Extend project benefits to additional areas
• Communicate and market the breakthrough strategy process and results
ROLES OF THE MASTER BLACK BELT
• Be the expert in the tools and concepts
• Develop and deliver training to various levels of the organization
• Certify the black belts (BBs)
• Assist in the identification of projects
• Coach and support BBs in project work
• Participate in project reviews to offer technical expertise
• Partner with the champions
• Demonstrate passion around Six Sigma
• Share best practices
• Take on leadership of major programs
• Develop new tools or modify old tools for application
• Understand the linkage between Six Sigma and the business strategy
ROLES OF THE BLACK BELT
• Knowledgeable of the breakthrough strategy application
• Prepare initial project assessment to validate benefits
• Lead and direct the team to execute projects
• Determine most effective tools to apply
• Show the data
• Identify barriers
• Identify project resources
• Get input from knowledgeable functional experts/team leaders/coaches
• Report progress to appropriate leadership levels
• Present the final report
• Deliver results on time
• Solicit help from champions when needed
• Influence without direct authority
• Be a breakthrough strategy enthusiast
• Stimulate champion thinking
• Teach and coach breakthrough strategy methods and tools
• Manage project risk
• Ensure the results are sustained
• Document learning
• Prerequisites
  • Process/product knowledge
  • Willing and able to learn mathematical concepts
  • Knows the organization
  • Communication skill
  • Self-starter/motivated
  • Open-minded
  • Eager to learn new ideas
  • Desire to drive change
  • Project leadership skills
  • Team player
  • Respected by others
  • Track record on results
  • Knowledgeable in breakthrough strategies
  • Results oriented
(Emphasis must be given to the notion of investment vs. return, since all black belts drive large cost and capacity improvements — an average of $200,000+ per project. Therefore, for a successful black belt project, involvement/ownership by the plant/support functions is critical!)
How can executives accelerate the change process? The following points should be considered and discussed thoroughly:
• Six Sigma breakthrough lessons learned: Six Sigma is a methodology to provide breakthrough results. However, for the breakthroughs and results
to continue, there are constant barriers and challenges to break down or to overcome! For example:
• After 9 months, 20–25% of all black belts typically are not working on projects
  • Reasons: high promotion rate
  • Enticed with $ from suppliers
  • Did one or two projects and went back to their original jobs
• All black belt projects were successful, but only 70% of the dollars could be tracked to the bottom line
  • Reasons: many "cost avoidance" projects
  • Finance was not involved in project selection/tracking
  • Projected savings were used to mask other operating issues
  • Projects were too future-based (product line 6–9 months out)
  • Management did not act on breakthrough — people, inventory, bill of materials
• The majority of suppliers are not at a five sigma capability, nor will they be in the near future
  • Reasons: lack the financial resources for Six Sigma black belt training
  • No incentive to dedicate resources
  • Lack the talent to dedicate as BBs
  • Cannot afford to ship all BBs to suppliers
• Many sites complained that there were too many initiatives (TQ, Six Sigma, materials, customer excellence, technical excellence, etc.)
  • Reasons: the site management teams did not have a clear understanding of the individual "tools" to use — Six Sigma, DFM, supplier partnership, customer satisfaction, etc.
• Six Sigma progressive

THERE ARE FIVE ACTIONS THAT HAVE PROVEN CRITICAL TO CONTINUED SIX SIGMA BREAKTHROUGH
• The need for renewal every 9–10 months
• Senior management commitment and involvement
• Site leadership training/alignment
• Black belt dedication to projects for 2 years
• Supplier improvement
SIX SIGMA BREAKTHROUGH
• Continuing the momentum • Moving from three to four sigma is based on improving fundamentals • Moving from four to five to Six Sigma is based on Six Sigma breakthrough strategy • The DMAIC model • Define
• Measure
• Analyze
• Improve
• Control
DEFINE

PURPOSE
• To identify the customers and their CTQs — critical to quality
• To define the project scope and team charter
• To map the process to be improved
QUESTIONS TO BE ANSWERED
• Who are my customers and what is important to them (CTQ)?
• What is the scope of the project? What is the problem being addressed? What defect am I trying to reduce? What data has been collected to understand the customer requirements? What are the boundaries of this project? To what extent are the team roles and goals clearly understood and accepted? Are the key milestones and timelines established?
• Where do we currently take measurements? When, where, and to what extent does the problem occur?
• What is my process? How does it function? How was the process map validated? Are multiple versions necessary to account for different types of inputs?
• Why are you focusing on this project? What is the current cost of defects (poor quality)? What are the business reasons for completing this project? Are they compelling to the team? Are they compelling to the key stakeholders?
• How will you know if the team is successful? What is the goal of this project? Is the goal achievable?
A TYPICAL CHECKLIST FOR THE DEFINE PHASE
• Have the customers been identified?
• Have the data to verify customer needs been collected?
• Has voice of the customer (VOC) been accounted for?
• Has the team charter been formulated?
• Have all the operational definitions been identified and agreed upon?
• Has the problem statement been understood and agreed upon?
• Has the goal statement been defined and agreed upon?
• Is the project scope appropriate and applicable? Has it been approved?
• Is the time line for the project appropriate and applicable? Has it been approved?
• Are the financial benefits real and agreed upon? • Has the high level process map “as is” been defined and agreed upon?
TOOLS
• Process mapping — SIPOC
• CT matrix
• Project scope contract
• Gantt chart
• Change management
MEASURE

PURPOSE
• To develop process measures (dependent variables or Ys) that will enable you to evaluate the performance of the process
• To determine the current process performance and entitlement and assess it against the required performance
• To identify the input variables that cause variation in process performance — Y
QUESTIONS TO BE ANSWERED
• Who are the suppliers to the process? • What are the process and output measures that are critical to understanding the performance of this process? • What are the performance standards for Y? • What is the link to the CTQ? • What are the defects for this project? • What are the primary sources of variability for this process? Are they control or noise variables? • What are the SOPs associated with each control variable? • Where will you collect data? What is your data-collection plan? How much data did you collect? • Is your ability to measure/detect “good enough?” • When did you sample? • How did you ensure you eliminated the influences of assignable causes within your rational subgroups? • How did you ensure that you included all the sources of variation between your rational subgroups? • Why is the project being addressed? • Have you created a shared need? • How is the process performing? • What is the current process sigma level for this project?
• What is the best that the process was designed to do? • What are the defect reduction goals for this project? • Have you found any “quick hit” improvements?
TYPICAL CHECKLIST FOR THE MEASURE PHASE
• Have the key measurements been identified?
• Has the rolled-throughput yield been calculated?
• Have the defects been identified?
• Has the data-collection plan been identified?
• Has the measurement capability study (GR&R) been completed?
• Have the baseline measures of process capability been addressed?
• Have the defect reduction goals been established and agreed upon?
TOOLS
• Process mapping
• Cause-and-effect diagram
• Cause-and-effect matrix
• FMEA
• GR&R
• Graphical techniques (run chart, control chart, histogram, etc.)
• Change management
ANALYZE

PURPOSE
• To prioritize the input variables that cause variation in process performance — Y
• To analyze the data to determine root causes and opportunities for improvement
• To validate the key process input variables with data
QUESTIONS TO BE ANSWERED
• Who is the process owner?
• What are all the key process input variables? Have you found any "quick-hit" improvements?
• What resistance have you experienced or do you anticipate?
• Where were data collected on the inputs?
• When you realize the opportunities represented by addressing the problem, what are the quantifiable benefits over your current process performance (COPQ)?
• Why does the output of your process vary? What are the inputs that matter most?
• How have you analyzed the data to identify the vital few factors that account for variation in the process? • How were the KPIVs from your C&E diagram verified? • What are the root causes of the problem?
TYPICAL CHECKLIST FOR THE ANALYZE PHASE
• Has the detailed “as is” process map been completed? • Have all sources of variation been identified and the prioritization initiated? • Have the SOPs been reviewed and revised as appropriate? • Is the usage and display of data appropriate and applicable to identify and verify the “vital few” (KPIVs)? • Has the problem statement been refined through an iteration process to reflect the increased understanding of the problem? • Have there been estimates of the quantifiable opportunity represented by the problem?
TOOLS • Process map • Graphical techniques (run chart, control chart, histogram, Pareto, scatter diagram, etc.) • Multivariate studies • Hypothesis testing • Correlation and regression • Change management
IMPROVE

PURPOSE
• To generate and validate improvements by setting the input variables to achieve the optimum output
• To determine Y = f(x…)
QUESTIONS TO BE ANSWERED
• Who is impacted by the change? How are they impacted? What day-to-day behaviors will need to change?
• What criteria did you use to evaluate potential solutions?
• What things have been considered to manage the cultural aspects of the change?
• What has been done or will be done to mobilize support and deal with resistance?
• What changes need to be made to rewards, training, structure, measurements, etc. to sustain the change? • Where was the solution validated? • When will the solution be implemented? • What is the implementation/communication plan? • Why was this solution chosen? • What are the potential problems with the plan? • How was an experiment or simulation conducted to ensure the optimum solution was found? How does the solution address the root cause?
TYPICAL CHECKLIST FOR THE IMPROVE PHASE
• Have there been solution alternatives to the problem? Is the one that best addresses the root cause the one that has been selected? • Has the “should be” process map been developed? • Have the key behaviors required by the new process been identified? • Has the cost-benefit analysis of the proposed solution been completed? • Has the solution been validated? • Has an implementation plan been developed? • Has a communication plan been established?
TOOLS
• Process map
• Design of experiments
• Simulation
• Optimization
• Change management
CONTROL

PURPOSE
• To institutionalize the improvement and implement ongoing control
• To sustain the gains
QUESTIONS TO BE ANSWERED
• Who maintains the control plan? • How will responsibility for continued monitoring and improvement be transferred from the team to the owner? • What controls are in place to ensure that the problem does not recur? • Where is the data being collected? What control charts are being used? What evidence is there that the process is in control? • When will the data be reviewed?
• When will the final report be completed? • Why is the control plan effective? • How has job training been affected? What are the biggest threats to making this change last? • What next? • Who is looking for translation opportunities (direct, customization, adaptation)? • What is the next problem that should be addressed in the context of this overall process? • What are some other areas of the business that could benefit from your learning? • When will the learning be shared with the other business areas? • Why is it likely to succeed? • How will the translation opportunities be communicated? • What did you as a team learn about the process of making Six Sigma improvements?
TYPICAL CHECKLIST FOR THE CONTROL PHASE
• Has the control plan been completed?
• Is there evidence that the process is in control?
• Is there appropriate and applicable documentation of the project?
• Have translation opportunities been identified?
• Have the systems and structure changes been sufficient to institutionalize the improvement?
• Have the audit plans been completed?
• Has there been a poka yoke (mistake proofing) in the process?
• Is there a preventive maintenance program in place?
TOOLS
• Control plans
• Statistical process control
• Gage control plan
• Appropriate and applicable techniques
• Change management
SIX SIGMA — THE INITIATIVE
• Process — systematic approach to reducing defects that affect what is important to the customer
• Tools — qualitative, statistical, and instructional devices for "observing" process variables and their relationships as well as "managing" their character
Six Sigma... the Practical Sense • The classical view of performance • The magnitude of difference: a different approach for the business — the goals of Six Sigma • Defect reduction • Yield improvement • Improved customer satisfaction • Higher net income
FOUNDATION OF THE TOOLS
• Qualitative • Quantitative
GETTING TO SIX SIGMA
• How far can inspection get us?
• The impact of added inspection
• Using statistics to get us there
• How do we measure variation and quality?
THE STANDARD DEVIATION
• Normal distribution data
• Variable
• Attribute
• Black belt certification program
8
Six Sigma for Champions
The intent of champion training is to give selected executives a general understanding of, and familiarity with, the Six Sigma methodology. It is geared toward the leadership of the organization who both facilitate the logistics (approve, review, and/or manage the project; after all, the champion makes sure that the appropriate help and resources are available to the master black belts and black belts in pursuing process improvement) and mediate conflict as Six Sigma diffuses through the organization. As a consequence, the focus is on a high-level explanation of the methodology and expectations, with little discussion of tools. Great focus is given to the significance of the project, with significant detail about how to select and define it and about what questions to ask as the project progresses. To be sure, the material for this training remains at a high level, aimed more at understanding the process and the requirements of the Six Sigma methodology. A project champion is not expected to do a project; however, he is expected to understand the process and provide support as well as eliminate bottlenecks, especially when multiple departments are involved. In our estimation, the emphasis of this training should be on why as opposed to how. A project champion must be familiar with the process but also must understand the foundations of the approach in such a way that he or she may ask the right questions. His understanding should be at such a level that if he needs to explain the project to a green belt, his explanation would pass muster at the executive level as well. Of course, the opposite should also hold true. It is often suggested that simulated exercises be sprinkled throughout the course to make the key points more emphatic. Traditional exercises may include defining a process and coming up with ways to improve that process; defining five to ten operational definitions in that process; working with some variable and attribute data; calculating the DPO; working with histograms, box plots, scatter plots, Pareto charts, and DOE set-ups; running an experiment with software; and others. However, a central issue for this training is the notion of customer satisfaction and organizational profitability. Because organizations and their goals are quite different, we provide the reader with a suggested outline of the training material for this champion session. It should last 5 days and be taught by a master black belt or an outside consultant. The level of difficulty depends on the participants. Detailed information may be drawn from the first six volumes of this series. In a typical champion program, we may want to focus on the following instructional objectives. The reader will notice that in some categories, there are no objectives. This is because for that stage of training, the material may be overwhelming and quite often unnecessary:
CURRICULUM OBJECTIVES FOR CHAMPION TRAINING

RECOGNIZE

Customer Focus
• Provide a definition of the term customer satisfaction.
• Understand the need–do interaction and how it relates to customer satisfaction and business success.
• Provide examples of the y and x terms in the expression y = f(x); y = f(x,n).
• Interpret the expression y = f(x); y = f(x,n).

Business Metrics
• Define the nature of a performance metric.
• Identify the driving need for performance metrics.
• List at least six key performance metrics.
• Identify the fundamental contents of a performance metrics manual.
• Recognize the benefits of a metrics manual.
• Understand the purpose and benefits of improvement curves.
• Explain how a performance metric improvement curve is used.
• Explain what is meant by the phrase Six Sigma rate of improvement.
• Explain why a Six Sigma improvement curve can create a level playing field across an organization.
• State at least three problems (or severe limitations) inherent in the current cost-of-quality (COQ) theory.
• Identify and define the principal categories associated with quality costs.
• Compute the COQ given the necessary background data.
• Provide a detailed explanation of how a defect can impact the classical cost-of-quality categories.
• Explain the benefit of plotting performance metrics on a log scale.
Six Sigma Fundamentals
• Recognize the need for change and the role of values in a business.
• Recognize the need for measurement and its role in business success.
• Understand the role of questions in the context of management leadership.
• Provide a brief history of Six Sigma and its evolution.
• Understand the need for measuring those things that are critical to the customer, business, and process.
• Define the various facets of Six Sigma and why Six Sigma is important to a business.
• Identify the parts-per-million defect goal of Six Sigma.
• Define the magnitude of difference between three, four, five, and six sigma.
• Recognize that defects arise from variation.
• Describe the general methodologies that are required to progress through the hierarchy of quality improvement. • Define the phases of breakthrough in quality improvement. • Identify the values of a Six Sigma organization as compared to a four sigma business. • Understand the key success factors related to the attainment of Six Sigma. • Understand why inspection and test is nonvalue-added to a business and serves as a roadblock for achieving Six Sigma. • Understand the difference between the terms process precision and process accuracy. • Understand the basic elements of a sigma benchmarking chart. • Interpret a data point plotted on a sigma benchmarking chart. • Describe how every occurrence of a defect requires time to verify, analyze, repair, and reverify. • Understand that work in process (WIP) is highly correlated to the rate of defects. • Rationalize the statement: the highest-quality producer is the lowest-cost producer. • Understand the fundamental nature of quantitative benchmarking on a sigma scale of measure. • Understand that global benchmarking has consistently revealed four sigma as average, while best-in-class is near the Six Sigma region. • Draw first-order conclusions when given a global benchmarking chart. • Provide a brief description of the five sigma wall, what it is, why it exists, and how to get over it. • State the general characteristics or profile of a four sigma organization. • Explain how the sigma scale of measure could be employed for purposes of strategic planning. • Recognize the cycle-time, reliability, and cost implications when interpreting a sigma benchmarking chart. • Understand how a Six Sigma product without a market will fail, while a Six Sigma product in a viable market is virtually certain to succeed. • Provide a qualitative definition and graphical interpretation of standard deviation. • Understand the driving need for breakthrough improvement vs. continual improvement. • Provide a brief description of the four phases of process breakthrough (i.e., measure, analyze, improve, control). • Define the three primary sources of variation in a product. • Provide a very general description of how a process capability study is conducted and interpreted. • Understand the difference between the idea of benchmark, baseline, and entitlement cycle time. • Provide a brief description for the outcome 1 – Y.rt. • Recognize that the quantity 1 + (1 – Y.rt) represents the number of units that must be produced to extract one good unit from a process.
• Describe what is meant by the term mean time before failure (MTBF). • Interpret the temporal failure pattern of a product using the classical bathtub reliability curve. • Recognize that the sigma scale of measure is at the opportunity level, not at the system level. • Interpret an array of sigma benchmarking charts. • Define the two primary components of process breakthrough. • Provide a synopsis of what a statistically designed experiment is and what role it plays during the improvement phase of breakthrough. • Understand the basic nature of statistical process control charts and the role they play during the control phase of breakthrough. • Understand that the term sigma is a performance metric that applies only at the opportunity level. • Explain how process capability impacts the pattern of failure inherent in the infant mortality rate. • Provide a rational definition of the term latent defect and how such a defect can impact product reliability. • Explain how defects produced during manufacture influence product reliability, which, in turn, influences customer satisfaction. • Explain the interrelationship between the terms process capability, process precision, and process accuracy. • Explain how statistically designed experiments can be used to achieve the major aims of Six Sigma from the points of view of quality, cost, and cycle-time.
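For the MTBF objective above, the basic arithmetic is total operating time divided by the number of observed failures. A one-figure sketch (ours, with invented field data):

```python
# Mean time before failure from field data (hypothetical counts).
total_operating_hours = 1_250_000   # cumulative hours across the fleet
failures = 46                       # failures observed in that period
mtbf = total_operating_hours / failures
print(f"MTBF = {mtbf:,.0f} hours")
```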
DEFINE

Nature of Variables
• Explain the nature of a leverage variable and its implications for customer satisfaction and business success.
• Explain what a dependent variable is and how this type of variable fits into the Six Sigma breakthrough strategy.
• Explain what an independent variable is and how this type of variable fits into the Six Sigma breakthrough strategy.

Opportunities for Defects
• Provide a rational definition of a defect.
• Recognize the difference between uniform and random defects.
• Compute the defect-per-unit metric given a specific number of defects and units produced.
• Provide a definition of the term opportunity for defect, recognizing the difference between active and passive opportunities.
CTX Tree
• Define the term critical to satisfaction characteristic (CTS) and its importance to business success.
• Define the term critical to quality characteristic (CTQ) and its importance to customer satisfaction.
• Define the term critical to process characteristic (CTP) and its importance to product quality.

Process Mapping
• Construct a process map using standard mapping tools and symbols.
• Explain how process maps can be linked to the CT Tree to identify problem areas.
• Explain how process maps can be used to identify constraints and determine resource needs.
• Define the key elements of a process map.

Process Baselines
Nothing specific.

Six Sigma Projects
• Explain why the five key planning questions are so important to project success.
• Explain how the generic planning guide can be used to create a project execution cookbook.
• Create a set of criteria for selecting and scoping Six Sigma black belt projects.
• Define a Six Sigma black belt project reporting and review process.
• Interpret each of the action steps associated with the four phases of process breakthrough.

Six Sigma Deployment
• Provide a brief description of a Six Sigma black belt (SSBB).
• Describe the role and responsibilities of a SSBB.
• Understand the SSBB instructional curriculum.
• Recognize the importance of, and provide a description of, the plan-train-apply-review (PTAR) learning process.
• Provide a brief description of a Six Sigma champion (SSC).
• Describe the roles and responsibilities of a SSC.
• Provide a brief description of a Six Sigma master black belt (SSMBB).
• Describe the roles and responsibilities of a SSMBB.
• Provide a brief description of the key implementation principles and identify the principal deployment success factors. • List all of the planning criteria for constructing a Six Sigma implementation and deployment plan. • Construct a generic milestone chart that identifies all of the activities necessary for successfully managing the implementation of Six Sigma. • Develop a business model that incorporates and exploits the benefits of Six Sigma. • Recognize that the SSBB curriculum sequence is correlated to the Six Sigma breakthrough strategy.
MEASURE

Scales of Measure
• Identify the four primary scales of measure and provide a brief description of their unique characteristics.

Data Collection
• Provide a specific explanation of the term replicate in the context of a statistically designed experiment.
• Explain why there is a need to randomize the sequence of order in which an experiment takes place and what can happen when this is not done.

Measurement Error
• Describe the role of measurement error studies during the measurement phase of breakthrough.

Statistical Distributions
• Construct and interpret a histogram for a given set of data.
• Construct a histogram for a set of normally distributed data and locate the data on a normal probability plot.
• Understand what a normal distribution and a typical normal histogram are and how they are used to estimate defect probability.
• Understand what the t distribution is and how it changes as degrees of freedom change.
• Understand what the F distribution is and how it can be used to test the hypothesis that two variances are equal.

Static Statistics
• Provide a qualitative definition and graphical interpretation of variance.
• Compute the sample standard deviation given a set of data.
• Compute the mean, standard deviation, and variance for a set of normally distributed data. • Explain why a sample size of n = 30 is often considered ideal (in the instance of continuous data). • Provide a qualitative definition and graphical interpretation of the standard Z transform. • Provide a graphical understanding of the standard deviation and explain why it is so important to Six Sigma work. • Compute Z.usl and Z.lsl for a set of normally distributed data and then determine the probability of defect. Dynamic Statistics • Explain what phenomenon could account for a differential between the short-term and long-term standard deviations. • Describe the role and logic of rational subgrouping as it relates to the short-term and long-term standard deviations. • Compute and interpret the total, inter-, and intragroup sums of squares for a given set of data. • Explain the difference between dynamic mean variation and static mean offset. • Explain the difference between inherent capability and sustained capability in terms of the standard deviation. • Explain why the term instantaneous reproducibility (i.e., process precision) is associated with the short-term standard deviation. • Explain why the term sustained reproducibility is associated with the long-term standard deviation. • Recognize the four principal types of process centering conditions and explain how each impacts process capability. • Compute and interpret the intra-, inter, and total sums of squares for a set of normally distributed data organized into rational subgroups.
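The Z.usl/Z.lsl objective above can be illustrated directly. The sketch below is ours (all figures hypothetical); it converts the specification limits to Z values and sums the two tail probabilities, assuming normally distributed data:

```python
from statistics import NormalDist

# Z values against spec limits and the implied defect probability,
# assuming normally distributed data (illustrative figures).
mean, stdev = 10.02, 0.15      # process mean and standard deviation
lsl, usl = 9.50, 10.50         # lower/upper specification limits

z_usl = (usl - mean) / stdev
z_lsl = (mean - lsl) / stdev

# Probability of a unit falling beyond either spec limit.
p_defect = (1 - NormalDist().cdf(z_usl)) + (1 - NormalDist().cdf(z_lsl))
print(f"Z.usl = {z_usl:.2f}, Z.lsl = {z_lsl:.2f}, P(defect) = {p_defect:.5f}")
```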
ANALYZE

Six Sigma Statistics
• Identify the key limitations of the performance metric final yield (i.e., output/input).
• Identify the key limitations of the performance metric first-time yield (Y.ft).
• Understand the impact of process capability and complexity on the probability of zero defects.
• Provide a brief description of how one would implement and deploy the performance metric rolled-throughput yield (Y.rt).
• Compute the throughput yield (Y.tp) given an average first-time yield and the number of related defect opportunities.
• Provide a rational explanation of the differences between product yield and process yield. • Explain why the performance metric rolled-throughput yield (Y.rt) represents the probability of zero defects. • Compute the probability of zero defects (Y.rt) given a specific number of defects and units produced. • List at least five separate sources that could offer the data necessary to estimate a sigma capability. • Explain how throughput yield (Y.tp) and opportunity counts can be employed to establish sigma capability of a product or process. • Illustrate how a system-level DPU goal can be flowed down through a product or process hierarchy to assess the required CTQ capability. • Illustrate how a series of CTQ capability values can be flowed up through a product or process hierarchy to establish the system DPU. Process Metrics • Explain why a Z can be used to measure process capability and explain its relationship to indices such as Cp, Cpk, Pp, and Ppk. • Explain the difference between static mean offset and dynamic mean variation and how they impact process capability. • Compute and interpret the Cp index of capability. • Compute and interpret the Cpk index of capability. • Explain the theoretical and practical differences between Cp, Cpk, Pp, and Ppk. • Compute and interpret Z.st and Z.lt for a set of normally distributed data organized into rational subgroups. • Compute and interpret Z.shift (static and dynamic) for a set of normally distributed data organized into rational subgroups. • Compute and interpret Cp, Cpk, Pp, and Ppk. • Explain how Cp, Cpk, Pp, and Ppk correlate to the four principal types of process centering conditions • Show how Z.st, Z.lt, Z.shift (dynamic), and Z.shift (static) relate to Cp, Cpk, Pp, and Ppk. • Create and interpret the standardized computer output report. Diagnostic Tools • Understand, construct, and interpret a multivariate chart, then identify areas of application. Simulation Tools • Describe what is meant by the term Monte Carlo simulation and demonstrate how it can be used as a design tool. • Create a series of random normal numbers with a given mean and variance.
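As a worked illustration of the Cp and Cpk objectives above, here is a minimal sketch with invented process figures; by convention Cp and Cpk are computed from the short-term standard deviation, while Pp and Ppk use the long-term one:

```python
# Cp and Cpk from spec limits and process statistics (illustrative numbers).
mean, stdev = 10.02, 0.15      # process mean and short-term standard deviation
lsl, usl = 9.50, 10.50         # lower/upper specification limits

cp = (usl - lsl) / (6 * stdev)                    # potential capability
cpk = min(usl - mean, mean - lsl) / (3 * stdev)   # capability penalized for off-center mean
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```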
Statistical Hypotheses • Explain how a practical problem can be translated into a statistical problem and describe the benefits of doing so. • Explain what a statistical hypothesis is, why it is created, and show the forms it may take in terms of the mean and variance. • Define the concept of alpha risk and provide several examples that illustrate its practical consequence. • Define the concept of statistical confidence and explain how it relates to alpha risk. • Define the concept of beta risk and provide several examples that illustrate its practical consequences. • Provide a detailed understanding of the contrast distribution and how it relates to the alternate hypothesis. • Explain what is meant by the phrase statistically significant difference and recognize that such differences do not imply practical difference. • Construct a truth table that illustrates how the null and alternate hypotheses interrelate with the concepts of alpha risk and beta risk. • Recognize that the extent of difference required to produce practical benefit is referred to as delta. • Explain what is meant by the term power of the test and describe how it relates to the concept of beta risk. • Understand how sample size can impact the extent of decision risk associated with the null and alternate hypotheses. • Establish the appropriate sample size for a given situation when presented with a sample size table. • Describe the dynamic interrelationships between alpha, beta, delta, and sample size from a statistical as well as practical perspective. • List the essential steps for successfully conducting a statistically based investigation of a practical real-world problem. • Provide a detailed explanation of the null distribution and how it relates to the null hypothesis. Continuous Decision Tools • Provide a conceptual explanation of statistical confidence interval and how it relates to the notion of random sampling error. • Understand what the distribution of sample averages is and how it relates to the central limit theorem. • Explain what the standard error of the mean is and demonstrate how it is computed. • Compute the tail area probability for a given Z value that is associated with the distribution of sample averages. • Compute the 95% confidence interval for the mean of a small data set and explain how it may be applied in practical situations.
• Rationalize the difference between a one-sided test of the mean and a two-sided test of the mean.
• Understand what the distribution of sample differences is and how it can be employed for testing statistical hypotheses.
• Compute the 95% confidence interval for the mean of sample differences given two samples of normally distributed data.
• Understand the nature of a one- and two-sample t test and apply this test to an appropriate set of data.
• Compute and interpret the 95% confidence interval from a sample variance using the chi-square distribution.
• Explain how the 95% confidence interval from a sample variance can be used to test the hypothesis that two variances are equal.
• Provide a general description of the term experimental error and explain how it relates to the term replication.
• Provide a general description of one-way analysis of variance and discuss the role of sample size.
• List the principal assumptions underlying the use of ANOVA and provide a general understanding of their practical impact if they are violated.
Discrete Decision Tools
• Construct a 95% confidence interval for a Poisson mean and discuss how this can be used to test hypotheses about Poisson means.
• Understand how to calculate the standard deviation for a set of data selected from a binomial distribution.
• Compute the 95% confidence interval for a proportion and explain how it can be used to test hypotheses about proportions.
• List and describe the principal sections of a customer satisfaction survey and how they can be used to link the process to the customer.
• Provide a brief explanation of the chi-square statistic and the conditions under which it can be applied.
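As one concrete illustration of the decision tools listed above, the following sketch computes a 95% confidence interval for a mean and runs a two-sample t test. The data are invented, and scipy is assumed to be available.

```python
# A sketch of two continuous decision tools on invented data.
from math import sqrt
from statistics import mean, stdev
from scipy.stats import t, ttest_ind

data = [9.8, 10.2, 10.1, 9.9, 10.4, 10.0, 9.7]   # hypothetical measurements
n = len(data)
se = stdev(data) / sqrt(n)                       # standard error of the mean
t_crit = t.ppf(0.975, df=n - 1)                  # two-sided 95% critical value
lo, hi = mean(data) - t_crit * se, mean(data) + t_crit * se
print(f"mean = {mean(data):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")

# Two-sample t test: do two hypothetical process settings really differ?
line_a = [4.1, 3.9, 4.3, 4.0, 4.2]
line_b = [4.6, 4.4, 4.8, 4.5, 4.7]
t_stat, p_val = ttest_ind(line_a, line_b)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")      # small p: statistically significant
```

Remember the caution above: a statistically significant difference does not by itself imply a practically important one; that judgment rests on delta.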
IMPROVE
Experiment Design Tools
• Provide a general description of a statistically designed experiment and what such an experiment can be used for.
• Recognize the principal barriers to effective experimentation and outline several tactics that can be employed to overcome such barriers.
• Describe the two primary components of an experimental system and their related subelements.
• Outline a general strategy for conducting a statistically designed experiment and the resources needed to support its execution and analysis.
• State the major limitations associated with the one-factor-at-a-time approach to experimentation and offer a viable alternative.
• Recognize that the most powerful application of modern statistics cannot rescue a poorly designed experiment.
• Explain what is meant by the term full factorial experiment and how it differs from a fractional factorial experiment.
Robust Design Tools
• Explain briefly the term robust design and why and when process capability data must be factored into the design process.
• Explain what is meant by the term robustness and how this understanding translates to experimental design and process tolerancing.
• Provide a statistical explanation of the term heteroscedasticity and discuss its practical implications.
Empirical Modeling Tools
Nothing specific.
Tolerance Tools
• Demonstrate why worst-case tolerance analysis is an overly conservative and costly design tool.
• Create a graphical explanation of how performance tolerances can be defined using the results of a two-level factorial experiment.
Risk Analysis Tools
• Compute the standard deviation for a linear sum of variances and explain why the variances must be independent. (A short sketch follows this section.)
• Compute the system-level defect probability given the subsystem means, variances (of a linear model), and relevant performance specifications.
• Describe how root sum of squares (RSS) can be used as a design-to-cost tool and how it can be employed to analyze and optimize process cycle time.
• Demonstrate how the Six Sigma risk assessment methodology can be applied to engineering, manufacturing, transactional, and commercial problems.
• List the disadvantages associated with worst-case analysis and compute the probability of worst case given the process capability data.
DFSS Principles
• Understand the fundamental ideas underlying the notion of manufacturability.
• Understand how statistically designed experiments can be used to identify leverage variables, establish sensitivities, and define tolerances.
• Understand how product and process complexity impacts design performance.
• Explain the concept of error propagation (both linear and nonlinear) and what role product and process complexity plays.
• Describe how reverse error propagation can be employed during system design.
• Explain why process shift and drift must be considered in the analysis of a design and how they can be factored into design optimization.
• Describe how Six Sigma tools and methods can be applied to the design process.
• Discuss the pros and cons of the classical approach to product and process design relative to that of the Six Sigma approach.
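The sketch promised under Risk Analysis Tools above contrasts a worst-case tolerance stack with a root-sum-of-squares (RSS) stack. RSS adds variances, which is legitimate only if the contributors are independent; the four part tolerances are hypothetical.

```python
# Worst-case vs. RSS tolerance stack on four hypothetical parts.
from math import sqrt

tolerances = [0.10, 0.05, 0.08, 0.12]          # +/- tolerances of the stacked parts

worst_case = sum(tolerances)                   # every part simultaneously at its extreme
rss = sqrt(sum(tol**2 for tol in tolerances))  # statistical stack (independence assumed)

print(f"worst case: +/-{worst_case:.3f}")      # 0.350
print(f"RSS stack:  +/-{rss:.3f}")             # about 0.181, far less conservative
```

The gap between the two numbers is exactly the point of the objective above: worst-case analysis prices in an event whose probability is vanishingly small.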
CONTROL
Precontrol Tools
• Develop a precontrol plan for a given CTQ and explain how such a plan can be implemented.
• Describe the unique characteristics of the precontrol method and compare precontrol to statistical process control charts.
Continuous SPC Tools
• Explain what is meant by the term statistical process control and discuss how it differs from statistical process monitoring.
• List the basic components of a control chart and provide a general description of the role of each component.
Discrete SPC Tools
Nothing specific.
SIX SIGMA PROJECT CHAMPION TRANSACTIONAL (GENERAL BUSINESS AND SERVICE — NONMANUFACTURING) TRAINING
Based on the above general objectives, it is recommended that the training follow the content format given below. By no means is this the only format. In fact, we provide three options: the first is transactional training, the second technical training, and the third manufacturing training. All three options follow a hierarchical sequence, and we have attempted to accommodate the learning process. The distinction among the three categories emphasizes the need for Six Sigma in nonmanufacturing (service organizations), in research and development groups (the Six Sigma methodology should be applied in the development stage, never in the research stage), and, of course, in the manufacturing areas. It is also very important to note
that the objectives for all categories are the same. However, the different training formats emphasize different elements of the methodology.
Introductions
Agenda
Ground rules
Exploring our values
Objectives
Definition: the transactional approach is based on the customer, the opportunity, and the successes. (It must be stressed that we will be applying Six Sigma methodology rather than following the “pack.” Make sure that emphasis is placed on the process and the tools. In the case of the process, participants must recognize that Six Sigma methodology should be followed systematically, as should attempts to reduce nonconformances that are important to the customer. With respect to the tools, participants must recognize that qualitative as well as quantitative techniques may be employed to resolve issues.) In other words:
• Know what is important to the customer.
• Reduce nonconformances.
• Center around the target.
• Reduce variation.
SIX SIGMA BREAKTHROUGH GOAL
• A solution for improving company value
• A business strategy for net income improvement
• A means to enhance customer perception
SIX SIGMA GOAL
Defect reduction – why it is important to focus on the cost of poor quality (COPQ)
Yield improvement
Improved customer satisfaction and higher return on investment — learning faster than our competitors is the only sustainable advantage. This is why the Six Sigma methodology emphasizes breakthrough improvements rather than incremental ones.
COMPARISON BETWEEN THREE SIGMA AND SIX SIGMA QUALITY
SHORT HISTORICAL BACKGROUND
The business case for implementing Six Sigma: after the definition, this item is very important. It must be understood by all before moving on to a new topic. It is the reason why Six Sigma is going to be implemented in your organization. Therefore, not only
must it be understood, but in addition it must make sense and be believable. Sharing the executive committee members list with everyone is one of the ways to make individuals understand the importance of the implementation process. Another way is to provide some background about the black belts as individuals and their commitment to Six Sigma and to identify specific projects that plague the organization, either genuine financial problems or issues perceived as problems by customers. Yet another way may be to present some specific examples of your company in relationship to your competitors.
OVERVIEW OF THE BIG PICTURE
Deployment structure: the Six Sigma implementation process must be a top-down flow; otherwise, it will not work.
Executive leadership (part-time basis): executives should be the drivers of the Six Sigma process in directions that meet key business goals and address key customer satisfaction concerns. Key roles are:
• Establish the vision
• Articulate the business strategy
• Provide resources
• Remove roadblocks
• Support the culture change
• Monitor the results
• Define the criteria for success and make others accountable for the results
• Align the systems and structures with the changes taking place
• Participate with the black belts through project reviews and recognize results
Master black belts (full-time basis): they are the trainers, coaches, and facilitators. They are the experts in Six Sigma tools and methodologies and are responsible for training and coaching black belts. Master black belts, or shoguns as we call them, may also be responsible for leading large projects on their own. Key roles are:
• Be the expert in tools and concepts
• Facilitate and implement Six Sigma in the organization
• Certify the black belts
• Assist in identifying projects
• Coach and support black belts
• Participate in project reviews
• Develop new tools or modify old tools for applications
• Lead major programs
• Share best practices
• Drive passion
• Partner with champion
Project champions (part-time basis): they drive Six Sigma through the process and are accountable for the performance of black belts and the
results of Six Sigma projects in their area. They are the conduit between the executive leadership and the black belt, and they are supposed to eliminate bottlenecks and conflicts that arise during projects, especially projects with cross-functional responsibilities. Key roles are:
• Execute the vision through the organization
• Create and maintain passion
• Identify and prioritize projects
• Identify and select the black belts
• Develop the reward and recognition program
• Share best practices in the organization
• Remove barriers for black belts
• Drive and communicate results
• Develop a comprehensive training plan
• Communicate the linkage between Six Sigma and the business strategy
Black belts (full-time basis): they are accountable for driving projects and are responsible for leading and teaching Six Sigma processes within the company. Black belts are also responsible for applying Six Sigma tools to complete a predetermined number of projects worth at least $250,000 each (projects are commonly worth between $400,000 and $600,000). It is expected that the improvement will be a breakthrough improvement with a magnitude of 100X. Key roles are:
• Full time
• Identify barriers
• Lead project teams
• Identify project resources
• Be the expert of the breakthrough strategy
• Teach and coach as needed
• Manage project risk
• Deliver results on time
• Report project status
• Complete final report and control plan
• Ensure results are sustained
Green belts (part-time basis): they are expected to help black belts with expediting and completing Six Sigma projects and may take the lead in small projects of their own. They should also look for ways to apply Six Sigma problem-solving methods within their work area. Key roles are:
• Apply the methodology in functional areas
• Support the black belts in completing projects
• Be a project team member
• Help ensure improvements are sustained
• Concurrent with existing responsibilities
Process Driven, NOT Event Driven
Rollout strategy (emphasize the importance of projects and measurement)
Management’s responsibility
Training requirements
Black belts
Green belts
Project definition:
• Who is my customer?
• What matters? What are the CTQs?
• What is the scope?
• What nonconformance am I trying to reduce? By how much?
• Is the goal of reduction realistic?
• What is the current cost of poor quality?
• What benefits will we get if we improve to the point of reaching our goal?
Project selection: define the project charter. This will provide the appropriate documentation for communicating progress and direction not only to the rest of the team but also to management. To use the CT matrix, follow these seven steps:
• Identify the customers
• Meet with customers and identify CTSs
• Perform CTY then CTX breakdown and construct the CT matrix
• Identify critical or leverage processes
• Set improvement objectives and develop action plans
• Assign agents
• Identify CTPs for critical or leverage processes through Six Sigma projects
IDENTIFY CUSTOMER
Y = f(X). Y is the output and the Xs are the inputs. Identify Y and determine the Xs. It is imperative to understand that most often a single Y may be influenced by more than one X. Therefore, we may have Y = f(X1, X2, …, Xn). However, that is not all. A single X may itself cascade into a further level, such that for X1 we may have X1 = f(x1, x2, …, xn). This is called cascading.
Apply the project selection checklist. To ensure the selected issue, concern, or problem will make a good Six Sigma project, a checklist can be applied to verify the project’s potential. Simple criteria for selection are the following six questions:
• Does the project have recurring events?
• Is the scope of the project narrow enough?
• Do metrics exist? Can measurements be established in an appropriate amount of time?
• Do you have control of the process?
• Does the project improve customer satisfaction?
• Does the project improve the financial position of the company?
If the answer to all of these questions is yes, then the project is an excellent candidate. Another way to look at project selection may be to focus on impact, time, tools, metrics, financials, research, and team effort. Typical questions are:
• What corporate objective is supported by this project?
• What business group objective is addressed by the project?
• What customer will benefit from this project? How?
• Can the project be completed within 3 to 4 months?
• Could the process improvements be handled adequately via basic methods and techniques?
• Is the more structured Six Sigma approach and methodology desirable for this project?
• Will this project require application of all phases of Six Sigma?
• Have you defined the nonconformance opportunities?
• Do the baseline nonconformance data exist to support project selection?
• Is the nonconformance reduction offered greater than 70%?
• What improvements are expected in your area from the project?
• Are projected savings greater than or equal to $XXXK per year?
• Will this project lead to improvements with little or no capital?
• Is there a similar project already under way or proposed at another location?
• Can this project be led by a black belt?
• Can you identify the team members to start this project?
• Is capital investment required?
Develop a high-level problem statement. This is a high-level description of the issue to be addressed by the green belt or black belt. The problem statement will be the starting point for the application of the Six Sigma methodology. This is the point where the champion really needs to understand the process, because he or she has to “sell it” to management. In other words, he or she has to make the business case for the project.
THE DMAIC PROCESS
The model: it is a structured methodology for executing Six Sigma project activities. Make sure to point out here that the model is not linear in nature. Quite often, teams may find themselves in multiple phases at once so that thoroughness is established.
Define: the purpose is to refine the project team’s understanding of the problem to be addressed. It is the foundation for the success of both the project and Six Sigma.
Measure: the purpose is to establish techniques for collecting data about current performance that highlight project opportunities and provide a structure for monitoring subsequent improvements. Typical questions are:
• What is my process? How does it function?
• Which outputs affect CTQs most?
• Which inputs seem to affect outputs (CTQs) most?
• Is my ability to measure and/or detect “good enough”?
• How is my process doing today?
• How good could my (current) process be when everything is running smoothly?
• What is the best that my process was designed to do?
Analyze: the purpose is to allow the team to further target improvement opportunities by taking a closer look at the data. Typical questions are:
• Which inputs actually affect my CTQs most? By how much?
• Do combinations of variables affect outputs?
• If I change an input, do I really change the output?
• If I observe results from the same process and different locations and results appear to be different, are they really?
• How many observations do I need to draw conclusions?
• What level of confidence do I have regarding my conclusions?
• Can I describe the relationship between inputs and outputs in a statistical format?
• Do I know the inputs with the biggest impact on a given output?
Improve: the purpose is to generate ideas about ways to improve the process; design, pilot, and implement improvements; and validate improvements. Typical questions are:
• Once I know for sure which inputs most impact my outputs, how do I set them?
• How many trials do I need to run to find and confirm the optimal setting and procedure of these key inputs?
• Do I use systematic experimentation to find the input combination that delivers the optimal output?
Control: the purpose is to institutionalize process and product improvements and monitor ongoing performance. Typical questions are:
• Once I have reduced the nonconformances, how do the functional team and I keep them there?
• How does the functional team keep it going?
• What do I set up to keep it going even when things like people, technology, and customers change?
Select product or process key characteristics, e.g., customer Y, using the improvement strategy — the DMAIC model. Please notice that every output is data-based; therefore, the decision is data-based.
Define/measure:
• Define performance standards for Y. The focus is Y.
• Validate the measurement system for Y. The focus is Y.
• Establish process capability of creating Y. The focus is Y.
• Define improvement objectives for Y. The focus is Y.
Analyze:
• Identify variation sources in Y. The focus is Y.
• Screen potential causes for change in Y and identify the vital few Xs. The focus is on x1, x2, …, xn.
• Discover variable relationships among the vital few Xs. The focus is on x1, x2, …, xn.
Improve:
• Establish operating tolerances on the vital few Xs. The focus is on the vital few Xs.
• Validate the measurement system for the Xs. The focus is on the vital few Xs.
• Determine the ability to control the vital few Xs. The focus is on the vital few Xs.
Control:
• Implement a process control system on the vital few Xs. The focus is on the vital few Xs.
DETAILED MODEL EXPLANATION
Define the organization’s values. Key questions are:
• What do we really value?
• Who are our customers and what do they need?
• Who are we and what do we do?
• What does customer satisfaction mean?
• Do our values correlate with those of our customers?
• How do we verify that we meet internal and external needs?
PERFORMANCE METRICS REPORTING
• The classical view vs. the Six Sigma approach
• Understand the difference
• Understand the magnitude of this difference
ESTABLISH CUSTOMER FOCUS
• What is important to the customer?
• How do we know?
• Critical to satisfaction
• Importance of identifying the CTQs
• Understand the difference between functional wants and requirements.
• Understand the interaction between what customers need and what suppliers do.
DEFINE VARIABLES: KEY QUESTIONS ARE
• What is meant by variables?
• What is a dependent variable?
• What is an independent variable?
• What other labels are synonymous with dependent and independent variables?
• What is meant by leverage variable?
• What strategies can be used to isolate leverage variables?
THE FOCUS OF SIX SIGMA
• Y = f(X)
• The critical to (CT) concept. Key questions are:
• What does critical to satisfaction (CTS) mean in terms of customers?
• What does critical to quality (CTQ) mean in terms of a product, service, or transaction?
• What does critical to delivery (CTD) mean in terms of a product, service, or transaction?
• What does critical to cost (CTC) mean in terms of a product, service, or transaction?
• What does critical to process (CTP) mean in terms of a product, service, or transaction?
• What is the relationship between defect opportunities and CTs?
• The critical to quality (CTQ) and critical to process (CTP)
• Customer satisfaction
PROCESS OPTIMIZATION PROCESS BASELINE: KEY QUESTIONS ARE
• What is a process baseline and how is it different from product benchmarking?
• What is the relationship between a process baseline and process mapping?
• What is the relationship between a process baseline, CTs, and nonconformance opportunities?
• What are the key performance metrics associated with a process baseline?
• How should a process baseline be established?
• How can a process baseline be improved?
Supplier improvement. Supplier capability is a critical piece of the breakthrough strategy.
Measure
Process Characterization
Understanding the concept of rolled-throughput yield
Traditional yield: Y = S/U, where Y is yield, S = number of units that pass, and U = number of units tested
Definition of nonconformance (defect)
Six Sigma definition of yield (yield at every step in the process)
Yield without rework
Hidden factory (rework, nonvalue activities)
First pass yield (Yrt) — no rework
Normalizing yield (Ynorm) is the average yield per step of a sequential process, e.g., a process with three steps and per-step yields of 75%, 80%, and 95% gives:
Yrt = 0.75 × 0.80 × 0.95 = 0.57 (57%); Ynorm = (0.57)^(1/3) = 82.91%
Rolled throughput yield = P(operation 1) × P(operation 2) × … × P(operation n) = e^(–dpu)
True yield (the Poisson term at r = 0):
Y = [(d/u)^r e^(–d/u)] / r! = [(d/u)^0 e^(–d/u)] / 0! = 1(e^(–d/u)) / 1 = e^(–d/u)
where Y = yield, d/u = the defects per unit, and e = 2.718. Therefore, when r = 0, we obtain the probability of zero defects, i.e., the rolled throughput yield. This is very different from the classical determination of yield.
Poisson approximation
Useful formulas:
DPU = defects per unit
TOP (total opportunities) = units × opportunities
DPO (defects per opportunity) = defects / TOP
Probability the opportunity is defective = DPO
Probability the opportunity is not defective = Pr(ND) = 1 – DPO
Rolled yield is the likelihood that any given unit of product will contain zero defects (recommended when you know the yield of each process element or opportunity):
Yrt = Pr(ND)^(number of opportunities)
Yrt = Pr1(ND) × Pr2(ND) × … × Prn(ND)
Integration of rework loops: understand the ramifications of the processes that are causing the defects.
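The yield arithmetic above is easy to verify in a few lines of Python. The per-step yields are the 75/80/95% figures from the example; the DPU value is hypothetical.

```python
# Checking the rolled-throughput and normalized yield example, plus e^-dpu.
from math import exp

step_yields = [0.75, 0.80, 0.95]           # per-step yields from the example above

y_rt = 1.0
for y in step_yields:
    y_rt *= y                               # rolled-throughput yield: product of steps
y_norm = y_rt ** (1 / len(step_yields))     # normalized (average per-step) yield

print(f"Yrt   = {y_rt:.4f}")                # 0.5700
print(f"Ynorm = {y_norm:.4f}")              # 0.8291

dpu = 0.25                                  # hypothetical defects per unit
print(f"P(zero defects) = e^-dpu = {exp(-dpu):.4f}")
```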
PROCESS MAPPING
Understanding the visual display of the process
Understand the “what you think it is…”
Understand the “what it actually is…”
Understand the “what you would like it to be…”
Differentiate between business processes — strategic, business processes — internal, the SIPOC model, and detailed subprocess maps.
CAUSE AND EFFECT
Understand the function of the cause-and-effect diagram in a nonmanufacturing environment. Differentiate between manufacturing and nonmanufacturing causes, e.g., manufacturing: manpower — people, machine, method, material, measurement, mother nature — environment; transactional/commercial/service: manpower — people, policies, procedures, place, measurement, mother nature — environment.
Cause-and-effect matrix — the idea is to identify and evaluate control plans for key process input variables (KPIVs). A worked sketch of the scoring follows the steps below.
Step 1. Identify key customer requirements.
Step 2. Rank order and assign a priority factor to each output.
Step 3. Identify all process steps and materials (inputs) from the process map.
Step 4. Evaluate correlation (a low score means a small effect on the output variable; a high score means the input can greatly affect the output variable).
Step 5. Cross-multiply correlation values with priority factors and add for each input.
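The worked sketch promised above shows Step 5 as a small calculation; every output name, priority factor, and correlation score here is hypothetical.

```python
# Cause-and-effect matrix scoring (Step 5): correlation x priority, summed per input.
priorities = {"on-time delivery": 10, "dimension A": 8, "surface finish": 5}

correlations = {   # correlation of each input to each output, e.g., on a 0/1/3/9 scale
    "oven temperature":  {"on-time delivery": 1, "dimension A": 9, "surface finish": 3},
    "operator training": {"on-time delivery": 9, "dimension A": 3, "surface finish": 1},
}

for process_input, scores in correlations.items():
    total = sum(scores[output] * weight for output, weight in priorities.items())
    print(f"{process_input}: {total}")
# Higher totals point to the key process input variables (KPIVs) to study first.
```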
THE APPROACH TO C&E MATRIX
Approach 1: a) place the outputs across the top of the matrix, b) rank the outputs, c) place the inputs down the left side of the matrix, starting with the first process step and moving to the last.
Approach 2: a) place the outputs across the top of the matrix, b) place the process steps down the left side of the matrix, c) correlate process steps to outputs, d) Pareto the process steps, e) start a new C&E matrix with inputs from the most critical three or four process steps.
LINKS OF C&E MATRIX TO OTHER TOOLS
Capability summary – key outputs are evaluated
FMEA – potential problems are identified
Control plan – key inputs are evaluated
BASIC STATISTICS
The ten basic statistical concepts:
• Types of data
• Central tendency
• Confidence intervals
• Variation
• Spread
• Central limit theorem
• Distributions
• Degrees of freedom
• Probability
• Accuracy and precision
Process capability – customer requirements; process characterization; process stability
Cp, Cpk, Pp, and Ppk — in Six Sigma the focus is first on centering and then on controlling the spread
Rational subgroupings
Sampling
Sample
Short- vs. long-term capability (performance)
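A brief numeric sketch of the indices named above; the data and specification limits are invented, and for brevity one sample standard deviation stands in for both the short-term (Cp, Cpk) and long-term (Pp, Ppk) estimates, which in practice come from within-subgroup and overall variation respectively.

```python
# Capability indices from a hypothetical sample.
from statistics import mean, stdev

data = [10.1, 9.9, 10.3, 10.0, 9.8, 10.2, 10.1, 9.9, 10.0, 10.2]
LSL, USL = 9.4, 10.6

mu, sigma = mean(data), stdev(data)
cp = (USL - LSL) / (6 * sigma)                  # potential capability (spread only)
cpk = min(USL - mu, mu - LSL) / (3 * sigma)     # actual capability (penalizes off-center)

print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")        # Cpk < Cp whenever the mean is off target
```

Note how the pair reflects the priority stated above: centering first (it drives Cpk toward Cp), then spread.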
CONVERTING DPM TO A Z EQUIVALENT
Understand the Z values
Know how to use the Z table
Standard transformations
Understand the difference between pooled and total standard deviation:
Pooled — taken over a relatively short time. It takes into account only the variation within a subset and common causes of variation.
Total standard deviation — taken from many samples that represent the shift and drift that occur in the population due to all causes of variation.
Graphical tools — the most important analysis tool is to ALWAYS plot the data.
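Converting a DPM figure to its Z equivalent is an inverse-normal lookup; the sketch below assumes scipy, a hypothetical defect rate, and the conventional 1.5-sigma shift between long- and short-term Z.

```python
# DPM to Z equivalent, a sketch with a hypothetical defect rate.
from scipy.stats import norm

dpm = 66_807                       # hypothetical long-term defects per million
z_lt = norm.ppf(1 - dpm / 1e6)     # long-term Z from the inverse normal CDF
z_st = z_lt + 1.5                  # conventional 1.5-sigma shift to short-term

print(f"Z.lt = {z_lt:.2f}, Z.st = {z_st:.2f}")   # about 1.50 and 3.00
```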
BASIC GRAPHS
Pareto
Time series
Standard plot
Boxplot
Histograms
Marginal plots
Scatter plots
Control charts
Other charts
Check sheets
Cause and effect
ANALYZE
Process optimization — hypothesis testing
Roles of statistics
Population vs. samples
Why do we need hypothesis testing?
Significance vs. importance
IMPROVE
Design of experiments (DOE)
What is it?
Objectives
Strategy
One, two, multiple factors at a time
Interactions
Model building
CONTROL
Process optimization — process control
What is control?
Sources of variation
Types of variation
Statistical process control (SPC)/control charts
Attribute
Variable
Project report outs
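To close the transactional outline with a taste of the control tools, here is a sketch of X-bar and R chart limits. The subgroup data are invented; A2, D3, and D4 are the standard control chart constants for subgroups of five.

```python
# X-bar/R control limits from hypothetical subgroups of size five.
from statistics import mean

subgroups = [
    [10.2, 9.9, 10.1, 10.0, 10.3],
    [10.1, 10.0, 9.8, 10.2, 10.1],
    [9.9, 10.2, 10.0, 10.1, 9.8],
]

x_bars = [mean(g) for g in subgroups]           # subgroup averages
ranges = [max(g) - min(g) for g in subgroups]   # subgroup ranges
x_dbar, r_bar = mean(x_bars), mean(ranges)

A2, D3, D4 = 0.577, 0.0, 2.114                  # constants for n = 5
print(f"X-bar: UCL={x_dbar + A2 * r_bar:.3f}, CL={x_dbar:.3f}, LCL={x_dbar - A2 * r_bar:.3f}")
print(f"R:     UCL={D4 * r_bar:.3f}, CL={r_bar:.3f}, LCL={D3 * r_bar:.3f}")
```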
SIX SIGMA PROJECT CHAMPION — TECHNICAL TRAINING
This training in the implementation process of Six Sigma is intended to familiarize the individuals who will facilitate the logistics, and mediate conflict, in the Six Sigma diffusion process within the technical areas of the organization. The technical manager ensures that the appropriate help and resources are available to the master black belts and black belts in pursuing process improvement. To be sure, the material for this training is much more technical (more detailed) than the transactional material, even though it is designed for high-level users. The purpose of this is to ensure that the process and requirements of the Six Sigma methodology are understood. A technical champion is not expected to do a project, but he is expected to understand the process, provide support, and eliminate bottlenecks, especially when multiple departments are involved. In our estimation, the emphasis of this training should be on why as opposed to how. A technical champion must be familiar with the process but also must understand the foundations of the approach in such a way that he can ask the right questions. His understanding
should be on such a level that if he needs to explain the project to a green belt, he feels comfortable that his explanation would pass muster at the executive level as well. The opposite should hold true, too. It is often suggested that simulated exercises be sprinkled throughout the course to make the key points more emphatic. Traditional exercises may include defining a process and coming up with ways to improve that process; defining five to ten operational definitions in that process; working with some variable and attribute data; calculating the DPO; working with histograms, box plots, scatter plots, Pareto charts, and DOE set-ups; running an experiment with software; and others.
Because organizations and their goals are quite different, we will provide the reader with a suggested outline of the training material for this champion session. It should last 5 days and be taught by a master black belt or an outside consultant. The level of difficulty depends on the participants. Detailed information may be drawn from the first six volumes of this series. (Note: some of the material is the same as that of the transactional training.)
Introductions
Agenda
Ground rules
Exploring our values
Objectives
Definition: just as in the transactional training, the technical approach is based on the customer, the opportunity, and the successes. (It must be stressed that we will be applying Six Sigma methodology rather than following the “pack.” Make sure that emphasis is placed on the process and the tools. In the case of the process, participants must recognize that Six Sigma methodology should be followed systematically, as should attempts to reduce nonconformances that are important to the customer. With respect to the tools, participants must recognize that qualitative as well as quantitative techniques may be employed to resolve issues.) In other words:
• Know what is important to the customer
• Reduce nonconformances
• Center around the target
• Reduce variation
SIX SIGMA BREAKTHROUGH GOAL
• A solution for improving company value
• A business strategy for net income improvement
• A means to enhance customer perception
SIX SIGMA GOAL
Defect reduction – why it is important to focus on the cost of poor quality (COPQ)
Yield improvement
Improved customer satisfaction and higher return on investment — learning faster than our competitors is the only sustainable advantage. This is why the Six Sigma methodology emphasizes breakthrough improvements rather than incremental ones.
COMPARISON BETWEEN THREE SIGMA AND SIX SIGMA QUALITY
SHORT HISTORICAL BACKGROUND
The business case for implementing Six Sigma: after the definition, this item is very important. It must be understood by all before moving on to a new topic. It is the reason why Six Sigma is going to be implemented in your organization. Therefore, not only must it be understood, but in addition it must make sense and be believable. Sharing the executive committee members list with everyone is one of the ways to make individuals understand the importance of the implementation process. Another way is to provide some background about the black belts as individuals and their commitment to Six Sigma and to identify specific projects that plague the organization, either genuine financial problems or issues perceived as problems by customers. Yet another way may be to present some specific examples of your company in relationship to your competitors.
OVERVIEW OF THE BIG PICTURE
Deployment structure: the Six Sigma implementation process must be a top-down flow; otherwise, it will not work.
Executive leadership (part-time basis): executives should be the drivers of the Six Sigma process in directions that meet key business goals and address key customer satisfaction concerns. Key roles are:
• Establish the vision
• Articulate the business strategy
• Provide resources
• Remove roadblocks
• Support the culture change
• Monitor the results
• Define the criteria for success and make others accountable for the results
• Align the systems and structures with the changes taking place
• Participate with the black belts through project reviews and recognition of results
Master black belts (full-time basis): they are the trainers, coaches, and facilitators. They are the experts in Six Sigma tools and methodologies and are responsible for training and coaching black belts. Master black belts, or
shoguns as we call them, may also be responsible for leading large projects on their own. Key roles are:
• Be the expert in tools and concepts
• Facilitate and implement Six Sigma in the organization
• Certify the black belts
• Assist in identifying projects
• Coach and support black belts
• Participate in project reviews
• Develop new tools or modify old tools for applications
• Lead major programs
• Share best practices
• Drive passion
• Partner with champion
Project champions (part-time basis): they drive Six Sigma through the process and are accountable for the performance of black belts and the results of Six Sigma projects in their area. They are the conduit between the executive leadership and the black belt, and they are supposed to eliminate bottlenecks and conflicts that arise during projects, especially projects with cross-functional responsibilities. Key roles are:
• Execute the vision through the organization
• Create and maintain passion
• Identify and prioritize projects
• Identify and select the black belts
• Develop the reward and recognition program
• Share best practices in the organization
• Remove barriers for black belts
• Drive and communicate results
• Develop a comprehensive training plan
• Communicate the linkage between Six Sigma and the business strategy
Black belts (full-time basis): they are accountable for driving projects and are responsible for leading and teaching Six Sigma processes within the company. Black belts are also responsible for applying Six Sigma tools to complete a predetermined number of projects worth at least $250,000 each (projects are commonly worth between $400,000 and $600,000). It is expected that the improvement will be a breakthrough improvement with a magnitude of 100×. Key roles are:
• Full time
• Identify barriers
• Lead project teams
• Identify project resources
• Be the expert of the breakthrough strategy
• Teach and coach as needed
• Manage project risk
• Deliver results on time
• Report project status
• Complete final report and control plan
• Ensure results are sustained
Green belts (part-time basis): they are expected to help black belts with expediting and completing Six Sigma projects and may take the lead in small projects of their own. They should also look for ways to apply Six Sigma problem-solving methods within their work area. Key roles are:
• Apply the methodology in functional areas
• Support the black belts in completing projects
• Be a project team member
• Help ensure improvements are sustained
• Concurrent with existing responsibilities
Process Driven, NOT Event Driven
Rollout strategy (emphasize the importance of projects and measurement)
Management’s responsibility
Training requirements
Black belts
Green belts
Project definition:
• Who is my customer?
• What matters? What are the CTQs?
• What is the scope?
• What nonconformance am I trying to reduce? By how much?
• Is the goal of reduction realistic?
• What is the current cost of poor quality?
• What benefits will we get if we improve to the point of reaching our goal?
Project selection: define the project charter. This will provide the appropriate documentation for communicating progress and direction not only to the rest of the team but also to management. To use the CT matrix, follow these seven steps:
• Identify the customers
• Meet with customers and identify CTSs
• Perform CTY then CTX breakdown and construct the CT matrix
• Identify critical or leverage processes
• Set improvement objectives and develop action plans
• Assign agents
• Identify CTPs for critical or leverage processes through Six Sigma projects
IDENTIFY CUSTOMER
Y = f(X). Y is the output and the Xs are the inputs. Identify Y and determine the Xs. It is imperative to understand that most often a single Y may be influenced by more than one X. Therefore, we may have Y = f(X1, X2, …, Xn). However, that is not all. A single X may itself cascade into a further level, such that for X1 we may have X1 = f(x1, x2, …, xn). This is called cascading.
Apply the project selection checklist. To ensure the selected issue, concern, or problem will make a good Six Sigma project, a checklist can be applied to verify the project’s potential. Simple criteria for selection are the following six questions:
• Does the project have recurring events?
• Is the scope of the project narrow enough?
• Do metrics exist? Can measurements be established in an appropriate amount of time?
• Do you have control of the process?
• Does the project improve customer satisfaction?
• Does the project improve the financial position of the company?
If the answer to all of these questions is yes, then the project is an excellent candidate. Another way to look at project selection may be to focus on impact, time, tools, metrics, financials, research, and team effort. Typical questions are:
• What corporate objective is supported by this project?
• What business group objective is addressed by the project?
• What customer will benefit from this project? How?
• Can the project be completed within 3 to 4 months?
• Could the process improvements be handled adequately via basic methods and techniques?
• Is the more structured Six Sigma approach and methodology desirable for this project?
• Will this project require application of all phases of Six Sigma?
• Have you defined the nonconformance opportunities?
• Do the baseline nonconformance data exist to support project selection?
• Is the nonconformance reduction offered greater than 70%?
• What improvements are expected in your area from the project?
• Are projected savings greater than or equal to $XXXK per year?
• Will this project lead to improvements with little or no capital?
• Is there a similar project already under way or proposed at another location?
• Can this project be led by a black belt?
• Can you identify the team members to start this project?
• Is capital investment required?
Develop a high-level problem statement. This is a high-level description of the issue to be addressed by the green belt or black belt. The problem statement will be the starting point for the application of the Six Sigma methodology. This is the point where the champion really needs to understand the process, because he or she has to “sell it” to management. In other words, he or she has to make the business case for the project.
THE DMAIC PROCESS
The model: it is a structured methodology for executing Six Sigma project activities. Make sure to point out here that the model is not linear in nature. Quite often, teams may find themselves in multiple phases at once so that thoroughness is established.
Define: the purpose is to refine the project team’s understanding of the problem to be addressed. It is the foundation for the success of both the project and Six Sigma.
Measure: the purpose is to establish techniques for collecting data about current performance that highlight project opportunities and provide a structure for monitoring subsequent improvements. Typical questions are:
• What is my process? How does it function?
• Which outputs affect CTQs most?
• Which inputs seem to affect outputs (CTQs) most?
• Is my ability to measure and/or detect “good enough”?
• How is my process doing today?
• How good could my (current) process be when everything is running smoothly?
• What is the best that my process was designed to do?
Analyze: the purpose is to allow the team to further target improvement opportunities by taking a closer look at the data. Typical questions are:
• Which inputs actually affect my CTQs most? By how much?
• Do combinations of variables affect outputs?
• If I change an input, do I really change the output?
• If I observe results from the same process and different locations and results appear to be different, are they really?
• How many observations do I need to draw conclusions?
• What level of confidence do I have regarding my conclusions?
• Can I describe the relationship between inputs and outputs in a statistical format?
• Do I know the inputs with the biggest impact on a given output?
Improve: the purpose is to generate ideas about ways to improve the process; design, pilot, and implement improvements; and validate improvements. Typical questions are:
• Once I know for sure which inputs most impact my outputs, how do I set them?
• How many trials do I need to run to find and confirm the optimal setting and procedure of these key inputs?
• Do I use systematic experimentation to find the input combination that delivers the optimal output?
Control: the purpose is to institutionalize process and product improvements and monitor ongoing performance. Typical questions are:
• Once I have reduced the nonconformances, how do the functional team and I keep them there?
• How does the functional team keep it going?
• What do I set up to keep it going even when things like people, technology, and customers change?
Select product or process key characteristics, e.g., customer Y, using the improvement strategy — the DMAIC model. Please notice that every output is data-based; therefore, the decision is data-based.
Define/measure:
• Define performance standards for Y. The focus is Y.
• Validate the measurement system for Y. The focus is Y.
• Establish process capability of creating Y. The focus is Y.
• Define improvement objectives for Y. The focus is Y.
Analyze:
• Identify variation sources in Y. The focus is Y.
• Screen potential causes for change in Y and identify the vital few Xs. The focus is on x1, x2, …, xn.
• Discover variable relationships among the vital few Xs. The focus is on x1, x2, …, xn.
Improve:
• Establish operating tolerances on the vital few Xs. The focus is on the vital few Xs.
• Validate the measurement system for the Xs. The focus is on the vital few Xs.
• Determine the ability to control the vital few Xs. The focus is on the vital few Xs.
Control:
• Implement a process control system on the vital few Xs. The focus is on the vital few Xs.
DETAILED MODEL EXPLANATION
Define the organization’s values. Key questions are:
• What do we really value?
• Who are our customers and what do they need?
• Who are we and what do we do?
• What does customer satisfaction mean?
• Do our values correlate with those of our customers?
• How do we verify that we meet internal and external needs?
PERFORMANCE METRICS REPORTING
• The classical view vs. the Six Sigma approach
• Understand the difference
• Understand the magnitude of this difference
ESTABLISH CUSTOMER FOCUS
• What is important to the customer?
• How do we know?
• Critical to satisfaction
• Importance of identifying the CTQs
• Understand the difference between functional wants and requirements.
• Understand the interaction between what customers need and what suppliers do.
DEFINE VARIABLES: KEY QUESTIONS ARE
• What is meant by variables?
• What is a dependent variable?
• What is an independent variable?
• What other labels are synonymous with dependent and independent variables?
• What is meant by leverage variable?
• What strategies can be used to isolate leverage variables?
THE FOCUS OF SIX SIGMA
• Y = f(X)
• The critical to (CT) concept. Key questions are:
• What does critical to satisfaction (CTS) mean in terms of customers?
• What does critical to quality (CTQ) mean in terms of a product, service, or transaction?
• What does critical to delivery (CTD) mean in terms of a product, service, or transaction?
• What does critical to cost (CTC) mean in terms of a product, service, or transaction?
• What does critical to process (CTP) mean in terms of a product, service, or transaction?
• What is the relationship between defect opportunities and CTs?
• The critical to quality (CTQ) and critical to process (CTP)
• Customer satisfaction
PROCESS OPTIMIZATION PROCESS BASELINE
Key questions are:
• What is a process baseline and how is it different from product benchmarking?
• What is the relationship between a process baseline and process mapping?
• What is the relationship between a process baseline, CTs, and nonconformance opportunities?
• What are the key performance metrics associated with a process baseline?
• How should a process baseline be established?
• How can a process baseline be improved?
Supplier improvement. Supplier capability is a critical piece of the breakthrough strategy.
Measure
Process Characterization
Understanding the concept of rolled-throughput yield
Traditional yield: Y = S/U, where Y is yield, S = number of units that pass, and U = number of units tested
Definition of nonconformance (defect)
Six Sigma definition of yield (yield at every step in the process)
Yield without rework
Hidden factory (rework, nonvalue activities)
First pass yield (Yrt) — no rework
Normalizing yield (Ynorm) is the average yield per step of a sequential process, e.g., a process with three steps and per-step yields of 75%, 80%, and 95% gives:
Yrt = 0.75 × 0.80 × 0.95 = 0.57 (57%); Ynorm = (0.57)^(1/3) = 82.91%
Rolled throughput yield = P(operation 1) × P(operation 2) × … × P(operation n) = e^(–dpu)
True yield (the Poisson term at r = 0):
Y = [(d/u)^r e^(–d/u)] / r! = [(d/u)^0 e^(–d/u)] / 0! = 1(e^(–d/u)) / 1 = e^(–d/u)
where Y = yield, d/u = the defects per unit, and e = 2.718. Therefore, when r = 0, we obtain the probability of zero defects, i.e., the rolled throughput yield. This is very different from the classical determination of yield.
Poisson approximation
Useful formulas:
DPU = defects per unit
TOP (total opportunities) = units × opportunities
DPO (defects per opportunity) = defects / TOP
Probability the opportunity is defective = DPO
Probability the opportunity is not defective = Pr(ND) = 1 – DPO
Rolled yield is the likelihood that any given unit of product will contain zero defects (recommended when you know the yield of each process element or opportunity):
Yrt = Pr(ND)^(number of opportunities)
Yrt = Pr1(ND) × Pr2(ND) × … × Prn(ND)
Integration of rework loops: understand the ramifications of the processes that are causing the defects.
PROCESS MAPPING
Understand the visual display of the process
Understand the “what you think it is…”
Understand the “what it actually is…”
Understand the “what you would like it to be…”
Differentiate between business processes — strategic, business processes — internal, the SIPOC model, and detailed subprocess maps
CAUSE AND EFFECT
Understand the function of the cause-and-effect diagram in a nonmanufacturing environment. Differentiate between manufacturing and nonmanufacturing causes, e.g., manufacturing: manpower — people, machine, method, material, measurement, mother nature — environment; transactional/commercial/service: manpower — people, policies, procedures, place, measurement, mother nature — environment.
Cause-and-effect matrix — the idea is to identify and evaluate control plans for key process input variables (KPIVs).
Step 1. Identify key customer requirements.
Step 2. Rank order and assign a priority factor to each output.
Step 3. Identify all process steps and materials (inputs) from the process map.
Step 4. Evaluate correlation (a low score means a small effect on the output variable; a high score means the input can greatly affect the output variable).
Step 5. Cross-multiply correlation values with priority factors and add for each input.
THE APPROACH TO C&E MATRIX
Approach 1: a) place the outputs across the top of the matrix, b) rank the outputs, c) place the inputs down the left side of the matrix, starting with the first process step and moving to the last.
Approach 2: a) place the outputs across the top of the matrix, b) place the process steps down the left side of the matrix, c) correlate process steps to outputs, d) Pareto the process steps, e) start a new C&E matrix with inputs from the most critical three or four process steps.
LINKS OF C&E MATRIX TO OTHER TOOLS
Capability summary – key outputs are evaluated
FMEA – potential problems are identified
Control plan – key inputs are evaluated
BASIC STATISTICS
The ten basic statistical concepts:
• Types of data
• Central tendency
• Confidence intervals
• Variation
• Spread
• Central limit theorem
• Distributions
• Degrees of freedom
• Probability
• Accuracy and precision
Process capability – customer requirements; process characterization; process stability
Cp, Cpk, Pp, and Ppk — in Six Sigma the focus is first on centering and then on controlling the spread
Rational subgroupings
Sampling
Sample
Short- vs. long-term capability (performance)
CONVERTING DPM TO A Z EQUIVALENT
Understand the Z values
Know how to use the Z table
Standard transformations
Understand the difference between pooled and total standard deviation:
Pooled — taken over a relatively short time. It takes into account only the variation within a subset and common causes of variation.
Total standard deviation — taken from many samples that represent the shift and drift that occur in the population due to all causes of variation.
Graphical tools — the most important analysis tool is to ALWAYS plot the data.
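The pooled-versus-total distinction above can be demonstrated directly. In this sketch the subgroup data are invented, with the subgroup means deliberately drifting so that the total (long-term) standard deviation exceeds the pooled (short-term) one.

```python
# Pooled (within-subgroup) vs. total (overall) standard deviation, invented data.
from math import sqrt
from statistics import mean, variance

subgroups = [
    [10.0, 10.1, 9.9, 10.0],
    [10.4, 10.5, 10.3, 10.4],   # the subgroup means drift over time
    [9.7, 9.8, 9.6, 9.7],
]

# Pooled: average the within-subgroup variances (common causes only;
# with equal subgroup sizes this is the simple mean of the variances).
pooled_sd = sqrt(mean([variance(g) for g in subgroups]))

# Total: one variance over all the data (includes the shift and drift).
all_data = [x for g in subgroups for x in g]
total_sd = sqrt(variance(all_data))

print(f"pooled (short-term) sd = {pooled_sd:.3f}")
print(f"total  (long-term)  sd = {total_sd:.3f}")   # noticeably larger
```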
BASIC GRAPHS
Pareto
Time series
Standard plot
Boxplot
Histograms
Marginal plots
Scatter plots
Control charts
Other charts
Check sheets
Cause and effect
ANALYZE
Process optimization — hypothesis testing
Roles of statistics
Population vs. samples
Why do we need hypothesis testing?
Significance vs. importance
IMPROVE
Design of experiments (DOE)
What is it?
Objectives
Strategy
One, two, multiple factors at a time
Interactions
Model building
CONTROL
Process optimization — process control
What is control?
Sources of variation
Types of variation
Statistical process control (SPC)/control charts
Attribute
Variable
Project report outs
SIX SIGMA PROJECT CHAMPION TRAINING — MANUFACTURING
The following outline serves as an opening discussion to give manufacturing champions a “feel” for the Six Sigma methodology. It is given here as a guideline, but one may forego this discussion. If used, it should last no more than 4 hours. It is considered an introduction because it sets the tone for the training that follows. Most organizations try to implement Six Sigma in manufacturing without having examined some of their own internal situations. This short excursion provides for venting, explanation, motivation, and the need for Six Sigma without really delving into detail in any particular area.
Introductions
Agenda
Ground rules
EXPLORING OUR VALUES
SHORT OVERVIEW
Problem – potential project
What is Six Sigma?
The goals of Six Sigma
Why focus on COPQ?
Knowledge is the foundation
Directions of knowledge
What makes Six Sigma different?
Leaders asking the right questions
Foundation of the tools
Collecting data
Questions to be answered
Drive data collection
Statistics – intuition vs. data
The changing quality philosophy
The cost of poor quality (COPQ)
A statistical look
Variation is the enemy
Consequences of variation
Primary sources of variation
How do we measure variation and quality?
The standard deviation
What makes Six Sigma different?
Where does industry stand?
Getting to Six Sigma
The impact of added inspection
Impact of complexity on inspection
The breakthrough methodology
The focus of Six Sigma
Improvement strategy (DMAIC)
The Six Sigma roadmap
DMAIC problem solving and fixing method
A picture of a process – what is a process?
The roadmap
Project definition
Measure
Define
Measure
Analyze
Improve
Control
Breakthrough strategy
Six Sigma breakthrough
Six Sigma terms and definitions
SIX SIGMA MANUFACTURING CHAMPION TRAINING — GETTING STARTED
Open the formal training with discussion of key items such as:
Ranking our values
Comparing value systems
Relating behavior and values
Measurements get attention
Performance metrics reporting
The classical view of performance
Understanding the differences
The magnitude of difference
What do we measure today?
Establishing customer focus
Critical to satisfaction: identifying CTQs
Contrasting views — customers and suppliers
Customers speak a different language
Maximizing customer value
Maximizing interactions
Linking customer needs and what we do
Defining variables — should be a good discussion of:
What is meant by the term variables?
What is a dependent variable?
What is an independent variable?
What other labels are synonymous with dependent and independent variables?
What is meant by the phrase leverage variable?
What strategies can be used to isolate leverage variables?
CT concept
CTQ and CTP characteristics
Customer satisfaction:
Quality
Delivery
Price
The focus of Six Sigma
The model of Six Sigma:
• Define
• Measure
• Analyze
• Improve
• Control
The leverage principle
Process optimization
Six Sigma — key questions:
What does the phrase critical to satisfaction mean in terms of a customer?
What does the phrase critical to quality mean in terms of a product, service, or transaction?
What does the phrase critical to delivery mean in terms of a product, service, or transaction?
What does the phrase critical to cost mean in terms of a product, service, or transaction?
What does the phrase critical to process mean in terms of a product, service, or transaction?
What is the relationship between defect opportunities and CTs?
CT matrix components:
The CT matrix structure
“Critical to” characteristics
CTS characteristics
The product tree (CTY)
Process tree (CTX tree)
The nature of opportunities:
Nature of an opportunity
Opportunity and density
Frequently used data types and distributions
The opportunity hierarchy
Opportunity and defect counting strategy
Independence and opportunities
Complexity and capability
Six Sigma process baselines — begin discussion with the following questions:
What is a process baseline and how is it different from a product benchmark?
What is the connection between a process baseline and a process map?
What is the connection between a process baseline, CTs, and defect opportunities?
What are the key performance metrics associated with a process baseline?
How should a process baseline be established?
How can a process baseline be improved?
Where are we on the Six Sigma journey?
Macro-level product benchmarking
Benchmarking engineering drawings
What is process baselining?
Identifying key processes
Baselining manufacturing processes
Baselining transactional processes
Rolled-throughput yield:
The classical perspective of yield
What does our intuition tell us?
Several competing notions of yield
Classical/traditional: looks at quality at the end of the process. First-time yield: yield exclusive of rework Application: used to determine the quality level of individual processes or process steps Rolled-throughput yield: probability of zero defects (100% yield) Application: used to estimate the cumulative quality level of a multistep sequential process with statistically independent process steps Normalized yield: average yield of consecutive processes Application: used to estimate the average quality level of an entire process Measuring first pass yield Rolled-throughput yield Calculating normalized yield The hidden factory and rolled yield Yield calculation example Notes on the Poisson approximation – when and how to use it The effect of independence Understanding the hidden factory — a simulation exercise may be appropriate here Basic statistics and probability distributions Statistics: the most important analysis tool Dot diagram Histograms Measures of location Mean: arithmetic average of a set of values Reflects the influence of all values Strongly influenced by extreme values Would you prefer your income to be the mean or the median? Median: reflects the 50% rank — the center number after a set of numbers has been sorted from low to high Does not include all values in the calculation Is "robust" to extreme outlier scores Why would we use the mean instead of the median in process improvement? Sample mean for a distribution Sample median Relationship of the mean and median Set of data Measures of spread Measures of variation Standard deviation Deviation is the distance from the mean Deviation score = observation – true mean
Variance = mean or average of squared deviation scores σ2 is the symbol for variance Standard deviation = square root of variance σ is the symbol for the standard deviation Population vs. sample Degrees of freedom Sample statistics Additive property of variances — the variance for a sum or difference of two independent variables is found by adding both variances. (Special note: If y1 and y2 are not independent the covariance term must be included.) V(y1 + y2) = V(y1) + V(y2) V(y1 – y2) = V(y1) + V(y2) Accuracy and precision: Accuracy describes centering Precision describes spread Standard deviation as it relates to specifications DPM Real world defect per million data Probability: Probability density function Normal distribution Normal probability plots Standardized Z transformation The empirical rule of the standard deviation as it relates to normal and other distributions: Rule 1 Roughly 60–75% of the data are within a distance of one standard deviation on either side of the mean. Rule 2 Usually 90–98% of the data are within a distance of two standard deviations on either side of the mean. Rule 3 Approximately 99% of the data are within a distance of three standard deviations on either side of the mean. Central limit theorem — definition The sampling distribution of the mean Types of data — attribute or variable? Attribute data (qualitative) • Categories • Yes, no • Go, no go • Machine 1, machine 2, machine 3
• Pass/fail • Good/defective • Maintenance equipment failures, fiber breakouts, number of seeds, number of defects Variable data (quantitative) • Continuous data — decimal places show absolute distance between numbers, e.g., time, pressure, alignment, diameter • Discrete data — data are not capable of being meaningfully subdivided into more precise increments Distribution: Binomial distribution: the binomial distribution is used where there are only two possible outcomes for each trial — repeated trials, e.g., good/bad, defective/not defective, success/failure Parameters n = number of trials p = probability of success (0 < p < 1) Assumptions 1. The probability of a success is the same for each trial. 2. There are n trials, where n is constant. 3. The n trials are independent. Mean of the binomial distribution μ = n*p Variance of the binomial distribution σ2 = n*p*(1 – p) Defect The binomial distribution table Poisson distribution Poisson example Introduction to process mapping Definition: process mapping Versions of a process Next steps Common symbols Basic structure Levels of a process Subprocess mapping techniques SIPOC Process flowchart Alternate path method Deployment or cross-functional map/flowchart Cause and effect What is a cause and effect diagram? A visual tool to identify, explore, and graphically display, in increasing detail, all the possible causes related to a problem or condition to discover its root causes. Focuses the team on the content of the problem
Creates a snapshot of the collective knowledge of the team Creates consensus of the causes of a problem Builds support for resulting solutions Focuses the team on causes, not symptoms Why use cause and effect diagrams? To discover the most probable causes for further analysis To visualize possible relationships between causes for any problem, current or future To pinpoint conditions causing customer complaints, process errors, or nonconforming products To provide focus for discussion To aid in development of technical or other standards or process improvements Construction of a cause and effect diagram Cause and effect matrix: Cause and effect matrix steps Linking the C&E matrix to other tools Use as input for: FMEA Capability analysis Control plans Process capability and performance Process capability studies Process capability studies answer the following questions: How are we doing? How well could we be doing? What can we expect tomorrow, next week…? Are the process improvements making a difference? Which supplier is giving us the best quality? Are our customers' expectations being met? Objectives The three aspects of process capability Which process is the best? Why? Process characterization Converting attribute DPM to a Z equivalent Requirements may change! Roadmap to process capability: Step #1: Define the customer-based specifications Step #2: Characterize the process: for a normal distribution use σ and Z; for attribute data use DPMO Distributions: How do we determine capability? The Z transformation (illustrated in the sketch below)
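As a worked illustration of the Z transformation and the attribute DPM-to-Z conversion listed above, here is a minimal Python sketch using SciPy's normal distribution; the process data and specification limits are hypothetical.

from scipy.stats import norm

mean, sigma = 10.02, 0.03    # hypothetical process characterization
lsl, usl = 9.95, 10.10       # customer-based specifications

# Standardized Z transformation for each specification limit.
z_usl = (usl - mean) / sigma
z_lsl = (mean - lsl) / sigma
p_defect = norm.sf(z_usl) + norm.sf(z_lsl)  # tail areas beyond the specs
print(f"Z.usl={z_usl:.2f}  Z.lsl={z_lsl:.2f}  DPM={p_defect * 1e6:,.0f}")

# Converting attribute DPM back to a Z equivalent.
dpm = 6210.0                 # e.g., an observed defects-per-million rate
z_equiv = norm.isf(dpm / 1e6)
print(f"{dpm:.0f} DPM is roughly Z = {z_equiv:.2f}")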
Six Sigma performance indicators: Map to process capability indicators Process stability Process control Process capability So how do we know if a process is stable? Standard deviation Which standard deviation should we use? Another way of thinking about it... Pooled vs. overall standard deviation (see the sketch following this section) Data collection guidelines (for more details see Volume 5 of this series). For determining "actual" process performance over the long term: Determine subgroup sample size. Determine rational subgrouping — specific to each process or project; things to consider: heads on a multihead machine, different distribution centers, different marketing channels, different regions. Determine study duration — long enough to experience all the causes of variation; include different lots of raw material, different operators, different temperatures, different months, different times of the month. Typically 100–200 data points. For determining short-term process performance: Determine subgroup sample size (more on this in the section on sample size determination). Determine rational subgrouping — specific to each process/project; things to consider: heads on a multihead machine, different distribution centers, different marketing channels, different regions. Determine study duration — short enough to eliminate all the special causes of variation; include one lot of "good" raw material and a single operator; consider same day, same shift; the process running as it was intended to run. Typically 30–50 data points. How much shift should we expect? (Introduction of a software package is appropriate here.) Short- vs. long-term capability Multivariate charts Improvement Initial study of the effects of controllable inputs, noise, and material input variables on the output variables. Major focus: study the uncontrollable noise variables first! Variation in the noise variables produces chronic and acute mean shifts and changes in variability that lead to process instability.
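A minimal Python sketch of the pooled vs. overall standard deviation comparison raised above, on hypothetical rational subgroups. The overall estimate absorbs subgroup-to-subgroup mean shifts (long-term variation); the pooled, within-subgroup estimate deliberately excludes them (short-term variation).

import statistics

# Hypothetical rational subgroups: five pieces per shift, four shifts.
subgroups = [
    [10.1, 10.0, 10.2, 9.9, 10.0],
    [10.3, 10.4, 10.2, 10.5, 10.3],  # note the shift between subgroups
    [9.8, 9.9, 10.0, 9.7, 9.9],
    [10.2, 10.1, 10.3, 10.2, 10.4],
]

# Pooled (within-subgroup) standard deviation: short-term variation only.
# Averaging the variances is valid here because subgroup sizes are equal.
pooled_sd = statistics.fmean(statistics.variance(g) for g in subgroups) ** 0.5

# Overall standard deviation: all data lumped together, so it also
# reflects the drift of the subgroup means.
overall_sd = statistics.stdev([x for g in subgroups for x in g])

print(f"pooled (short-term) sd  = {pooled_sd:.3f}")
print(f"overall (long-term) sd  = {overall_sd:.3f}")  # larger when means drift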
Introduction to hypothesis testing The role of statistics — past vs. future: Why do we need hypothesis testing? Population parameters and sample statistics When do we need hypothesis testing? Tests of significance Hypothesis Hypothesis testing — types of errors Interpreting hypothesis tests: two ways Hypothesis testing road map Commonly used hypothesis tests Hypothesis testing definitions Null hypothesis (Ho) — statement of no change or difference. This statement is assumed true until sufficient evidence is presented to reject it. Alternative hypothesis (Ha) — statement of change or difference. This statement is considered true if Ho is rejected. Type I error — the error in rejecting Ho when it is in fact true, or in saying there is a difference when, in fact, there is no difference. Alpha risk — the maximum risk or probability of making a Type I error. This probability is always greater than zero and is usually established at 5%. The researcher decides the greatest level of risk that is acceptable for rejecting Ho. Significance level — same as alpha risk. Type II error — the error in failing to reject Ho when it is in fact false, or in saying there is no difference when there really is a difference. Beta risk — the risk or probability of making a Type II error, or overlooking an effective treatment or solution to the problem. Design of experiments: What is a designed experiment? Define the ideal function Define the transfer function Objectives of a designed experiment: Strategy of experimentation The intuitive approach — one factor at a time Interactions Model building Process control: Why control? (Here you may want to introduce M. Harry's (1997, p. 1.4) quotes on control and understanding. They are: • We don't know what we don't know. • If we can't express what we know in the form of numbers, we really don't know much about it. • If we don't know much about it, we can't control it. • If we can't control it, we are at the mercy of chance.)
Process control focuses on the following: Who is responsible for the day-to-day maintenance of the control plan? What is this individual or group of individuals responsible for correcting? Who is to be contacted if there is a problem? What infrastructure is in place or needed to support the control plan? • Training • Documentation • Data collection method…? Who is responsible for auditing the control plan to: Assure compliance Update due to changes in capability, requirements, etc. Share best practices? Quality system structure Sources of variation: Types of variation Common cause Special cause SPC/control charts Team charter Six Sigma breakthrough strategy deployment: Six Sigma roles Deployment process — business leadership attends a one or two-day executive session. Identify and train business leadership (champions, master black belt). Leverage — an individual within the organization has the skills and capabilities and is certified as master black belt (MBB) Emerge — Black Belt is identified and certified as MBB Deployment process — identify improvement projects — champions, with MBBs and aligned with the business strategy, will identify improvement projects in the business and assign black belts for implementation Deploy black belts — 1 week per month, with 3 weeks in between for application Total COPQ Six Sigma breakthrough payback model • Black belt delivers $200,000+ per project (net income line) • Black belt does four to five projects per year • Black belt delivers $1 million bottomline impact per year Roles of an executive Roles of a champion Roles of a master black belt Be the expert in tools and concepts Develop and deliver training to various levels of the organization Certify black belts Assist in the identification of projects Coach and support BBs in project work
Participate in project reviews to offer technical expertise Partner with the champions Demonstrate passion around Six Sigma Share best practices Take on leadership of major programs Develop new tools or modify old tools for application Understand the connection between Six Sigma and the business strategy Roles of a black belt Be knowledgeable of the breakthrough strategy application Prepare initial project assessment to validate benefits Lead and direct the team to execute projects Determine most effective tools to apply Show the data Identify barriers Identify project resources Get input from knowledgeable functional experts/team leaders/coaches Report progress to appropriate leadership levels Present the final report Deliver results on time Solicit help from champions when needed Influence without direct authority Be a breakthrough strategy enthusiast Stimulate champion thinking Teach and coach breakthrough strategy methods and tools Manage project risk Ensure the results are sustained Document learning Prerequisites Has process/product knowledge Is willing and able to learn mathematical concepts Knows the organization Is a skillful communicator Is a motivated self-starter Is open-minded Is eager to learn new ideas Has a desire to drive change Possesses project leadership skills Is a team player Is respected by others Has a track record of delivering results Certification requirements for black belts Black belt training Project completion
Final report Sustain the gain Master black belt approval For a successful black belt project Lots of scorekeeping Institutionalizing Six Sigma Key to success — FOCUS, FOCUS, FOCUS!!! Attitude!!! Project selection — what makes a good black belt project? Six Sigma projects The nature of a Six Sigma project Project selection approaches Top-down approach to project selection Bottom-up approach to project selection Project selection using the CT matrix Optimization of core processes: Project authorization Reducing defects — the Six Sigma methodology uses a systematic approach to reducing defects that affect the customer. Each project definition must also clearly specify the defects that will be reduced. Typical questions to be asked are in the areas of: Project investment Project consideration What is commercial quality? Defect elimination Benefit Project report out
TIPS ON SUCCESS FOR SIX SIGMA MANUFACTURING CHAMPION
• Six Sigma depends on leadership
• Use report outs as a means of recognition
• Leverage BBs as a training resource
• Complete control plans in a timely fashion!
• Perform financial validation and functional sign-off
• Track results to the bottom line
• Cost avoidance vs. bottom-line savings
• Rewards and recognition
• DFSS lagging DMAIC: builds credibility, delayed benefits, builds on BB training
• Green belt projects are clustered around black belts
• Champions ensure that BBs focus on the methodology
• Functional ownership is a result of functional participation and buy-in
CHAMPION ISSUES 1. Managing change — a challenge of leadership Outcomes of Leading Change: Demonstrated management commitment to making change happen Visible, active, and public support for accelerating change Willingness to take personal initiative and support others’ initiatives in changing status quo Risk-taking, self-confident, and empowered behavior exhibited by individuals at multiple levels in the organization Conflicts and paradoxes inherent in change are identified and resolved Questions to ask to assess leading change: To what extent do our leaders seek and support process innovations that improve productivity? To what extent do our leaders clarify roles and responsibilities for accomplishing change? To what extent do our leaders vigorously question the status quo? To what extent do our leaders lead by example? To what extent do our leaders find opportunities in change rather than excuses for avoiding change? To what extent do our leaders pay attention (focus time, have passion) to change? To what extent do our leaders demonstrate personal competencies of a change advocate? To what extent do our leaders assign critical roles for change? Actions to lead change: Master the processes for accelerating change Manage time, energy, and focus Demonstrate personal leadership competencies required for change Articulate roles of change sponsor, agent, and target (Special note: one must be very careful here because sometimes leaders may say one thing but do the opposite. In fact, two possibilities for failure exist: a) leaders fail to engage in behaviors necessary for change and b) leaders are transferred too quickly before change has occurred.) 2. When or how is the need created? Shared belief among key players that there is a need and logic for change critical to the company over time, that the need for change is greater than the resistance to change, and that people are ready and anxious for change Dissatisfaction with the status quo Ability to separate the symptoms from the underlying problem Questions to ask to assess creating a need: How well do we currently perform on the issue we want to change? In the eyes of the customers? In the eyes of the employees?
How critical is improved performance on this issue for business results? Because of the threat? Because of the opportunity? Short term or long term? How widely shared is the need for change? To what extent is the need for change greater than the resistance to change?
3. Shaping a vision
The process of shaping a vision must start with a clear statement about the outcome of the change effort and must be articulated in both emotive (visual, enticing) and pragmatic (numerical) ways. A simple way to shape a vision is to create a picture of an improved state that is:
• Customer focused
• Challenging
• Easy to understand
• Not just one person's dream but indicative of the team's commitment
• Not fixed or static but evolving
• Actionable
Questions to ask to assess shaping a vision: To what extent has a vision been clearly articulated? What's in it for the customers? What's in it for the employees? To what extent is the vision simple and understandable? To what extent is the vision shared and known in the business? To what extent is the vision motivating and energizing?
Actions to shape a vision: See the world from the customer's point of view (What would the customer like more/less of?) Articulate a vision that others can readily embrace Create a bold and clear sense of purpose that energizes others Create enthusiastic support for business objectives
Common failures of vision: No single statement of a vision; everyone has her own version No buy-in that this is the direction we want to move; everyone does not support the vision in private talks No continuity; the vision changes too often No connection with customers; the vision focuses too much on what we want, not what the customer wants No simplicity of vision; the vision is too complex to be easily understood and translated to practice
4. Mobilizing commitment
• A coalition/network of relevant and committed individuals who are buying into the change effort and visibly supporting it
• An ability to manage conflicts inherent in change and engage in appropriate problem solving
• An extended commitment to change throughout the organization
Questions to ask to assess mobilizing commitment: To what extent have we identified the key individuals (inside and outside our organization) who must support and be involved with this change for it to be successful? To what extent do we have extended buy-in (among employees, customers, and suppliers) for the change to happen? To what extent have champions/sponsors been identified? Are they at the appropriate level with proper authority and responsibility to carry the mandate of change? Actions to mobilize commitment: Form a coalition of key players who will be change advocates, sponsors, and agents Leverage sponsors to form a network of support Determine who will resist and the causes of resistance, so that resistance can be overcome Recognize ways of dealing with conflict to build commitment Engage in appropriate problem-solving activity Common failures of mobilizing commitment: No political sensitivity in change Glory of the success is unshared Assuming that a technical solution is sufficient (e.g., I have the right answer, why isn't everyone else smart enough to see it) Not enough involvement and sharing of responsibility Relying on the same conflict resolution style all the time Outcomes of making change last: actions and changes initiated with high visibility that are immediate, credible, integrated, and lasting over time. Questions to assess making change last: How has the change effort been integrated into other business initiatives? Are people willing to act without full plans and information? How effectively do we transfer learning across boundaries? Are the necessary resources made available? Actions for making change last: Act in ways that demonstrate public commitment to the change Transfer learning from one site to another; run lots of small experiments Assign responsibility for making the change last Leverage symbols, language, and culture to support the change Encourage participative, empowered leadership throughout the organization Integrate any one change into the overall business process Drive results through change Common failures in the process for making change last: Not institutionalizing the change process; seen as an assignment Apathy as energy shifts to other projects "We have already done it" syndrome Trying to do all things at once and not making progress on any
Waiting for the "perfect" solution before acting
Resources required are not available
Not sure where to start — too much needs to be done at once
Focusing on activities, not results
We are unique, so it doesn't apply
5. Changing systems and structures
1. Staffing: acquiring/placing talent
2. Development: building competence/capability
3. Measures and rewards
4. Effective communication
5. Designing organizations (structure)
6. Information systems (technology, MIS)
7. Measurement systems
8. Resource allocation systems (e.g., budget, finance, strategy)
PROJECT REPORT OUT
As has already been mentioned many times, the project is the lifeblood of any Six Sigma initiative. As a consequence, the progress of that project must be communicated not only horizontally but vertically throughout the entire organization. The process for doing that is the "project report out." The intent of the project report out is to sustain visibility of the Six Sigma initiative. The objectives and the appropriate participants of the process are as follows:

Objective                        Participants
Review progress                  Champions, functional leaders
Ensure methodology is followed   Champions, MBBs, BBs
Ensure correct use of tools      Champions, MBBs, BBs
Share best practices             Functional leaders, champions, MBBs, BBs
Recognition                      Executives and functional leaders
Build confidence of the BBs      Executives and functional leaders
OVERALL AWARENESS OF EXPECTATIONS OF THE DMAIC MODEL FOR CHAMPION TRAINING
The champion is a pivotal person in the organization who can either make or break the success of the Six Sigma initiative, whether that initiative is in the transactional, technical, or manufacturing category. It is essential, then, for any champion to be aware of the critical issues on a per-phase basis and to be able to ask the right questions. The following information is provided as a summary guide for the champion to fulfill that responsibility.
Define Phase
Purpose:
To identify the customers and their CTQs — critical to quality
To define the project scope and team charter
To map the process to be improved
Questions to be answered: • Who is my customer and what is important to them (CTQ)? • What is the scope of the project? What is the problem being addressed? What defect am I trying to reduce? What data have been collected to understand the customer requirements? What are the boundaries of this project? How clearly are the team roles and goals understood and accepted? Are the key milestones and timelines established? • Where do we currently take measurements? • When, where and to what extent does the problem occur? What is my process? How does it function? How was the process map validated? Are multiple versions necessary to account for different types of inputs? • Why are you focusing on this project? What is the current cost of defects (poor quality)? What are the business reasons for completing this project? Are they compelling to the team? Are they compelling to the key stakeholders? • How will you know if the team is successful? What is the goal of this project? Is the goal achievable? Checklist: Customers identified Data to verify customers needs collected VOC — voice of the customer Team charter Problem statement Goal statement Project scope Timeline Financial benefits High level process map — “as is” Tools: • Process mapping — SIPOC • Project scope contract • CT Matrix • Gantt chart • Change management Measure Phase Purpose: • To develop process measures — dependent variables — Ys that will enable you to evaluate the performance of the process • To determine the current process performance and entitlement and assess it against the required performance • To identify the input variables that cause variation in process performance — Y
Questions to be answered: • Who are the suppliers to the process? • What are the process and output measures that are critical to understanding the performance of this process? What are the performance standards for Y? What is the link to the CTQ? What are the defects for this project? What are the primary sources of variability for this process? Are they control or noise variables? What are the SOPs associated with each control variable? • Where will you collect data? What is your data collection plan? How much data did you collect? Is your ability to measure/detect “good enough?” • When did you sample? How did you ensure you eliminated the influences of assignable causes within your rational subgroups? How did you ensure that you included all the sources of variation between your rational subgroups? • Why is the project being addressed? Have you created a shared need? • How is the process performing? What is the current process sigma level for this project? What is the best that the process was designed to do? What are the defect reduction goals for this project? Have you found any “quick-hit” improvements? Checklist: • Key measurements identified • Rolled throughput yield calculated • Defects identified • Data collection plan completed • Measurement capability study (GR&R) completed • Baseline measures of process capability determined • Defect reduction goals established Tools: Process mapping Cause and effect/fishbone, cause and effect matrix FMEA Gage R&R Graphical techniques (run chart, control chart, histogram etc.) Change management Analyze Phase Purpose: • To prioritize the input variables that cause variation in process performance — Y. • To analyze the data to determine root causes and opportunities for improvement • To validate the key process input variables with data Questions to be answered: • Who is the process owner?
• What are all the key process input variables? Have you found any “quick-hit” improvements? What resistance have you experienced or do you anticipate? • Where were data collected on the inputs? • When you realize the opportunities represented by addressing the problem, what are the quantifiable benefits over your current process performance (COPQ)? • Why does the output of your process vary? What are the inputs that matter most? • How have you analyzed the data to identify the vital few factors that account for variation in the process? How were the KPIVs from your C&E diagram verified? What are the root causes of the problem? Checklist: • Detailed “as is” process map completed • All sources of variation identified and prioritization initiated • SOPs reviewed/revisited • Data used and displayed to identify and verify the “vital few” factors (KPIVs) • Problem statement refined reflecting the increased understanding of the problem • Estimates of the quantifiable opportunity represented by the problem Tools: Process map Graphical techniques (run chart, control chart, histogram, Pareto, scatter diagram, etc.) Multivariate studies Hypothesis testing Correlation and regression Change management Improve Phase Purpose: • To generate and validate improvements by setting the input variables to achieve the optimum output. • To determine Y = f(x...) Questions to be answered: • Who is impacted by the change? How are they impacted? What day-to-day behaviors will need to change? • What criteria did you use to evaluate potential solutions? What things have been considered to manage the cultural aspects of the change? What has been done or will be done to mobilize support and deal with resistance? What changes need to be made to rewards, training, structure, measurements, etc. to sustain the change? • Where was the solution validated? • When will the solution be implemented? What is the implementation and communication plan?
• Why was this solution chosen? What are the potential problems with the plan? • How was an experiment or simulation conducted to ensure the optimum solution was found? How does the solution address the root cause? Checklist: • Solutions to the problem are generated, and the best one that addresses the root cause is selected. “Should be” process map developed • Key behaviors required by new process identified • Cost-benefit analysis of proposed solution completed • Solution validated • Implementation plan developed • Communication plan established Tools: • Process map • Design of experiments • Simulation • Optimization • Change management Control Phase Purpose: • To institutionalize the improvement and implement ongoing control • To sustain the gains Questions to be answered: • Who maintains the control plan? How will responsibility for continued monitoring and improvement be transferred from the team to the owner? • What controls are in place to ensure that the problem does not recur? • Where are the data being collected? What control charts are being used? What evidence is there that the process is in control? • When will the data be reviewed? When will the final report be completed? • Why is the control plan effective? • How has job training been affected? What are the biggest threats to making this change last? What next? • Who is looking for translation opportunities? Direct, customization, adaptation • What is the next problem that should be addressed with regard to this overall process? • Where are some other areas of the business that can benefit from your learning? • When will the learning be shared with the other business areas? • Why is it likely to succeed?
• How will the translation opportunities be communicated? • What did you as a team learn about the process of making Six Sigma improvements? Checklist: • Control plan completed • Evidence that the process is in control • Documentation of the project • Translation opportunities identified • Systems and structures changes made to institutionalize the improvement • Audit plan completed Tools: • Control plans • Statistical process control (see the sketch below) • Gage control plan • Graphical techniques • Poka yoke/mistake proofing • Preventive maintenance • Change management
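As one illustration of the control-phase toolset, the sketch below computes individuals (X) chart limits using the conventional moving-range estimate of sigma (average moving range divided by d2 = 1.128 for spans of two). The data are hypothetical.

import statistics

# Hypothetical daily measurements from the improved process.
data = [25.1, 24.8, 25.3, 25.0, 24.9, 25.2, 25.1, 24.7, 25.0, 25.4]

moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
center = statistics.fmean(data)
sigma_hat = statistics.fmean(moving_ranges) / 1.128  # MR-bar / d2

ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat
print(f"CL={center:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
for i, x in enumerate(data, 1):
    if not lcl <= x <= ucl:
        print(f"point {i} ({x}) signals an out-of-control condition")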
PROJECT PRESENTATION MILESTONE REQUIREMENTS — WEEK 1 TRAINING The project is the driving engine of the Six Sigma methodology. It is the essence of improvement and customer satisfaction. The champion should therefore know the stage the BB is expected to have reached each week, so that progress is achieved and reported appropriately. The following weekly milestones are given here to sensitize the champion to that progress. The reader will notice that these are actions of the BB; however, unless the champion understands these actions, the process, and the outcomes of each week's training, it is unreasonable to expect the champion to offer decision-making questions, guidance, or any type of resolution.
PRESENTATION GOALS • Identify scope of project. • Share key learning and determine if there are any opportunities to leverage between black belts, sites, or businesses. • Illustrate first-pass use and the value of Six Sigma tools (mapping, Pareto, COPQ). • Receive feedback on presentation content and approach. • Enhance individual ability and encourage the use of computer tools (paperless presentations, PowerPoint, Word, iGrafx, etc.). • Build a project presentation file through the course of training that can be used for business management and local site reviews.
PRESENTATION NOTES
• Plan for a 7- to 10-minute presentation and 3 to 5 minutes for feedback.
• Presentation should be about seven to ten slides, preferably in PowerPoint.
• Start with team structure, problem statement, and defect definition.
• Must include a Pareto and process map.
• Must include first-pass COPQ analysis.
• Include illustrations of use of measure tools.
• Share key learning.
• Identify next steps.
PROJECT PRESENTATION — WEEK 2 PRESENTATION GOALS • Track project progress. • Bring champions, instructor, and peers up to date on projects. • Share key learning and determine if there are any opportunities to leverage between black belts, sites, or businesses. • Illustrate the use and value of Six Sigma tools (mapping, FMEA, Pareto, etc.). • Receive feedback on presentation content and approach. • Enhance individual ability and encourage the use of computer tools (paperless presentations, PowerPoint, Word, iGrafx, etc.). • Build a project presentation file through the course of training that can be used for business management and local site reviews.
PRESENTATION NOTES
• Plan for an 8- to 12-minute presentation and 5 minutes for feedback.
• Presentation should be about 10 to 12 slides, preferably in PowerPoint.
• Presentation disk needs to be supplied to instructor by end of day Tuesday.
• Start with team structure, problem statement, and defect definition.
• Must have a Pareto, process map, and C&E analysis.
• Must include a COPQ analysis.
• Include illustrations of use of measure tools.
• Share key learning.
• Identify next steps.
PROJECT PRESENTATION – WEEK 3 PRESENTATION GOALS • Track project progress. • Bring champions, instructor, and peers up to date on projects.
• Share key learning and determine if there are any opportunities to leverage between black belts, sites, or businesses. • Illustrate the use and value of Six Sigma tools (mapping, FMEA, Pareto, tests of significance, etc.). • Receive feedback on presentation content and approach. • Enhance individual ability and encourage the use of computer tools (paperless presentations, PowerPoint, Word, iGrafx, etc.). • Build a project presentation file through the course of training that can be used for business management and local site reviews.
PRESENTATION NOTES • Plan for an 8- to 12-minute presentation and 5 minutes for questions and feedback. • Presentation should be about 10 to 12 slides in PowerPoint. • Presentation disk needs to be supplied to instructor by end of day Tuesday. • Start with problem statement and defect definition. • Must have a Pareto analysis and COPQ. • Include illustrations of use of one measure tool and two to three analyze tools. • Share key learning. • Identify next steps.
PROJECT PRESENTATION – WEEK 4 PRESENTATION GOALS • Track project progress. • Bring champions, instructor, and peers up to date on projects. • Share key learning and determine if there are any opportunities to leverage between black belts, sites, or businesses. • Illustrate the use and value of Six Sigma tools (measure, analyze, and improve). • Receive feedback on presentation content and approach. • Enhance individual ability and encourage the use of computer tools (paperless presentations, PowerPoint, Word, iGrafx, etc.). • Build a project presentation file through the course of training that can be used for business management and local site reviews.
PRESENTATION NOTES • Plan for a 15- to 20-minute presentation including questions. • Presentation should be about 10 to 15 slides in PowerPoint; disk needs to be given to instructor. • Start with team structure, problem statement, and defect definition.
• Must have a Pareto, high-level process map, C&E analysis, KPIVs identified, and improvement strategy. • Must include a solid COPQ analysis. • Include illustrations of use of DMAIC tools. • Share key learning. • Identify next steps for sustaining the gain.
TYPICAL CHAMPION'S QUESTIONS FOR THE PROJECT REVIEW
IN THE DEFINE PHASE
These are some questions champions must answer before launching a project. Have you...
• Identified your top level business and functional issues and priorities?
• Identified your key customers and their critical requirements?
• Identified your key processes and key deliverables?
• Done a high level process map?
• Baselined these processes with existing data?
• Identified areas where you need to collect data?
• Identified your priority processes for improvement?
• Identified potential opportunities and improvement projects?
For each individual project, have you: • Formulated a problem and goal statement? • Scoped the project by identifying the process boundaries: supplier inputs, customer outputs, and their requirements, constraints, and resource requirements. • Developed the business case for the project? • Selected the key players associated with the project? • Prepared a project plan? (Reminder: reviews) • Summarized the results of these steps in your project charter? • Entered the project information in Sigma Track scorecard including the project authorization form?
IN THE MEASURE PHASE
Typical questions at this phase should be: • Have you updated your project information in Sigma Track? Review it. • What type of data is available and how has it been collected? • Has a measurement error study been undertaken? What are the results? Are better gauges required? If so, what would be the cost? • Have you completed a process map (flowchart)? Who was involved in its development?
• What are the categories of defects as shown in your Pareto analysis? • If a technology problem is indicated, what do you think it will take to improve it? Are there any other alternatives? • What is the current defect level (PPM/DPMO), and what improvement target can we set? • What is the capability of the process and the probability of defects? • What are your next steps? • Are you satisfied with the level of cooperation and support you are getting?
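For the defect-level question above, a minimal sketch of the DPMO arithmetic and its Z equivalent. The counts are hypothetical, and the 1.5-sigma shift added for "process sigma" reporting is a convention the organization must choose deliberately.

from scipy.stats import norm

# Hypothetical counts from the measure-phase data collection.
defects, units, opportunities = 380, 5_000, 12

dpmo = defects / (units * opportunities) * 1e6
z_lt = norm.isf(dpmo / 1e6)  # long-term Z from the defect rate
print(f"DPMO = {dpmo:,.0f}  Z(long-term) = {z_lt:.2f}  "
      f"process sigma (with 1.5 shift) = {z_lt + 1.5:.2f}")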
IN THE ANALYZE PHASE
Typical questions in this phase should be: • Have you updated your project information in Sigma Track? Review it. • How many significant (vital few) variables influence the process? What are they? What sources of variation have been identified? • What is the potential contribution of each of the vital few variables? • What progress has been made on the PPM/DPMO chart (projections and timing)? • What tools have you used in this phase? How were they helpful? • What interim actions have you taken to contain defects until a final solution can be developed and implemented? Has an FMEA been completed? • What are your improvement plans and next steps to get there (including timing, responsibility, and expected results)? • What was the basis for the improvement quantification calculations? • Are you satisfied with the level of cooperation and support you are getting? • What other support actions or activities do you need to accelerate your progress?
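Many analyze-phase answers rest on tests of significance. The following sketch runs a two-sample t test on hypothetical yields for one suspected vital-few variable, using Welch's form so equal variances need not be assumed.

from scipy.stats import ttest_ind

# Hypothetical yields under two levels of a suspected vital-few input.
level_a = [92.1, 93.4, 91.8, 92.7, 93.0, 92.5]
level_b = [90.2, 91.1, 90.8, 89.9, 91.4, 90.5]

# Ho: no difference in means; Ha: the means differ.
t_stat, p_value = ttest_ind(level_a, level_b, equal_var=False)
alpha = 0.05  # the usual 5% alpha risk
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("reject Ho" if p_value < alpha else "fail to reject Ho")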
IN THE IMPROVE PHASE
Typical questions in this phase should be: • Have you updated your project information in Sigma Track? Review it. • What progress has been achieved to date in PPM/DPMO performance? Has your chart been updated? • Is a DOE planned? What support (time, production runs, people, etc.) is necessary for it? • What new tools have you used in this phase? How were they helpful? • What are the possible root causes of defects? Are these included in an updated FMEA? • What product or process design changes are required to achieve your improvement goals? • What are your next steps toward achieving your improvement targets?
• Has finance been involved in the project to fully understand the cost implications of your improvement plans? • Are you satisfied with the level of cooperation and support you are getting? • What other support actions or activities do you need to accelerate your progress?
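Where a DOE is planned, the arithmetic behind a two-level full factorial is worth demystifying for the champion. The 2x2 design below is hypothetical; each effect is the standard contrast, the average response at the high level minus the average at the low level.

import statistics

# Hypothetical 2^2 full factorial: coded levels (-1, +1) for two
# factors A and B, with two replicates per run.
runs = {  # (A, B): replicate responses
    (-1, -1): [71.0, 72.0],
    (+1, -1): [78.0, 77.0],
    (-1, +1): [74.0, 73.0],
    (+1, +1): [88.0, 89.0],
}
means = {k: statistics.fmean(v) for k, v in runs.items()}

def effect(sign):
    """Average response where the contrast is +1 minus where it is -1."""
    hi = statistics.fmean(m for k, m in means.items() if sign(k) > 0)
    lo = statistics.fmean(m for k, m in means.items() if sign(k) < 0)
    return hi - lo

print(f"main effect A : {effect(lambda k: k[0]):+.1f}")
print(f"main effect B : {effect(lambda k: k[1]):+.1f}")
print(f"interaction AB: {effect(lambda k: k[0] * k[1]):+.1f}")

The nonzero AB contrast is exactly the interaction that a one-factor-at-a-time approach would miss.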
IN THE CONTROL PHASE
Typical questions in this phase should be: • Have you updated your project information in Sigma Track? Review it. • What process controls are being implemented to ensure we “sustain the gains”? • What progress has been achieved to date in PPM/DPMO performance? Has your chart been updated? • Who should take responsibility for maintaining the process after your team has completed its project? Are these people fully aware of this and have they agreed? • Is there a plan to revisit this process in the future to ensure the new capability level is maintained? What new measurements are in place? • What is the expected improvement in terms of cost reduction? Has finance been involved in the project to fully understand any cost implications? • What new tools have you learned that were used in this phase of the project? • Are you satisfied with the level of cooperation and support you received during the project? What should we do differently to better support the next project? • When do you plan to have your final report completed? • What lessons have been learned on this project and what opportunities exist for leveraging the improvements? • Do you have any ideas for follow-up projects?
REFERENCE
Harry, M. J. (1997). The vision of Six Sigma: Tools and methods for breakthrough. 5th ed. Volume 1. Tri Star Publishing. Phoenix.
SELECTED BIBLIOGRAPHY
Breyfogle, F. W. (1999). Implementing Six Sigma: Smarter solutions using statistical methods. John Wiley & Sons. New York.
Harry, M. J. (1997). The vision of Six Sigma: Tools and methods for breakthrough. 5th ed. Volumes 2 and 3. Tri Star Publishing. Phoenix.
Harry, M. J. (1997). The vision of Six Sigma: Application resource. 5th ed. Volumes 1–3. Tri Star Publishing. Phoenix.
Harry, M. J. (1997). The vision of Six Sigma: A roadmap for breakthrough. 5th ed. Volumes 1–3. Tri Star Publishing. Phoenix.
Pyzdek, T. (2001). The Six Sigma handbook: A complete guide for Green Belts, Black Belts and managers at all levels. McGraw-Hill. New York.
9 Six Sigma for Master Black Belts
There is no specific training for this classification in the Six Sigma methodology. It is an accolade that is gained only through completion of the BB training and some additional requirements (see below under training). The intent of this designation is to make sure that there is a person in the organization who can serve as an internal consultant (after the outside consultants are long gone) and to provide a consistent message in the organization. A shogun (MBB) is a master facilitator and resource person for the BB and coordinates project selection with the champion as well as lends a helping hand whenever needed. The primary function, however, as we see it, is to make sure that appropriate and applicable tools and methodologies are applied to resolve issues and concerns in the organization and to introduce new methods and tools in either existing problems or new ones. The MBB is considered to be an expert in the philosophy, methodology, and tools of the Six Sigma approach to resolving problems.
INSTRUCTIONAL OBJECTIVES — SHOGUN (MASTER BLACK BELT)
RECOGNIZE
Customer Focus
• Provide a definition of the term customer satisfaction.
• Understand the need–do interaction and how it relates to customer satisfaction and business success.
• Provide examples of the y and x terms in the expression y = f(x).
• Interpret the expression y = f(x).
• Provide examples of the y, x, and n terms in the expression y = f(x,n).
• Interpret the expression y = f(x,n).
Business Metrics
• Define the nature of a performance metric.
• Identify the driving need for performance metrics.
• Explain the benefit of plotting performance metrics on a log scale.
• Provide a listing of several key performance metrics.
• Identify the fundamental contents of a performance metrics manual.
• Recognize the benefits of a metrics manual.
• Understand the purpose and benefits of improvement curves.
• Explain how a performance metric improvement curve is used.
• Explain what is meant by the phrase Six Sigma rate of improvement.
• Explain why a Six Sigma improvement curve can create a level playing field across an organization.
• State at least three problems (or severe limitations) inherent in the current cost-of-quality theory.
• Identify and define the principal categories associated with quality costs.
• Compute the cost-of-quality (COQ) given the necessary background data (a sketch follows this list).
• Provide a detailed explanation of how a defect can impact the classical cost-of-quality categories.
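A minimal sketch of the COQ computation named above, summing the four classical categories; every figure is hypothetical.

# Hypothetical annual figures, in dollars, for the four classical
# cost-of-quality categories.
prevention = 120_000        # training, planning, SPC programs
appraisal = 240_000         # inspection, test, audits
internal_failure = 610_000  # scrap, rework, retest (the hidden factory)
external_failure = 430_000  # warranty, returns, complaint handling

coq = prevention + appraisal + internal_failure + external_failure
revenue = 20_000_000
print(f"COQ = ${coq:,} ({coq / revenue:.1%} of revenue)")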
Six Sigma Fundamentals
• Recognize the need for change and the role of values in a business.
• Recognize the need for measurement and its role in business success.
• Understand the role of questions in the context of management leadership.
• Provide a brief history of Six Sigma and its evolution.
• Understand the need for measuring those things that are critical to the customer, business, and process.
• Define the various facets of Six Sigma and why Six Sigma is important to a business.
• Identify the parts-per-million defect goal of Six Sigma.
• Define the magnitude of difference between three, four, five, and Six Sigma.
• Recognize that defects arise from variation.
• Define the three primary sources of variation in a product.
• Describe the general methodologies that are required to progress through the hierarchy of quality improvement.
• Define the phases of breakthrough in quality improvement.
• Identify the values of a Six Sigma organization as compared to a four sigma business.
• Understand the key success factors related to the attainment of Six Sigma.
• Understand why inspection and test is nonvalue-added to a business and serves as a roadblock for achieving Six Sigma.
• Understand the difference between the terms process precision and process accuracy.
• Provide a very general description of how a process capability study is conducted and interpreted.
• Understand the basic elements of a sigma benchmarking chart.
• Interpret a data point plotted on a sigma benchmarking chart.
• Understand the difference between the idea of benchmark, baseline, and entitlement cycle time.
• Provide a brief description for the outcome 1 – Y.rt • Recognize that the quantity 1 + (1 – Y.rt) represents the number of units that must be produced to extract one good unit from a process. • Describe how every occurrence of a defect requires time to verify, analyze, repair, and reverify. • Understand that work-in-process (WIP) is highly correlated to the rate of defects. • Describe what is meant by the term mean time before failure (MTBF). • Interpret the temporal failure pattern of a product using the classical bathtub reliability curve. • Explain how process capability impacts the pattern of failure inherent in the infant mortality rate. • Provide a rational definition of the term latent defect and how such a defect can impact product reliability. • Explain how defects produced during manufacture influence product reliability, which, in turn, influences customer satisfaction. • Rationalize the statement: the highest quality producer is the lowest cost producer. • Understand the fundamental nature of quantitative benchmarking on a sigma scale of measure. • Recognize that the sigma scale of measure is at the opportunity level, not at the system level. • Interpret an array of sigma benchmarking charts. • Understand that global benchmarking has consistently revealed four sigma as average while best-in-class is near the Six Sigma region. • Draw first-order conclusions when given a global benchmarking chart. • Provide a brief description of the five sigma wall, what it is, why it exists, and how to get over it. • State the general findings that tend to characterize or profile a four sigma organization. • Explain how the sigma scale of measure could be employed for purposes of strategic planning. • Recognize the cycle-time, reliability, and cost implications when interpreting a sigma benchmarking chart. • Understand how a Six Sigma product without a market will fail, while a Six Sigma product in a viable market is virtually certain to succeed. • Provide a qualitative definition and graphical interpretation of the standard deviation. • Understand the driving need for breakthrough improvement vs. continual improvement. • Define the two primary components of process breakthrough. • Provide a brief description of the four phases of process breakthrough (i.e., measure, analyze, improve, control). • Provide a synopsis of what a statistically designed experiment is and what role it plays during the improvement phase of breakthrough.
• Understand the basic nature of statistical process control charts and the role they play during the control phase of breakthrough. • Explain the interrelationship between the terms process capability, process precision, and process accuracy. • Explain how statistically designed experiments can be used to achieve the major aims of Six Sigma from quality, cost, and cycle-time points of view. • Understand that the term sigma is a performance metric that applies only at the opportunity level.
DEFINE Nature of Variables • Explain the nature of a leverage variable and its implications for customer satisfaction and business success. • Explain what a dependent variable is and how this type of variable fits into the Six Sigma breakthrough strategy. • Explain what an independent variable is and how this type of variable fits into the Six Sigma breakthrough strategy. • Provide a specific explanation of what is meant by the term blocking variable and explain when such variables should be used in an experiment. Opportunities for Defects • Provide a definition of the term opportunity for defect, recognizing the difference between active and passive opportunities. • Provide a rational definition of a defect. • Recognize the difference between uniform and random defects. • Compute the defect-per-unit metric given a specific number of defects and units produced. CTX Tree • Define the term critical to satisfaction characteristic (CTS) and its importance to business success. • Define the term critical to quality characteristic (CTQ) and its importance to customer satisfaction. • Define the term critical to process characteristic (CTP) and its importance to product quality. Process Mapping • Construct a process map using standard mapping tools and symbols. • Explain how process maps can be linked to the CT Tree to identify problem areas.
• Explain how process maps can be used to identify constraints and determine resource needs. • Define the key elements of a process map. Process Baselines • Conduct a complete baseline capability analysis (using some software), interpret the results, and make valid recommendations. Six Sigma Projects • Interpret each of the action steps associated with the four phases of process breakthrough. • Explain why the five key planning questions are so important to project success. • Explain how the generic planning guide can be used to create a project execution cookbook. • Create a set of criteria for selecting and scoping Six Sigma BB projects. • Define a Six Sigma BB project reporting and review process. Six Sigma Deployment • Provide a brief description as to the nature of a Six Sigma black belt (SSBB). • Describe the role and responsibilities of a SSBB. • Understand the SSBB instructional curriculum. • Recognize that the SSBB curriculum sequence is correlated to the Six Sigma breakthrough strategy. • Recognize the importance of and provide a description for the plan-train-apply-review (PTAR) learning process. • Provide a brief description of the nature of a Six Sigma champion (SSC). • Describe the roles and responsibilities of a SSC. • Provide a brief description of the nature of a Six Sigma master black belt (SSMBB). • Describe the roles and responsibilities of a SSMBB. • Provide a brief description of the key implementation principles and identify principle deployment success factors. • List all of the planning criteria for constructing a Six Sigma implementation and deployment plan. • Construct a generic milestone chart that identifies all of the activities necessary for successfully managing the implementation of Six Sigma. • Develop a business model that incorporates and exploits the benefits of Six Sigma.
MEASURE Scales of Measure • Identify the four primary scales of measure and provide a brief description of their unique characteristics. • Explain why survey questions that utilize the five-point Likert scale must often be reduced to two categories during analysis. Data Collection • Provide a specific explanation of what is meant by the term replicate in the context of a statistically designed experiment. • Explain why there is a need to randomize the sequence of order in which an experiment takes place and what can happen when this is not done. Measurement Error • Describe the role of measurement error studies during the measurement phase of breakthrough. • Explain how a statistically designed single-factor experiment can be used to study and control for the influence of measurement error. • Explain how full factorial experiments can be employed to study and control for the influence of measurement error. • Explain how fractional factorial experiments can be used to study and control for the influence of measurement error. Statistical Distributions • Construct and interpret a histogram for a given set of data. • Understand what a normal distribution and a typical normal histogram are and how they are used to estimate defect probability. • Identify the circumstances under which the Poisson distribution could be applied to the analysis of product or transactional defects. • Understand the applied differences between the Poisson and binomial distributions. • Construct a histogram for a set of normally distributed data and locate the data on a normal probability plot. • Construct a histogram for a set of nonnormal data and isolate a transformation that will force the data to a normal condition. • Understand what the t distribution is and how it changes as degrees of freedom change. • Understand what the F distribution is and how it can be used to test the hypothesis that two variances are equal.
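For the objective on using the F distribution to test whether two variances are equal, a minimal sketch with SciPy; the sample variances are hypothetical.

from scipy.stats import f

# Hypothetical sample variances from two machines, n = 25 each.
s1_sq, n1 = 0.042, 25
s2_sq, n2 = 0.019, 25

# Ho: the two variances are equal. Place the larger variance on top
# so the ratio is at least 1, then double the tail for a two-sided test.
f_stat = s1_sq / s2_sq
p_value = 2 * f.sf(f_stat, n1 - 1, n2 - 1)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
print("reject Ho" if p_value < 0.05 else "fail to reject Ho")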
Static Statistics
• Provide a qualitative definition and graphical interpretation of the variance.
• Compute the sample standard deviation given a set of data.
• Explain why a sample size of n = 30 is often considered ideal (in the instance of continuous data).
• Provide a qualitative definition and graphical interpretation of the standard Z transform.
• Compute the corresponding Z value of a specification limit given an appropriate set of data.
• Convert a Z value into a defect probability given a table of areas under a normal curve.
• Compute the mean, standard deviation, and variance for a set of normally distributed data.
• Compute Z.usl and Z.lsl for a set of nonnormal data with upper and lower specifications and then determine the probability of defect.
• Provide a graphical understanding of the standard deviation and explain why it is so important to Six Sigma work.
• Compute Z.usl and Z.lsl for a set of normally distributed data and then determine the probability of defect.

Dynamic Statistics
• Compute and interpret the total, inter-, and intragroup sums of squares for a given set of data.
• Explain what phenomenon could account for a differential between the short-term and long-term standard deviations.
• Provide a practical explanation of what could account for a differential between a short-term Z value and a long-term Z value.
• Explain the difference between dynamic mean variation and static mean offset.
• Explain the difference between inherent capability and sustained capability in terms of the standard deviation.
• Describe the role and logic of rational subgrouping as it relates to the short-term and long-term standard deviations.
• Explain why the term instantaneous reproducibility (i.e., process precision) is associated with the short-term standard deviation.
• Explain why the term sustained reproducibility is associated with the long-term standard deviation.
• Recognize the four principal types of process-centering conditions and explain how each impacts process capability.
• Compute and interpret the intra-, inter-, and total group sums of squares for a set of normally distributed data organized into rational subgroups (a numerical sketch follows this list).
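The intra-/inter-group partition in the last bullet can be verified numerically. The sketch below simulates rational subgroups with a drifting mean (all values invented) and shows the short-term standard deviation falling below the long-term one.

```python
# Partition of total variation into intra- and intergroup sums of squares
# for simulated rational subgroups (25 subgroups of 5, drifting mean).
import numpy as np

rng = np.random.default_rng(7)
subgroups = rng.normal(100, 2, (25, 5)) + rng.normal(0, 1.5, (25, 1))

grand = subgroups.mean()
ss_total = ((subgroups - grand) ** 2).sum()
ss_intra = ((subgroups - subgroups.mean(axis=1, keepdims=True)) ** 2).sum()
ss_inter = ss_total - ss_intra          # partition: SS.total = SS.intra + SS.inter

sd_st = np.sqrt(ss_intra / (subgroups.size - subgroups.shape[0]))  # short-term
sd_lt = np.sqrt(ss_total / (subgroups.size - 1))                   # long-term
print(f"sd.st={sd_st:.2f}  sd.lt={sd_lt:.2f}")  # sd.lt > sd.st when the mean drifts
```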
ANALYZE

Six Sigma Statistics
• Identify the key limitations of the performance metric final yield (i.e., output/input).
• Identify the key limitations of the performance metric first-time yield (Y.ft).
• Compute the throughput yield (Y.tp) given an average first-time yield and the number of related defect opportunities.
• Provide a rational explanation of the differences between product yield and process yield.
• Explain why the performance metric rolled-throughput yield (Y.rt) represents the probability of zero defects.
• Compute the probability of zero defects (Y.rt) given a specific number of defects and units produced.
• Understand the impact of process capability and complexity on the probability of zero defects.
• Compute the normalized yield (Y.norm) given a rolled-throughput yield (Y.rt) value and a specific number of defect opportunities.
• Compute the total defects-per-unit (TDPU) value given a rolled-throughput yield (Y.rt) value (the yield arithmetic is sketched after this list).
• Provide a brief description of how one would implement and deploy the performance metric rolled-throughput yield (Y.rt).
• Construct a benchmarking chart using computer software.
• List several sources that could offer the data necessary to estimate a sigma capability.
• Explain how throughput yield (Y.tp) and opportunity counts can be employed to establish sigma capability of a product/process.
• Illustrate how a system-level DPU goal can be flowed down through a product/process hierarchy to assess the required CTQ capability.
• Illustrate how a series of CTQ capability values can be flowed up through a product/process hierarchy to establish the system DPU.
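The yield arithmetic referenced above follows directly from the Poisson model of defect occurrence. A minimal sketch, with invented counts:

```python
# Yield arithmetic: Y.rt from DPU, TDPU from Y.rt, and Y.norm per opportunity.
import math

defects, units, opportunities = 134, 1000, 20   # invented counts; opps per unit
dpu = defects / units
y_rt = math.exp(-dpu)                 # rolled-throughput yield = P(zero defects)
tdpu = -math.log(y_rt)                # total defects per unit recovered from Y.rt
y_norm = y_rt ** (1 / opportunities)  # normalized yield per opportunity

print(f"Y.rt={y_rt:.4f}  TDPU={tdpu:.4f}  Y.norm={y_norm:.5f}")
```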
Process Metrics
• Compute and interpret the Cp index of capability.
• Compute and interpret the Cpk index of capability.
• Explain the theoretical and practical differences between Cp, Cpk, Pp, and Ppk.
• Explain why a Z can be used to measure process capability and its relationship to indices such as Cp, Cpk, Pp, and Ppk.
• Recognize that a 1.5 sigma shift between sampling periods is typical and therefore can be used when quantification is not possible.
• Understand the general guidelines for adjusting a Z value for the influence of shift and drift (when to add or subtract the shift value).
• Compute the Cp and Cpk indices for a set of normally distributed data with upper and lower performance limits.
• Explain why Cpk values will often not correlate to first-time yield information.
• Compute and interpret Z.st and Z.lt for a set of normally distributed data organized into rational subgroups.
• Compute and interpret Z.shift (static and dynamic) for a set of normally distributed data organized into rational subgroups.
• Compute and interpret Cp, Cpk, Pp, and Ppk.
• Explain how Cp, Cpk, Pp, and Ppk correlate to the four principal types of process centering conditions.
• Show how Z.st, Z.lt, Z.shift (dynamic), and Z.shift (static) relate to Cp, Cpk, Pp, and Ppk.
• Create and interpret a computerized characterization report.
• Explain the difference between static mean offset and dynamic mean variation and how they impact process capability.

Diagnostic Tools
• Understand, construct, and interpret a multi-vari chart and then identify areas of application.

Simulation Tools
• Describe what is meant by the term Monte Carlo simulation and demonstrate how it can be used as a design tool.
• Create a series of random normal numbers with a given mean and variance.
• Create k sets of subgroups, where each subgroup consists of n samples from a normal distribution with a given mean and variance.
• Create a series of random lognormal numbers and then transform the data to fit a normal density function.
• Explain why the experimental manipulation of a computer simulator will often yield heteroscedastic relationships.

Statistical Hypotheses
• Explain how a practical problem can be translated into a statistical problem and the benefits of doing so.
• Explain what statistical hypotheses are and why they are created and show the forms they may take in terms of the mean and variance.
• Define the concept of alpha risk and provide several examples that illustrate its practical consequences.
• Define the concept of statistical confidence and explain how it relates to alpha risk.
• Define the concept of beta risk and provide several examples that illustrate its practical consequences.
• Provide a detailed understanding of the contrast distribution and how it relates to the alternate hypothesis.
• Explain what is meant by the phrase statistically significant difference and recognize that such differences do not imply practical difference.
• Construct a truth table that illustrates how the null and alternate hypotheses interrelate with the concepts of alpha risk and beta risk.
• Recognize that the extent of difference required to produce practical benefit is referred to as delta.
• Explain what is meant by the term power of the test and describe how it relates to the concept of beta risk.
• Understand how sample size can impact the extent of decision risk associated with the null and alternate hypotheses.
• Establish the appropriate sample size for a given situation when presented with a sample size table.
• Describe the dynamic interrelationships between alpha, beta, delta, and sample size from a statistical as well as a practical perspective.
• List the 15 essential steps for successfully conducting a statistically based investigation of a practical real-world problem.
• Provide a detailed understanding of the null distribution and how it relates to the null hypothesis.

Continuous Decision Tools
• Provide a conceptual understanding of what a statistical confidence interval is and how it relates to the notion of random sampling error.
• Understand what the distribution of sample averages is and how it relates to the central limit theorem.
• Explain what the standard error of the mean is and demonstrate how it is computed.
• Compute the tail area probability for a given Z value that is associated with the distribution of sample averages.
• Compute the 95% confidence interval for the mean of a small data set and explain how it may be applied in practical situations.
• Rationalize the difference between a one-sided test of the mean and a two-sided test of the mean.
• Understand what the distribution of sample differences is and how it can be employed for testing statistical hypotheses.
• Compute the 95% confidence interval for the mean of sample differences given two samples of normally distributed data.
• Understand the nature of a one- and two-sample t test and apply this test to an appropriate set of data.
• Compute and interpret the 95% confidence interval from a sample variance using the chi-square distribution.
• Explain how the 95% confidence interval from a sample variance can be used to test the hypothesis that two variances are equal.
• Provide a general description of the term experimental error and explain how it relates to the term replication.
• Recognize that when the intratreatment replicates are correlated there is an adverse impact on experimental error.
• Provide a general description of one-way analysis of variance and discuss the role of sample size in it.
• Demonstrate how the total variation in single-factor experiments can be characterized analytically and graphically.
• Demonstrate how the experimental error in an experiment can be partitioned from the total error for independent consideration.
• Demonstrate how the intergroup variation in an experiment can be partitioned from the total error for independent consideration.
• Compute the total sums of squares as well as the intragroup and intergroup sums of squares for a single-factor experiment.
• Define how degrees of freedom are established for each source of variation in a single-factor experiment.
• Organize the sums of squares and degrees of freedom into an ANOVA table and compute the mean square ratios.
• Determine the random sampling error probability related to any given mean square ratio and illustrate the effect of sample size.
• List the principal assumptions underlying the use of ANOVA and provide a general understanding of their practical impact if they are violated.
• Compute all post-hoc comparisons (i.e., pairwise t tests) in cases where an F value proves to be statistically significant.
• Compute and interpret the relative effect (i.e., sensitivity) of an experimental factor, create a main effects plot, and set tolerances.

Discrete Decision Tools
• Construct a 95% confidence interval for a Poisson mean and discuss how this can be used to test hypotheses about Poisson means.
• Understand how to calculate the standard deviation for a set of data selected from a binomial distribution.
• Compute the 95% confidence interval for a proportion and explain how it can be used to test hypotheses about proportions.
• Compute the expected cell frequencies for any given contingency table.
• Compute the chi-square statistic for a 2 × 2 contingency table and determine the probability of chance sampling error.
• Determine the extent of association for a 2 × 2 contingency table using the contingency coefficient.
• Compute the chi-square statistic for an n-way contingency table and determine the probability of chance sampling error.
• Illustrate how the chi-square statistic and cross tabulation can be utilized in the analysis of surveys.
• List and describe the principal sections of a customer satisfaction survey and how they can be used to link the process to the customer.
• Provide a brief explanation of the chi-square statistic and the conditions under which it can be applied.
• Understand how the probability of a given chi-square value can be determined.
• Recognize that the chi-square statistic can be employed as a goodness-of-fit test as well as a test of independence.
• Understand the nature of discontinuity and how to apply the Yates correction to compensate for this effect.
• Recognize that the square root of a chi-square is equal to Z for the special case of df = 1.
• Recognize that the cross tabulation of two classification variables, each with two categories, is referred to as a 2 × 2 contingency table.
• Explain how to establish the degrees of freedom associated with any contingency table (a worked 2 × 2 sketch follows this list).
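As one concrete instance of these objectives, the sketch below runs a Yates-corrected chi-square on an invented 2 × 2 table and derives the contingency coefficient.

```python
# 2 x 2 contingency analysis with Yates correction (survey counts invented).
import math
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[40, 60],
                  [25, 75]])           # e.g., satisfied/unsatisfied by plant A/B
chi2, p, dof, expected = chi2_contingency(table, correction=True)

n = table.sum()
cc = math.sqrt(chi2 / (chi2 + n))      # contingency coefficient: extent of association
print(f"chi2={chi2:.2f}  p={p:.3f}  df={dof}  C={cc:.2f}")
```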
IMPROVE

Experiment Design Tools
• Provide a general description of a statistically designed experiment and what such an experiment can be used for.
• Recognize the principal barriers to effective experimentation and outline several tactics that can be employed to overcome such barriers.
• Describe the two primary components of an experimental system and their related subelements.
• Explain the primary differences between a random-effects model and a fixed-effects model.
• Identify the four principal families of experimental designs and what each family of designs is used for.
• Outline a general strategy for conducting a statistically designed experiment and the resources needed to support its execution and analysis.
• Provide a specific explanation of what is meant by the term confounding and identify several ways to control for this situation.
• State the major limitations associated with the one-factor-at-a-time approach to experimentation and offer a viable alternative.
• Explain how the settings (i.e., levels) of an experimental factor can significantly influence the outcome of an experiment.
• Recognize that the most powerful application of modern statistics cannot rescue a poorly designed experiment.
• Explain what is meant by the term full factorial experiment and how it differs from a fractional factorial experiment.
• Describe the overriding limitations of the classical test plan when two factors are involved and state several advantages of a full factorial design.
• Show at least four ways that a two-factor, two-level full factorial design matrix can be displayed and communicated.
• Understand the added value of a balanced and orthogonal design and the practical implications when these properties are not present.
• Explain what is meant by the phrase hidden replication and understand that this phenomenon does not preclude the a priori consideration of sample size.
• Construct the vectored columns for a two-factor, two-level full factorial design given Yates standard order.
• Explain what is meant by the phrase column contrast and show how it can be used to establish the factor effect and the related sums of squares (a numerical sketch appears after the Robust Design Tools list below).
• Construct and interpret a main-effects plot for a two-factor, two-level experiment and display the 95% confidence intervals on the plot.
• Construct and interpret an interaction plot for a two-factor, two-level experiment and display the 95% confidence intervals on the plot.
• Compute the sums of squares associated with each experimental effect in a two-factor, two-level full factorial experiment.
• Create an ANOVA table and compute the mean square ratios for each experimental effect in a two-factor, two-level full factorial experiment.
• Determine the random sampling error probability for any given mean square ratio in a two-factor, two-level full factorial experiment.
• Compute the relative effect for each experimental effect and display the results on a Pareto chart.
• Implement center point(s) within a two-factor, two-level full factorial experiment and estimate whether there is any statistically significant curvature.
• Design and conduct a two-factor multilevel full factorial experiment and interpret the outcome from a statistical and a practical perspective.
• Provide a general description of a fractional factorial experiment and the inherent advantages that fractional arrays offer.
• Understand why third-order and higher effects are most often statistically and practically insignificant.
• Create a half fraction of a full factorial experiment by sorting based on the highest order interaction and then discern the pattern of confounding.
• Recognize how an unreplicated fractional factorial design can be folded into a full factorial design with replication.
• List the unique attributes associated with fractional factorial designs of resolution III, IV, and V.
• Explain what happens to the experimental error term when a factor is collapsed out of the matrix by folding.
• Explain how Plackett–Burman experimental designs are used and discuss their unique strengths and weaknesses.
• Construct and interpret a main-effects plot for a fractional factorial experiment using the response means as a basis for the plot.
• Construct and interpret a main-effects plot for a fractional factorial experiment using the response variances as a basis for the plot.
• Compute the sums of squares associated with each experimental effect in a fractional factorial experiment.
• Create an ANOVA table and compute the mean square ratio for each experimental effect in a fractional factorial experiment.
• Determine the random sampling error probability for any given MSR in a fractional factorial experiment.
• Compute the relative effect for each experimental effect in a fractional factorial experiment and display the results in a Pareto chart.
• Utilize the Taguchi orthogonal arrays to study the influence of several key process variables on a given response characteristic.

Robust Design Tools
• Provide a brief description of the term robust design and explain why and when process capability data must be factored into the design process.
• Recognize that such phenomena as heteroscedasticity, variable interactions, and nonlinearities can be used to reduce white noise.
• Explain what is meant by the term robustness and how this understanding translates to experimental design and process tolerancing.
• Illustrate how a main-effects plot, as related to a two-factor, two-level experiment, can be used as a basis for tolerancing.
• Illustrate how an interaction plot, as related to a two-factor, two-level experiment, can be used as a basis for achieving robust performance.
• Describe what an outer array is in relation to a full or fractional factorial experiment design.
• Describe what an inner array is in relation to a full or fractional factorial experiment design.
• Utilize an inner or outer array to desensitize the response variable to a selected independent variable.
• Illustrate how an independent variable can be manipulated within an inner or outer array design to yield a robust operating condition.
• Provide a statistical explanation of the term heteroscedasticity and discuss its practical implications.
• Illustrate how heteroscedasticity can be leveraged to achieve robust performance.
• Illustrate how a nonlinear effect correlates the mean to the variance and how this effect can be leveraged to achieve robust performance.
• Illustrate how a nonlinear effect can be leveraged to reduce the response variance and how a linear effect can be used to center the response mean.
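The column-contrast computation flagged earlier in this section can be demonstrated in a few lines. The 2 × 2 design below is in Yates standard order; the four responses are invented.

```python
# Effect estimation by column contrast for an unreplicated 2^2 full factorial.
import numpy as np

A = np.array([-1, +1, -1, +1])           # factor A column, Yates standard order
B = np.array([-1, -1, +1, +1])           # factor B column
AB = A * B                               # interaction column
y = np.array([45.0, 52.0, 48.0, 61.0])   # one (invented) response per run

for name, col in [("A", A), ("B", B), ("AB", AB)]:
    contrast = (col * y).sum()
    effect = contrast / 2                # contrast / (N/2) for N = 4 runs
    ss = contrast ** 2 / 4               # sums of squares for the effect
    print(f"{name}: effect={effect:.1f}  SS={ss:.1f}")
```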
EMPIRICAL MODELING TOOLS

Tolerance Tools
• Create a graphical explanation of how performance tolerances can be defined using the results of a two-level factorial experiment.
• Demonstrate why worst-case tolerance analysis is an overly conservative and costly design tool.
Risk Analysis Tools
• Compute the standard deviation for a linear sum of variances and explain why the variances must be independent.
• Compute the system-level defect probability given the subsystem means, variances (of a linear model), and relevant performance specifications.
• Describe how root-sum-of-squares (RSS) can be used as a design-to-cost tool and how it can be employed to analyze and optimize process cycle time.
• Demonstrate how the Six Sigma risk assessment methodology can be applied to engineering, manufacturing, transactional, and commercial problems.
• List the disadvantages associated with worst-case analysis and compute the probability of worst case given the process capability data.
• Explain what is meant by the phrase root-sum-of-squares (RSS) and apply this principle to a linear series of error sources.
• Compute Z.gap for a linear series of error sources and then optimize Z.gap to a specific value (using computer software); a worked stack-up sketch follows the DFSS list below.

DFSS Principles
• Understand and explain the expression Y = f(x1, x2, …, xn, n).
• Understand the fundamental ideas underlying the notion of manufacturability.
• Understand how product and process complexity impacts design performance.
• Understand how statistically designed experiments can be used to identify leverage variables, establish sensitivities, and define tolerances.
• Explain the concept of error propagation (both linear and nonlinear) and what role product and process complexity plays in it.
• Describe how reverse error propagation can be employed during system design.
• Explain why process shift and drift must be considered in the analysis of a design and how they can be factored into design optimization.
• Describe how Six Sigma tools and methods can be applied in and of themselves to the design process.
• Discuss the pros and cons of the classical approach to product and process design relative to that of the Six Sigma approach.
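A worked stack-up sketch for the Z.gap objective above; the part dimensions and tolerances are invented, and the tolerances are treated as 3-sigma limits.

```python
# RSS stack-up: gap = housing - sum(parts); worst case vs. root-sum-of-squares.
import math

means = [5.00, 5.00, 5.00]                 # three stacked parts (invented)
tols = [0.03, 0.03, 0.03]                  # +/- tolerances, taken as 3-sigma
housing_mean, housing_tol = 15.20, 0.05

gap_mean = housing_mean - sum(means)
wc = housing_tol + sum(tols)               # worst-case swing of the gap
sigma_gap = math.sqrt(sum((t / 3) ** 2 for t in tols) + (housing_tol / 3) ** 2)
z_gap = gap_mean / sigma_gap               # Z.gap: distance to interference in sigmas

print(f"gap={gap_mean:.3f}  worst case=+/-{wc:.3f}  Z.gap={z_gap:.1f}")
```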
CONTROL

Precontrol Tools
• Develop a precontrol plan for a given CTQ and explain how such a plan can be implemented (a zone-classification sketch follows this list).
• Describe the unique characteristics of the precontrol method and compare precontrol to statistical process control charts.
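The zone logic of precontrol can be captured in a few lines. The classifier below summarizes the standard rules (precontrol lines at the quarter points of the tolerance band); the specification limits are invented.

```python
# Precontrol zone classifier: green = middle half of the tolerance,
# yellow = between a PC line and a spec limit, red = outside specification.
def precontrol_zone(x, lsl, usl):
    quarter = (usl - lsl) / 4
    if x < lsl or x > usl:
        return "red"                      # outside specification: stop and adjust
    if lsl + quarter <= x <= usl - quarter:
        return "green"                    # middle half of the tolerance: run
    return "yellow"                       # caution: two yellows in a row = stop

for value in (9.45, 9.80, 10.35):         # invented readings
    print(value, precontrol_zone(value, lsl=9.5, usl=10.5))
```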
Continuous SPC Tools
• Explain what is meant by the term statistical process control and discuss how it differs from statistical process monitoring.
• List the basic components of a control chart and provide a general description of the role of each component.
• Provide a conceptual understanding of each step associated with the general cookbook for control charts.
• Explain how the use of rational subgroups forces the nonrandom variations due to assignable causes to appear between sampling periods.
• Explain how the control limits of an SPC chart are directly linked to the concepts associated with hypothesis testing.
• Construct and interpret an xbar chart and an R chart for a set of normally distributed data organized into rational subgroups (see the sketch after this list).
• Illustrate how an xbar chart and an R chart can be used to study and control for measurement error and contrast this with the DOE/ANOVA method.
• Construct and interpret an xbar chart and an R chart for a set of data (organized into rational subgroups) that are not normally distributed within groups.
• Construct and interpret an individuals chart for a set of normally distributed data collected over time.
• Construct and interpret an individuals chart for a set of nonnormally distributed data collected over time.
• Construct and interpret an exponentially weighted moving average (EWMA) chart and highlight its advantages and disadvantages.
• Provide a detailed understanding of how to adjust a process parameter using the method of bracketing and contrast this technique with other methods.

Discrete SPC Tools
• Construct and interpret a p chart and explain how the control limits for this chart are related to the confidence intervals of the binomial distribution.
• Construct and interpret a u chart and explain how the control limits for this chart are related to confidence intervals for the Poisson distribution.
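A minimal xbar/R sketch using the standard control-chart constants for subgroups of five (A2 = 0.577, D3 = 0, D4 = 2.114); the subgroup data are simulated.

```python
# xbar and R control limits from rational subgroups of n = 5 (simulated data).
import numpy as np

rng = np.random.default_rng(3)
subgroups = rng.normal(50, 1, (20, 5))            # 20 subgroups of 5

xbar = subgroups.mean(axis=1)                     # subgroup averages
r = subgroups.max(axis=1) - subgroups.min(axis=1) # subgroup ranges
xbarbar, rbar = xbar.mean(), r.mean()

A2, D3, D4 = 0.577, 0.0, 2.114                    # constants for n = 5
print(f"xbar chart limits: {xbarbar - A2*rbar:.2f} .. {xbarbar + A2*rbar:.2f}")
print(f"R chart limits:    {D3*rbar:.2f} .. {D4*rbar:.2f}")
```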
TRAINING
A shogun, or master black belt, is a black belt plus. There is no special training for a Six Sigma master black belt; however, additional requirements must be fulfilled. These requirements, although identified in the previous section, are not consistent across organizations, nor do they carry a uniform weight of certification. A black belt is certified by the champion and another shogun (master black belt). Shoguns are organization-dependent. (This is very important: a shogun in one organization is not necessarily a shogun in another, and vice versa.) The requirements for certification vary from organization to organization; typically, however, a shogun must be able to do or must have accomplished at least the following:
• Complete four to seven verifiable projects.
• Be an expert in the tools and methodology of Six Sigma.
• Be able to give a detailed report on the project.
• Be able to communicate with executives as well as operators.
• Be able to "get along" with the champion.
• Be able to teach, coach, and facilitate black belts and green belts.
• Be able to create training material for the purpose of training others in the Six Sigma methodology.
• Be able to create new tools for solving problems or apply old tools to new applications.
10 Six Sigma for Black Belts
This stage of the Six Sigma implementation process is designed to provide intensive training to individuals who are about to become black belts (BBs) in the organization. The training is 4 weeks, spread over several months, and teaches the prospective black belt the new philosophy, statistics and their application, change management, project orientation, conflict resolution, and many other tools and methodologies that enhance customer satisfaction and organizational profitability. This is the most intensive and in-depth training of the whole series, as the black belt is expected to facilitate change in the organization. The BB is expected to come in contact with executives, middle management, and operators. Therefore, BB selection is critical to the success of the project as well as to the Six Sigma philosophy in the given organization.
Training for BBs must address both why to do it and how to do it. Several simulated exercises and problems should be sprinkled throughout the course to make the key points more emphatic. Generally, there are also significant homework assignments for more detailed analyses and in-depth evaluations. Traditional exercises may deal with definitions and their applications, process mapping, all statistical applications, FMEAs, cost–benefit analysis, and familiarization with a statistical software package (several are available: SAS, SPSS, MINITAB, and others).
Because organizations and their goals are quite different, we provide the reader with a suggested outline of the training material for transactional, technical, and manufacturing BB training. In all cases, the training should be delivered in 5-day blocks over several months and taught by a consultant, a shogun (master black belt), or another seasoned BB. The level of difficulty depends on the participants. Detailed information may be drawn from the first six volumes of this series.
INSTRUCTIONAL OBJECTIVES — BLACK BELT

RECOGNIZE

Customer Focus
• Provide a definition of the term customer satisfaction.
• Understand the need–do interaction and how it relates to customer satisfaction and business success.
• Provide examples of the y and x terms in the expression y = f(x).
• Provide examples of the y and x terms in the expression y = f(x,n).
• Interpret the expressions y = f(x) and y = f(x,n) (a small illustration follows this list).
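To make y = f(x,n) tangible for trainees, the toy transfer function below (entirely invented) returns different y values for the same x because of the noise term n.

```python
# Illustrative y = f(x, n): a controllable input x plus uncontrollable noise n.
import numpy as np

rng = np.random.default_rng(0)

def process(x):
    n = rng.normal(0, 0.5)            # noise term: unassigned causes of variation
    return 2.0 * x + n                # the slope is the leverage of x on y

print([round(process(5.0), 2) for _ in range(3)])  # same x, different y — that is n
```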
Business Metrics
• Define the nature of a performance metric.
• Identify the driving need for performance metrics.
• Explain the benefit of plotting performance metrics on a log scale.
• Provide a listing of several key performance metrics.
• State at least three problems (or severe limitations) inherent in the current cost-of-quality (COQ) theory.
• Identify and define the principal categories associated with quality costs.
• Compute the COQ given the necessary background data (a worked sketch follows this list).
• Provide a detailed explanation of how a defect can impact the classical COQ categories.
• Identify the fundamental contents of a performance metrics manual.
• Recognize the benefits of a metrics manual.
• Understand the purpose and benefits of improvement curves.
• Explain how a performance metric improvement curve is used.
• Explain what is meant by the phrase Six Sigma rate of improvement.
• Explain why a Six Sigma improvement curve can create a level playing field across an organization.
Six Sigma Fundamentals
• Recognize the need for change and the role of values in a business.
• Recognize the need for measurement and its role in business success.
• Provide a brief history of Six Sigma and its evolution.
• Understand the need for measuring those things that are critical to the customer, business, and process.
• Define the various facets of Six Sigma and why Six Sigma is important to a business.
• Identify the parts-per-million defect goal of Six Sigma.
• Define the magnitude of difference between three, four, five, and six sigma.
• Recognize that defects arise from variation.
• Define the three primary sources of variation in a product.
• Define the phases of breakthrough in quality improvement.
• Identify the values of a Six Sigma organization as compared to a four sigma business.
• Understand why inspection and test are nonvalue-added to a business and serve as a roadblock to achieving Six Sigma.
• Understand the difference between the terms process precision and process accuracy.
• Provide a very general description of how a process capability study is conducted and interpreted.
• Understand the difference between the ideas of benchmark, baseline, and entitlement cycle time.
• Provide a brief description of the outcome 1 – Y.rt.
• Recognize that the quantity 1 + (1 – Y.rt) represents the number of units that must be produced to extract one good unit from a process.
• Describe how every occurrence of a defect requires time to verify, analyze, repair, and reverify.
• Understand that work in process (WIP) is highly correlated to the rate of defects.
• Describe what is meant by the term mean time before failure (MTBF).
• Interpret the temporal failure pattern of a product using the classical bathtub reliability curve.
• Explain how process capability impacts the pattern of failure inherent to the infant mortality rate.
• Provide a rational definition of the term latent defect and how such a defect can impact product reliability.
• Explain how defects produced during manufacture influence product reliability, which, in turn, influences customer satisfaction.
• Rationalize the statement "The highest quality producer is the lowest cost producer."
• Understand the fundamental nature of quantitative benchmarking on a sigma scale of measure.
• Recognize that the sigma scale of measure is at the opportunity level, not at the system level.
• Interpret an array of sigma benchmarking charts.
• Understand that global benchmarking has consistently revealed four sigma as average while best-in-class is near the six sigma region.
• Draw first-order conclusions when given a global benchmarking chart.
• Provide a brief description of the five sigma wall: what it is, why it exists, and how to get over it.
• State the general findings that tend to characterize or profile a four sigma organization.
• Recognize the cycle-time, reliability, and cost implications when interpreting a sigma benchmarking chart.
• Provide a qualitative definition and graphical interpretation of the standard deviation.
• Understand the driving need for breakthrough improvement vs. continual improvement.
• Define the two primary components of process breakthrough.
• Provide a brief description of the four phases of process breakthrough (i.e., measure, analyze, improve, control).
• Provide a synopsis of what a statistically designed experiment is and what role it plays during the improvement phase of breakthrough.
• Understand the basic nature of statistical process control charts and the role they play during the control phase of breakthrough.
• Explain the interrelationship between the terms process capability, process precision, and process accuracy.
• Explain how statistically designed experiments can be used to achieve the major aims of Six Sigma from the point of view of quality, cost, and cycle time.
• Understand that the term sigma is a performance metric that applies only at the opportunity level.
• Understand the role of questions in the context of management leadership.
• Describe the general methodologies that are required to progress through the hierarchy of quality improvement.
• Understand the key success factors related to the attainment of Six Sigma.
• Understand the basic elements of a sigma benchmarking chart.
• Interpret a data point plotted on a sigma benchmarking chart (the defect-rate-to-sigma conversion is sketched after this list).
• Explain how the sigma scale of measure could be employed for purposes of strategic planning.
• Understand how a Six Sigma product without a market will fail, while a Six Sigma product in a viable market is virtually certain to succeed.
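Reading a point off a sigma benchmarking chart amounts to converting a defect rate to a Z value and adding the conventional 1.5-sigma shift. A sketch, with an invented DPMO:

```python
# Defect rate to sigma benchmark, with the conventional 1.5-sigma shift added
# when only long-term data are available.
from scipy.stats import norm

dpmo = 6_210                           # invented; roughly a "four sigma" rate
z_lt = norm.isf(dpmo / 1e6)            # long-term Z from the tail area
sigma = z_lt + 1.5                     # short-term (benchmark) sigma estimate
print(f"Z.lt = {z_lt:.2f}  ->  {sigma:.1f} sigma")
```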
DEFINE

Nature of Variables
• Explain the nature of a leverage variable and its implications for customer satisfaction and business success.
• Explain what a dependent variable is and how this type of variable fits into the Six Sigma breakthrough strategy.
• Explain what an independent variable is and how this type of variable fits into the Six Sigma breakthrough strategy.
• Provide a specific explanation of the term blocking variable and explain when such a variable should be used in an experiment.

Opportunities for Defects
• Provide a definition of the term opportunity for defect, recognizing the difference between active and passive opportunities.
• Provide a rational definition of defect.
• Compute the defect-per-unit metric given a specific number of defects and units produced (the arithmetic is sketched after this list).
• Recognize the difference between uniform and random defects.

CTX Tree
• Define the term critical to satisfaction characteristic (CTS) and its importance to business success.
• Define the term critical to quality characteristic (CTQ) and its importance to customer satisfaction.
• Define the term critical to process characteristic (CTP) and its importance to product quality.
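The defect-per-unit arithmetic, with invented counts (DPO and DPMO are included because they follow immediately from DPU and the opportunity count):

```python
# Defect-opportunity arithmetic (all counts invented).
defects, units, opps_per_unit = 45, 500, 12

dpu = defects / units                 # defects per unit
dpo = dpu / opps_per_unit             # defects per opportunity
dpmo = dpo * 1e6                      # defects per million opportunities
print(f"DPU={dpu:.3f}  DPO={dpo:.5f}  DPMO={dpmo:,.0f}")
```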
Process Mapping
• Construct a process map using standard mapping tools and symbols.
• Explain how process maps can be linked to the CT tree to identify problem areas.
• Explain how process maps can be used to identify constraints and determine resource needs.
• Define the key elements of a process map.

Process Baselines
• Conduct a complete baseline capability analysis (using computer software), interpret the results, and make valid recommendations.

Six Sigma Projects
• Interpret each of the action steps associated with the four phases of process breakthrough.
• Explain why the five key planning questions are so important to project success.
• Explain how the generic planning guide can be used to create a project execution cookbook.
• Define a Six Sigma black belt project reporting and review process.
• Create a set of criteria for selecting and scoping Six Sigma black belt projects.

Six Sigma Deployment
• Provide a brief description of a Six Sigma black belt (SSBB).
• Describe the role and responsibilities of a SSBB.
• Understand the SSBB instructional curriculum.
• Recognize that the SSBB curriculum sequence is correlated to the Six Sigma breakthrough strategy.
• Recognize the importance of and provide a description of the plan-train-apply-review (PTAR) learning process.
• Provide a brief description of a Six Sigma champion (SSC).
• Describe the roles and responsibilities of a SSC.
• Provide a brief description of a Six Sigma master black belt (SSMBB).
• Describe the roles and responsibilities of a SSMBB.
• Provide a brief description of the key implementation principles and identify principal deployment success factors.
• List all of the planning criteria for constructing a Six Sigma implementation and deployment plan.
• Construct a generic milestone chart that identifies all of the activities necessary for successfully managing the implementation of Six Sigma.
• Develop a business model that incorporates and exploits the benefits of Six Sigma.
MEASURE

Scales of Measure
• Identify the four primary scales of measure and provide a brief description of their unique characteristics.
• Explain why survey questions that utilize the five-point Likert scale must often be reduced to two categories during analysis.

Data Collection
• Provide a specific explanation of the term replicate in the context of a statistically designed experiment.
• Explain why there is a need to randomize the run order of an experiment and what can happen when this is not done.

Measurement Error
• Describe the role of measurement error studies during the measurement phase of breakthrough.
• Explain how a statistically designed single-factor experiment can be used to study and control for the influence of measurement error.
• Explain how full factorial experiments can be employed to study and control for the influence of measurement error.
• Explain how fractional factorial experiments can be used to study and control for the influence of measurement error.

Statistical Distributions
• Construct and interpret a histogram for a given set of data.
• Understand what a normal distribution and a typical normal histogram are and how they are used to estimate defect probability.
• Construct a histogram for a set of normally distributed data and locate the data on a normal probability plot.
• Understand what the t distribution is and how it changes as degrees of freedom change.
• Understand what the F distribution is and how it can be used to test the hypothesis that two variances are equal.
• Identify the circumstances under which the Poisson distribution could be applied to the analysis of product or transactional defects.
• Understand the applied differences between the Poisson and binomial distributions.
• Construct a histogram for a set of nonnormal data and isolate a transformation that will force the data to a normal condition (a transform sketch follows this list).
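For the transformation objective above, a minimal sketch: simulated lognormal data are pulled toward normality with a log transform, checked here with skewness and the Shapiro–Wilk test.

```python
# Isolating a normalizing transformation for skewed data (data simulated).
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
raw = rng.lognormal(mean=0.0, sigma=0.6, size=200)

for label, x in (("raw", raw), ("log-transformed", np.log(raw))):
    stat, p = stats.shapiro(x)        # large p = no evidence against normality
    print(f"{label}: skew={stats.skew(x):+.2f}  Shapiro p={p:.3f}")
```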
Static Statistics
• Provide a qualitative definition and graphical interpretation of the variance.
• Compute the sample standard deviation given a set of data.
• Explain why a sample size of n = 30 is often considered ideal (in the instance of continuous data).
• Provide a qualitative definition and graphical interpretation of the standard Z transform.
• Compute the corresponding Z value of a specification limit given an appropriate set of data.
• Convert a Z value into a defect probability given a table of areas under the normal curve.
• Compute the mean, standard deviation, and variance for a set of normally distributed data.
• Compute Z.usl and Z.lsl for a set of nonnormal data with upper and lower specifications and then determine the probability of defect.
• Provide a graphical understanding of the standard deviation and explain why it is so important to Six Sigma work.
• Compute Z.usl and Z.lsl for a set of normally distributed data and then determine the probability of defect.

Dynamic Statistics
• Compute and interpret the total, inter-, and intragroup sums of squares for a given set of data.
• Explain what phenomenon could account for a differential between the short-term and long-term standard deviations.
• Provide a practical explanation of what could account for a differential between a short-term Z value and a long-term Z value.
• Explain the difference between dynamic mean variation and static mean offset.
• Explain the difference between inherent capability and sustained capability in terms of the standard deviation.
• Describe the role and logic of rational subgrouping as it relates to the short-term and long-term standard deviations.
• Explain why the term instantaneous reproducibility (i.e., process precision) is associated with the short-term standard deviation.
• Explain why the term sustained reproducibility is associated with the long-term standard deviation.
• Recognize the four principal types of process centering conditions and explain how each impacts process capability.
• Compute and interpret the intra-, inter-, and total sums of squares for a set of normally distributed data organized into rational subgroups.
ANALYZE

Six Sigma Statistics
• Identify the key limitations of the performance metric final yield (i.e., output/input).
• Identify the key limitations of the performance metric first-time yield (Y.ft).
• Compute the throughput yield (Y.tp) given an average first-time yield and the number of related defect opportunities.
• Provide a rational explanation of the differences between product yield and process yield.
• Explain why the performance metric rolled-throughput yield (Y.rt) represents the probability of zero defects.
• Compute the probability of zero defects (Y.rt) given a specific number of defects and units produced.
• Understand the impact of process capability and complexity on the probability of zero defects.
• Compute the total defects-per-unit (TDPU) value given a rolled-throughput yield (Y.rt) value.
• Construct a benchmarking chart using computer software.
• List at least five separate sources that could offer the data necessary to estimate a sigma capability.
• Explain how throughput yield (Y.tp) and opportunity counts can be employed to establish sigma capability of a product/process.
• Illustrate how a system-level DPU goal can be flowed down through a product/process hierarchy to assess the required CTQ capability.
• Illustrate how a series of CTQ capability values can be flowed up through a product/process hierarchy to establish the system DPU.
• Compute the normalized yield (Y.norm) given a rolled-throughput yield (Y.rt) value and a specific number of defect opportunities.
• Provide a brief description of how one would implement and deploy the performance metric rolled-throughput yield (Y.rt).

Process Metrics
• Compute and interpret the Cp index of capability.
• Compute and interpret the Cpk index of capability.
• Explain the theoretical and practical differences between Cp, Cpk, Pp, and Ppk.
• Explain why a Z can be used to measure process capability and its relationship to indices such as Cp, Cpk, Pp, and Ppk.
• Recognize that a 1.5 sigma shift between sampling periods is typical and therefore can be used when quantification is not possible.
• Understand the general guidelines for adjusting a Z value for the influence of shift and drift (when to add or subtract the shift value).
• Compute the Cp and Cpk indices for a set of normally distributed data with upper and lower performance limits.
• Explain why Cpk values often will not correlate to first-time yield information.
• Compute and interpret Z.st and Z.lt for a set of normally distributed data organized into rational subgroups.
• Compute and interpret Z.shift (static and dynamic) for a set of normally distributed data organized into rational subgroups.
• Compute and interpret Cp, Cpk, Pp, and Ppk.
• Explain how Cp, Cpk, Pp, and Ppk correlate to the four principal types of process centering conditions.
• Show how Z.st, Z.lt, Z.shift (dynamic), and Z.shift (static) relate to Cp, Cpk, Pp, and Ppk.
• Create and interpret the standardized computerized process characterization report.
• Explain the difference between static mean offset and dynamic mean variation and how they impact process capability.

Diagnostic Tools
• Understand, construct, and interpret a multi-vari chart and identify areas of application.

Simulation Tools
• Describe what is meant by the term Monte Carlo simulation and demonstrate how it can be used as a design tool.
• Create a series of random normal numbers with a given mean and variance.
• Create k sets of subgroups where each subgroup consists of n samples from a normal distribution with a given mean and variance.
• Create a series of random lognormal numbers and then transform the data to fit a normal density function.
• Explain why the experimental manipulation of a computer simulator will often yield heteroscedastic relationships.

Statistical Hypotheses
• Explain how a practical problem can be translated into a statistical problem and describe the benefits of doing so.
• Explain what a statistical hypothesis is and why it is created and show the forms it may take in terms of the mean and variance.
• Define the concept of alpha risk and provide several examples that illustrate its practical consequences.
• Define the concept of statistical confidence and explain how it relates to alpha risk.
• Define the concept of beta risk and provide several examples that illustrate its practical consequences.
• Provide a detailed understanding of the contrast distribution and how it relates to the alternate hypothesis.
• Explain the phrase statistically significant difference and recognize that such differences do not imply practical difference.
• Construct a truth table that illustrates how the null and alternate hypotheses interrelate with the concepts of alpha risk and beta risk.
• Recognize that the extent of difference required to produce practical benefit is referred to as delta.
• Explain the term power of the test and describe how it relates to the concept of beta risk.
• Understand how sample size can impact the extent of decision risk associated with the null and alternate hypotheses.
• Establish the appropriate sample size for a given situation when presented with a sample size table.
• Describe the dynamic interrelationships between alpha, beta, delta, and sample size from a statistical as well as a practical perspective.
• List the essential steps for successfully conducting a statistically based investigation of a practical real-world problem.
• Provide a detailed understanding of the null distribution and how it relates to the null hypothesis.

Continuous Decision Tools
• Provide a conceptual understanding of a statistical confidence interval and how it relates to the notion of random sampling error.
• Understand the distribution of sample averages and how it relates to the central limit theorem.
• Explain the standard error of the mean and demonstrate how it is computed.
• Compute the tail area probability for a given Z value that is associated with the distribution of sample averages.
• Compute the 95% confidence interval for the mean of a small data set and explain how it may be applied in practical situations.
• Rationalize the difference between a one-sided test of the mean and a two-sided test of the mean.
• Understand the distribution of sample differences and how it can be employed for testing statistical hypotheses.
• Compute the 95% confidence interval for the mean of sample differences given two samples of normally distributed data.
• Understand the nature of a one- and two-sample t test and apply this test to an appropriate set of data.
• Compute and interpret the 95% confidence interval from a sample variance using the chi-square distribution.
• Explain how the 95% confidence interval from a sample variance can be used to test the hypothesis that two variances are equal.
• Provide a general description of the term experimental error and explain how it relates to the term replication.
• Recognize that when the intratreatment replicates are correlated there is an adverse impact on experimental error.
• Provide a general description of one-way analysis of variance and discuss the role of sample size in it.
• Demonstrate how the total variation in single-factor experiments can be characterized analytically and graphically.
• Demonstrate how the experimental error in an experiment can be partitioned from the total error for independent consideration.
• Demonstrate how the intergroup variation in an experiment can be partitioned from the total error for independent consideration.
• Compute the total sums of squares as well as the intragroup and intergroup sums of squares for a single-factor experiment.
• Define how degrees of freedom are established for each source of variation in a single-factor experiment.
• Organize the sums of squares and degrees of freedom into an ANOVA table and compute the mean square ratios.
• Determine the random sampling error probability related to any given mean square ratio and illustrate the effect of sample size.
• List the principal assumptions underlying the use of ANOVA and provide a general understanding of their practical impact if they are violated.
• Compute all post-hoc comparisons (i.e., pairwise t tests) in the instance that an F value proves to be statistically significant.
• Compute and interpret the relative effect (i.e., sensitivity) of an experimental factor, create a main-effects plot, and set tolerances.

Discrete Decision Tools
• Construct a 95% confidence interval for a Poisson mean and discuss how this can be used to test hypotheses about Poisson means.
• Understand how to calculate the standard deviation for a set of data selected from a binomial distribution.
• Compute the 95% confidence interval for a proportion and explain how it can be used to test hypotheses about proportions (a sketch appears at the end of the Discrete Decision Tools list).
• Compute the expected cell frequencies for any given contingency table.
• Compute the chi-square statistic for a 2 × 2 contingency table and determine the probability of chance sampling error.
• Determine the extent of association for a 2 × 2 contingency table using the contingency coefficient.
• Compute the chi-square statistic for an n-way contingency table and determine the probability of chance sampling error.
• Illustrate how the chi-square statistic and cross-tabulation can be utilized in the analysis of surveys.
• List and describe the principal sections of a customer satisfaction survey and how they can be used to link the process to the customer.
• Provide a brief explanation of the chi-square statistic and the conditions under which it can be applied.
• Understand how the probability of a given chi-square value can be determined.
• Recognize that the chi-square statistic can be employed as a goodness-of-fit test as well as a test of independence.
• Recognize that the square root of a chi-square is equal to Z for the special case where df = 1.
• Recognize that the cross-tabulation of two classification variables, each with two categories, is referred to as a 2 × 2 contingency table.
• Explain how to establish the degrees of freedom associated with any contingency table.
• Understand the nature of discontinuity and how to apply a Yates correction to compensate for this effect.
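The proportion confidence interval promised earlier in this section, sketched with the normal approximation (counts invented):

```python
# 95% confidence interval for a proportion (normal approximation), usable as a
# two-sided test of a hypothesized proportion p0: reject if p0 lies outside.
import math

x, n = 42, 400                        # defectives found in a sample of 400
p_hat = x / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"p = {p_hat:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```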
IMPROVE

Experiment Design Tools
• Provide a general description of a statistically designed experiment and what such an experiment can be used for.
• Recognize the principal barriers to effective experimentation and outline several tactics that can be employed to overcome such barriers.
• Describe the two primary components of an experimental system and their related subelements.
• Explain the primary differences between a random-effects model and a fixed-effects model.
• Identify the four principal families of experimental designs and what each family of designs is used for.
• Outline a general strategy for conducting a statistically designed experiment and the resources needed to support its execution and analysis.
• Provide a specific explanation of what is meant by the term confounding and identify several ways to control for this situation.
• State the major limitations associated with the one-factor-at-a-time approach to experimentation and offer a viable alternative.
• Explain how the settings (i.e., levels) of an experimental factor can significantly influence the outcome of an experiment.
• Recognize that the most powerful application of modern statistics cannot rescue a poorly designed experiment.
• Explain what is meant by the term full factorial experiment and how it differs from a fractional factorial experiment.
• Describe the overriding limitations of the classical test plan when two factors are involved and state several advantages of a full factorial design.
• Show at least four ways that a two-factor, two-level full factorial design matrix can be displayed and communicated.
• Understand the added value of a balanced and orthogonal design and the practical implications when these properties are not present.
• Explain the phrase hidden replication and understand that this phenomenon does not preclude the a priori consideration of sample size.
• Construct the vectored columns for a two-factor, two-level full factorial design, given Yates standard order.
• Explain the phrase column contrast and show how it can be used to establish the factor effect and the related sums of squares.
• Construct and interpret a main-effects plot for a two-factor, two-level experiment and display the 95% confidence intervals on the plot.
• Construct and interpret an interaction plot for a two-factor, two-level experiment and display the 95% confidence intervals on the plot.
• Compute the sums of squares associated with each experimental effect in a two-factor, two-level full factorial experiment.
• Create an ANOVA table and compute the mean square ratios for each experimental effect in a two-factor, two-level full factorial experiment.
• Determine the random sampling error probability for any given mean square ratio in a two-factor, two-level full factorial experiment.
• Compute the relative effect for each experimental effect and display the results on a Pareto chart.
• Implement center point(s) within a two-factor, two-level full factorial experiment and estimate whether there is any statistically significant curvature.
• Design and conduct a two-factor, multilevel full factorial experiment and interpret the outcome from a statistical and practical perspective.
• Provide a general description of a fractional factorial experiment and the inherent advantages that fractional arrays offer.
• Understand why third-order and higher effects are most often statistically and practically insignificant.
• Create a half fraction of a full factorial experiment by sorting on the highest order interaction and then discern the pattern of confounding.
• Recognize how an unreplicated fractional factorial design can be folded into a full factorial design with replication.
• List the unique attributes associated with fractional factorial designs of resolution III, IV, and V.
• Explain what happens to the experimental error term when a factor is collapsed out of the matrix by folding.
• Explain how Plackett-Burman experimental designs are used and discuss their unique strengths and weaknesses.
• Construct and interpret a main-effects plot for a fractional factorial experiment using the response means as a basis for the plot.
• Construct and interpret a main-effects plot for a fractional factorial experiment using the response variances as a basis for the plot.
• Compute the sums of squares associated with each experimental effect in a fractional factorial experiment.
• Create an ANOVA table and compute the mean square ratio for each experimental effect in a fractional factorial experiment.
• Determine the random sampling error probability for any given MSR in a fractional factorial experiment.
• Compute the relative effect for each experimental effect in a fractional factorial experiment and display the results in a Pareto chart.
• Utilize the Taguchi orthogonal arrays to study the influence of several key process variables on a given response characteristic.
Robust Design Tools
• Provide a brief description of the term robust design and explain why and when process capability data must be factored into the design process.
• Recognize that such phenomena as heteroscedasticity, variable interactions, and nonlinearities can be used to reduce white noise.
• Explain what is meant by the term robustness and explain how this understanding translates to experimental design and process tolerancing.
• Illustrate how a main-effects plot, as related to a two-factor, two-level experiment, can be used as a basis for tolerancing.
• Illustrate how an interaction plot, as related to a two-factor, two-level experiment, can be used as a basis for achieving robust performance.
• Describe what an outer array is in relation to a full or fractional factorial experiment design.
• Describe what an inner array is in relation to a full or fractional factorial experiment design.
• Utilize an inner/outer array to desensitize the response variable to a selected independent variable.
• Illustrate how an independent variable can be manipulated within an inner/outer array design to yield a robust operating condition.
• Provide a statistical explanation of the term heteroscedasticity and discuss its practical implications.
• Illustrate how heteroscedasticity can be leveraged to achieve robust performance.
• Illustrate how a nonlinear effect correlates the mean to the variance and how this effect can be leveraged to achieve robust performance.
• Illustrate how a nonlinear effect can be leveraged to reduce the response variance and how a linear effect can be used to center the response mean.
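The factorial objectives above (Yates standard order, column contrasts, effects, and sums of squares) can be made concrete with a minimal sketch. This illustration is not part of the original curriculum: a two-factor, two-level full factorial with two replicates, all response values invented.

```python
import numpy as np

# 2x2 full factorial in Yates standard order; columns hold coded levels (-1/+1).
A  = np.array([-1, +1, -1, +1])
B  = np.array([-1, -1, +1, +1])
AB = A * B                       # interaction column = elementwise product

# Hypothetical response, two replicates per run (rows follow Yates order).
y = np.array([[12.1, 11.8], [14.6, 15.0], [11.2, 11.5], [17.9, 18.3]])
ybar = y.mean(axis=1)
n_runs, n_reps = y.shape

for name, col in (("A", A), ("B", B), ("AB", AB)):
    contrast = (col * ybar).sum()          # column contrast of the run means
    effect = contrast / (n_runs / 2)       # estimated factor effect
    ss = n_reps * contrast**2 / n_runs     # sums of squares for the effect
    print(f"{name}: effect = {effect:+.2f}, SS = {ss:.2f}")
```

The same contrast arithmetic extends to larger two-level designs; any interaction column is simply the product of its parent columns.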
EMPIRICAL MODELING TOOLS Tolerance Tools • Create a graphical explanation of how performance tolerances can be defined using the results of a two-level factorial experiment. • Demonstrate why worst-case tolerance analysis is an overly conservative and costly design tool.
Risk Analysis Tools • Compute the standard deviation for a linear sum of variances, and explain why the variances must be independent. • Compute the system-level defect probability given the subsystem means, variances (of a linear model), and relevant performance specifications. • Describe how root-sums-of-squares (RSS) can be used as a design-to-cost tool and how it can be employed to analyze and optimize process cycle time. • Demonstrate how the Six Sigma risk assessment methodology can be applied to engineering, manufacturing, transactional, and commercial problems. • List the disadvantages associated with worst-case analysis and compute the probability of worst case given the process capability data. • Explain RSS and apply this principle to a linear series of error sources. • Compute Z.gap for a linear series of error sources and then optimize Z.gap to a specific value (using computer software). DFSS Principles • Understand and explain the expression Y = f(x1, x2,…,xn, n). • Understand how statistically designed experiments can be used to identify leverage variables, establish sensitivities, and define tolerances. • Understand the fundamental ideas underlying the notion of manufacturability. • Understand how product and process complexity impacts design performance. • Explain the concept of error propagation (both linear and nonlinear) and what role product and process complexity plays. • Describe how reverse error propagation can be employed during system design. • Explain why process shift and drift must be considered in the analysis of a design and how it can be factored into design optimization. • Describe how Six Sigma tools and methods can be applied to the design process. • Discuss the pros and cons of the classical approach to product and process design relative to the Six Sigma approach.
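To illustrate the RSS and Z.gap objectives above, here is a minimal sketch under stated assumptions: a linear gap stack with independent error sources, all means and standard deviations hypothetical. Because the sources are independent, their variances add, so the stack standard deviation is the root sum of squares of the component sigmas.

```python
import math

# Hypothetical linear stack: gap = housing - (part1 + part2 + part3).
means  = [50.00, 20.00, 15.00, 14.70]   # housing first, then the three parts
sigmas = [0.030, 0.020, 0.020, 0.015]   # independent process standard deviations

gap_mean = means[0] - sum(means[1:])
gap_sigma = math.sqrt(sum(s**2 for s in sigmas))   # RSS combination

min_gap = 0.10                                     # hypothetical requirement
z_gap = (gap_mean - min_gap) / gap_sigma

print(f"gap mean = {gap_mean:.3f}, gap sigma = {gap_sigma:.4f}, Z.gap = {z_gap:.2f}")
```

Worst-case analysis would instead stack the tolerance extremes directly, which is why it is the more conservative and costly approach noted above.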
CONTROL Precontrol Tools • Develop a precontrol plan for a given CTQ and explain how such a plan can be implemented. • Describe the unique characteristics of the precontrol method and compare precontrol to statistical process control charts.
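As a companion to these objectives, the zone logic behind precontrol can be sketched in a few lines. The precontrol lines sit one quarter of the tolerance inside each specification limit, leaving a green middle half, two yellow quarters, and red zones outside the specs; the CTQ values and limits below are hypothetical.

```python
def precontrol_zone(x, lsl, usl):
    """Classify a single measurement into the precontrol zones."""
    quarter = (usl - lsl) / 4          # distance from spec limit to PC line
    if x < lsl or x > usl:
        return "red"                   # outside specification
    if lsl + quarter <= x <= usl - quarter:
        return "green"                 # middle half of the tolerance
    return "yellow"                    # outer quarters

# Hypothetical CTQ with specs 9.0-11.0, so the PC lines fall at 9.5 and 10.5.
for value in (10.1, 9.3, 11.2):
    print(value, "->", precontrol_zone(value, 9.0, 11.0))
```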
Continuous SPC Tools
• Explain the term statistical process control and discuss how it differs from statistical process monitoring.
• List the basic components of a control chart and provide a general description of the role of each component.
• Provide a conceptual understanding of each step associated with the general cookbook for control charts.
• Explain how the use of rational subgroups forces the nonrandom variations due to assignable causes to appear between sampling periods.
• Explain how the control limits of an SPC chart are directly linked to the concepts associated with hypothesis testing.
• Construct and interpret an Xbar and R chart for a set of normally distributed data organized into rational subgroups (see the sketch following this list).
• Illustrate how an Xbar and R chart can be used to study and control for measurement error and contrast this with the DOE/ANOVA method.
• Construct and interpret an Xbar and R chart for a set of data (organized into rational subgroups) that are not normally distributed within groups.
• Construct and interpret an individuals chart for a set of normally distributed data collected over time.
• Construct and interpret an individuals chart for a set of non-normally distributed data collected over time.
• Construct and interpret an exponentially weighted moving average (EWMA) chart and highlight its advantages and disadvantages.
• Provide a detailed understanding of how to adjust a process parameter using the method of bracketing and contrast this technique to other methods.
Discrete SPC Tools
• Construct and interpret a p chart and explain how the control limits for this chart are related to the confidence intervals of the binomial distribution.
• Construct and interpret a U chart and explain how the control limits for this chart are related to confidence intervals for the Poisson distribution.
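The Xbar and R chart bullet above points here. This is a minimal sketch of the control limit arithmetic using the standard Shewhart constants for subgroups of five; the subgroup data are hypothetical.

```python
import statistics

# Hypothetical rational subgroups, five parts each.
subgroups = [
    [9.9, 10.1, 10.0, 10.2, 9.8],
    [10.0, 10.3, 9.9, 10.1, 10.0],
    [9.7, 10.0, 10.2, 9.9, 10.1],
]
xbars = [statistics.mean(g) for g in subgroups]
ranges = [max(g) - min(g) for g in subgroups]
xbarbar = statistics.mean(xbars)       # grand average (chart centerline)
rbar = statistics.mean(ranges)         # average range

# Shewhart constants for subgroup size n = 5, from standard SPC tables.
A2, D3, D4 = 0.577, 0.0, 2.114

print(f"Xbar chart: LCL={xbarbar - A2*rbar:.3f}  CL={xbarbar:.3f}  UCL={xbarbar + A2*rbar:.3f}")
print(f"R chart:    LCL={D3*rbar:.3f}  CL={rbar:.3f}  UCL={D4*rbar:.3f}")
```

For a p chart the same idea applies, with limits at pbar plus or minus 3*sqrt(pbar(1 - pbar)/n), which is where the link to the binomial confidence interval comes from.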
CONTENT OF BLACK BELT TRAINING — OUTLINE
Based on the above general objectives, it is recommended that the training follow the content format below. By no means is this the only format. However, we believe that the content follows a hierarchical sequence, and in that way we have attempted to accommodate the learning process. We have separated the training into three distinct categories, even though the objectives are the same for all. The difference is in the emphasis of the material. The reader is also encouraged to review the end of Chapter 8 dealing with the expected milestones in reference to a project's progress.
TRANSACTIONAL TRAINING – 4-WEEK TRAINING WEEK 1 Introductions Agenda Training ground rules: • If you have any questions, please ask! • Share your experiences. • When we take frequent short breaks, please be prompt in returning so we can stay on schedule. • There will be several team activities; please take an active role. • Please use name tents. • Listen as an ally. • The goal is to complete your projects! Exploring our values Six Sigma overview Six Sigma focus • Delighting the customer through flawless execution • Rapid breakthrough improvement • Advanced breakthrough tools that work • Positive and deep culture change • Real financial results that impact the bottom line What is Six Sigma? • Vision • Philosophy • Aggressive goal • Metric (standard of measurement) • Benchmark • Method • Vehicle for: • Customer focus • Breakthrough improvement • Continual improvement • People involvement • Bottom line: Six Sigma defines the goals of a business by a) identifying projects using performance metrics that will yield clear business results and b) applying advanced quality and statistical tools to achieve breakthrough financial performance. • Defines performance metrics that tie to the business goals. Six Sigma — Goal • Customer satisfaction • Organizational profitability A problem-solving methodology Which business function needs it?
Is your leadership on board?
• Fundamentals of leadership
• Challenge the process
• Inspire a shared vision
• Enable others to act
• Model the way
• Encourage the heart
• Six Sigma is a catalyst for leaders
The breakthrough strategy
The foundation of Six Sigma tools
• Cost of poor quality. For most companies today, the cost of poor quality is likely to be 25% of sales. In almost every company where the COPQ is unknown, the COPQ exceeds the profit margin.
• What is cost of poor quality? In addition to the direct costs associated with finding and fixing defects, cost of poor quality also includes:
• The hidden cost of failing to meet customer expectations the first time
• The hidden opportunity for increased efficiency
• The hidden potential for higher profits
• The hidden loss in market share
• The hidden increase in production cycle time
• The hidden labor associated with ordering replacement material
• The hidden costs associated with disposing of defects
Getting there through inspection
Defects and the hidden factory
• Rolled-throughput yield vs. first-time yield
What causes defects?
• Excess variation due to:
• Manufacturing processes
• Supplier (incoming) material variation
• Unreasonably tight specifications (tighter than the customer requires)
Dissecting process capability
Premise of Six Sigma: sources of variation can be:
• Identified
• Quantified
• Eliminated or controlled
How do we improve capability?
Six Sigma metrics and continual improvement
• Six Sigma is characterized by:
• Defining critical business metrics
• Tracking them
• Improving them using proactive process improvement
• Six Sigma's primary metric is defects per unit, which is directly related to rolled-throughput yield (Yrt). Cost of poor quality and cycle time (throughput) are two other metrics.
Yrt = e^(–dpu)
Continual improvement
Calculating the product sigma level
Metrics
• Defects per unit (DPU) drives plant-wide improvement
• Defects per million opportunities (DPMO) allows for comparison of dissimilar products
• Sigma level allows for benchmarking within and across companies
• Tracking trends in metrics
• PPM conversion chart
Translating needs into requirements
Six Sigma deployment success
• Directly affects quality, cost, cycle time, and financial results
• Focuses on the customer and critical metrics
• Directly attacks variation, defects, and the hidden factory
• Ensures a predictable factory
• Executes black belt strategy
• Introduces roles and responsibilities
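A minimal sketch tying together the metrics introduced above: DPU, the rolled-throughput yield approximation Yrt = e^(–dpu), DPMO, and the benchmark sigma level (long-term Z plus the conventional 1.5 shift). All counts are invented for illustration.

```python
from math import exp
from statistics import NormalDist

# Hypothetical figures for illustration only.
defects = 120          # total defects observed
units = 1000           # units produced
opportunities = 50     # defect opportunities per unit (see counting rules later)

dpu = defects / units                  # defects per unit
yrt = exp(-dpu)                        # rolled-throughput yield, Yrt = e^(-dpu)
dpmo = dpu / opportunities * 1e6       # defects per million opportunities

# Long-term sigma (Zlt) from DPMO; short-term adds the conventional 1.5 shift.
z_lt = NormalDist().inv_cdf(1 - dpmo / 1e6)
z_st = z_lt + 1.5

print(f"DPU={dpu:.3f}  Yrt={yrt:.3f}  DPMO={dpmo:.0f}  sigma level={z_st:.2f}")
```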
Describe BB execution strategy. The execution strategy is about ensuring that sources of variation in manufacturing and transactional processes are objectively identified, quantified, and controlled or eliminated. How? Through the use of the breakthrough strategy, process performance is sustained through well-developed, documented, and executed process control plans. The goal of the strategy, of course, is to achieve improvements in rolled-throughput yield, cost of poor quality, and capacity–productivity. By using the DMAIC model, Kano model, QFD, and other tools and methodologies, the process allows the BB to overview the:
• Steps
• Tools
• Deliverables
Roles and responsibilities
Executive
• Will set meaningful goals and objectives for the corporation.
• Will drive the implementation of Six Sigma publicly.
Champion
• Will select black belt projects consistent with corporate goals.
• Will drive the implementation of Six Sigma through public support and removal of barriers.
Master black belt
• Be the expert in tools and concepts.
• Develop and deliver training to various levels of the organization.
• Certify black belts.
• Assist in the identification of projects.
• Coach and support BBs in project work.
• Participate in project reviews to offer technical expertise.
• Partner with the champions.
• Demonstrate passion for Six Sigma.
• Share best practices.
• Take on leadership of major programs.
• Develop new tools or modify old tools for application.
• Understand the connection between Six Sigma and the business strategy.
Black belt
• Will deliver successful projects using the breakthrough strategy.
• Will train and mentor the local organization on Six Sigma.
Green belt
• Will deliver successful localized projects using the breakthrough strategy.
Six Sigma instructor
• Will make sure all black belt candidates are certified in the understanding, usage, and application of Six Sigma tools.
Phases of process improvement
Define
Define problem
Identify customer and CTQs
Update project charter
Measure
Identify measurement and variation
Identify data and collection scheme
Perform capability analysis
• The analysis phase
• Identify vital few Xs
• Improvement phase
• Determine the ideal function
• Determine a preliminary transformation function
• The control phase
• Implement long-term control methods
• Create execution plan
The define phase. This phase begins with a definition of the problem and ends with a completed project charter. The steps of the development are: a) define the problem, b) identify the customer, c) identify the CTQs, d) map current process, e) refine project scope, and f) update project charter. Key tools are the a) Kano model, b) QFD, and c) process flow chart. The result: a Pareto of Xs that are used as input into the FMEA and control plans. These are the CTPs (critical to the process): anything that we can control or modify about our process that will help us achieve our objectives.
• The Kano model identifies key behaviors.
• A simple QFD (quality function deployment) tool is used to emphasize the importance of understanding customer requirements, the CTs (critical tos)
— CTCost, CTDelivery, CTQuality. The tool relates the Xs and Ys (customer requirements) using elements documented in the process map and existing process expertise.
• The process flow chart helps in understanding the "as is" process.
• Define: potential project deliverables
• An "as is" process
• Preliminary project definition and scope
• Project charter
The measurement phase. This phase establishes the performance baseline. A well-defined project results in a successful project; therefore, the problem statement, objective, and improvement metric need to be aligned. If the problem statement identifies "defects" as the issue, then the objective is to reduce "defects," and the metric to track the objective is defects. This holds true for any problem statement, objective, or metric (% defects, overtime, RTY, etc.).
• Primary metric — a black belt needs to be focused. If other metrics are identified that impact the results, identify these as secondary metrics, i.e., reducing defects is the primary improvement metric, but we do not want to reduce line speed (line speed is the secondary metric).
• Project benefits — do not confuse projected project benefits with your objective. Make sure you separate these two items. (There are times when you may achieve your objective yet not see the projected benefits. This is because we cannot control all issues. We need to tackle them in a methodical order.)
Purpose of measurement phase
• Define the project scope, problem statement, objective, and metric.
• Document the existing process (using a process map, C&E matrix, and a FMEA).
• Identify key output variables (Ys) and key input variables (Xs).
• Establish a data-collection system for your Xs and Ys if one does not exist.
• Evaluate measurement system for each key output variable.
• Establish baseline capability for key output variables (potential and overall).
• Document the existing process.
• Determine if you have a method by which you can effectively and accurately collect data on your Xs and Ys in a timely manner. If this is not in place, you will need to implement a system. Without a system in place you will not be able to determine whether you are making any improvements in your project. Establish this system such that you can historically record the data you are collecting. This information should be recorded in a database that can be readily accessed. The data should be aligned in the database in such a manner that for each output (Y) recorded the operating conditions (X) are identified. This becomes important for future reference. This
data-collection system is absolutely necessary for the control phase of your project. Make sure all those who are collecting data realize its importance.
• Measurement systems analysis. The purpose is to determine whether the measurement system, defined as the gauge and operators, can be used to precisely measure the characteristic in question. We are not evaluating part variability but the gauge and operator capability. There are two primary guidelines: a) determine the measurement capability for the Ys, and b) complete this analysis before assessing the capability of the Ys. These studies are called gauge repeatability and reproducibility (GR&R) studies
• Measurement systems analysis (MSA)
• Measurement systems evaluation (MSE)
• Indices. Precision to tolerance (P/T) ratio = proportion of the specification taken up by measurement error (P/T ≤ 10% is desirable; up to 30% is marginal). Precision to total variation (P/TV) ratio (%R&R) = proportion of the total variability taken up by measurement error.
• Capability studies. Used to establish the proportion of the operating window taken up by the natural variation of the process.
• Short-term (potential) and long-term (overall) estimates of capability indices are taught.
• Indices used assuming process is centered: Cp, Pp, Zst.
• Indices used to evaluate shifted process: Cpk, Ppk, Zlt.
Measure: potential project deliverables
Project definition:
• Problem description
• Project metrics
Process exploration:
• Process flow diagram
• C&E matrix, PFMEA, fishbones
Data collection system
Measurement systems analysis (MSA):
• Attribute/variable gauge studies
• Capability assessment (on each Y)
• Capability (Cpk, Ppk, sigma level, DPU, RTY)
Graphical and statistical tools
Project summary
• Conclusions
• Issues and barriers
• Next steps
Completed local project review
Purpose of the analysis phase. To identify high-risk input variables (Xs) from the failure modes and effects analysis (FMEA); to reduce the number of process input variables (Xs) to a manageable number via hypothesis testing and ANOVA
techniques; to determine the presence of noise variables and their potential elimination via multi-vari studies; to plan and document initial improvement activities.
• Failure modes and effects analysis
• Documents effects of failed key inputs (Xs) on key outputs (Ys)
• Documents potential causes of failed key input variables (Xs)
• Documents existing control methods for preventing or detecting causes
• Provides prioritization for actions and documents actions taken
• Can be used as the document to track project progress
• Multi-vari studies. Study process inputs and outputs in a passive mode (natural day-to-day variation), specifically to identify and eliminate major noise variables (machine to machine, shift to shift, ambient temperature, humidity, etc.) before moving to the improvement phase; to take a first look at major input variables. Multi-vari studies help select or eliminate variables for study in designed experiments.
Analyze: potential project deliverables
• Project definition:
• Problem description
• Project metrics
• Passive process analysis
• Graphical analysis
• Multi-vari studies
• Hypothesis testing
• Updated PFMEA
• Project summary
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
Purpose of the improvement phase. The backbone of process improvement is the DOE (design of experiments). From the subset of vital few Xs, experiments are designed to actively manipulate the inputs to determine their effect on the outputs (Ys). This phase is characterized by a sequence of experiments, each based on the results of the previous study. Critical variables are identified during this process. Usually three to six Xs account for most of the variation in the outputs.
Improve: potential project deliverables
• Project definition:
• Problem description
• Project metrics
• Design of experiments:
• DOE planning sheet
• DOE factorial experiments
• Y = f(x1, x2, x3, …)
• Updated PFMEA
• Project summary:
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
Purpose of the control phase: to optimize, eliminate, automate, or control the vital few inputs; to document and implement the control plan; to sustain the gains identified; to reestablish and monitor long-term delivered capability; and to implement continual improvement efforts (green belts at the functional area). Execution strategy support systems include: safety requirements; defined maintenance plans; a system to track special causes; a required and critical spare parts list; troubleshooting guides; control plans; SPC charts; process monitors; inspection points; metrology control; workmanship standards; and others.
Control: potential project deliverables
• Project definition:
• Problem description
• Project metrics
• Optimization of Ys:
• Monitoring Ys
• Eliminating or controlling Xs
• Sustaining the gains:
• Updated PFMEA
• Process control plan
• Action plan
• Project summary:
• Conclusions
• Issues and barriers
• Final report
• Completed local project review
Rolled-throughput yield
• Traditional yield view
• Simple first-time yield = traditional yield
• Measuring first-pass yield
• Normalized yield
• Six Sigma breakthrough challenge
• Complexity: a measure of how complicated a particular good or service is. Theoretically, complexity will likely never be quantified in an exacting manner. If we assume that all characteristics are independent and mutually exclusive, we may say that "complexity" can be reasonably estimated by a simple count. This count is referred to as an opportunity count. In terms of quality, each product or process characteristic represents a unique "opportunity" to either add or subtract value. Remember, we only need to count opportunities if we want to estimate a sigma level for comparisons of goods and services that are not necessarily similar.
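The yield arithmetic above can be sketched for a hypothetical five-step process: rolled-throughput yield is the product of the per-step first-time yields, and normalized yield is the average per-step yield that the RTY implies.

```python
import math

# Hypothetical five-step process with first-time yields per step.
step_ftys = [0.98, 0.95, 0.99, 0.97, 0.96]

# Rolled-throughput yield: probability a unit passes every step defect-free.
rty = math.prod(step_ftys)

# Normalized yield: average yield per step implied by the overall RTY.
normalized = rty ** (1 / len(step_ftys))

print(f"RTY = {rty:.3f}, normalized yield per step = {normalized:.3f}")
```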
• Formulas to know
• Calculating transactional yield
• Hidden factory
• Take away — rolled-throughput yield
• Integrates rework loops
• Highlights "high-loss" steps… (put project emphasis here!)
• DPMO: counting opportunities:
• Nonvalue-add rules: no opportunity count should be applied to any operation that does not add value. Transportation and storage of materials provide no opportunities. Deburring operations do not count, either. Testing, inspection, gauging, etc., do not count. The product in most cases remains unchanged. An exception is an electrical tester where the tester is also used to program an EPROM. The product was altered and value was added.
• Supplied components rules: each supplied part provides one opportunity. Supplied materials, such as machine oil, coolants, etc., do not count as supplied components.
• Connections rules: each "attachment" or "connection" counts as one. If a device requires four bolts, there would be an opportunity of four, one for each bolt connected. A sixty-pin integrated circuit, SMD, soldered to a PCB counts as sixty connections.
• Sanity check rule: will applying counts in these operations take my business in the direction it is intended to go? If counting each dimension checked on a CMM inflates the denominator of the equation, adds no value, and increases cycle time when the company objective is to take cost out of the product, then this type of count would be counter to the company objective. Hence it would not provide an opportunity.
• Once you define an "opportunity," however, you must institutionalize that definition to maintain consistency.
Defining the project
• Contracting with your champion
• Project scope
• Problem statement
• Typical issues with problem statements
• Project objective
Who are your customers? What is a CTQ? Where do we get CTQs?
• Operational definitions
• COPQ categories
• Quality cost matrix
• Commercial quality
• What's included? COPQ difficult to quantify
• Number of hang-ups at a customer service center
• Lost sales
• Loss of morale
• Lost capacity
• COPQ/benefits must be measurable and fall to the bottom line — prove your numbers
• Establish baseline — 1 year's activity
• Calculate product cost net of margin. Do we get the sale later?
Introduction to process mapping
• Process mapping
• Micro level
• Process definition
• Benefits
• Versions of a process
• Process map levels
• Common symbols
• Basic structure
• The process of process mapping
• Main points
• Cause-and-effect matrix
• Employee/team involvement
• Practical concerns
• C&E matrix pitfalls
• FMEA overview
• FMEA form
• Severity ratings
• Occurrence ratings
• Current controls
• Detection ratings
• Interpreting RPNs
• Recommended actions
• Important factors
Introduction to data
• Description and definitions
• What do you want to know?
• Discrete vs. continuous data
• Categories of scale
• Nominal scale: nominal scales of measure are used to classify elements into categories without considering any specific property. Examples of nominal scales include "causes" on fishbone diagrams, yes/no, pass/fail, etc.
• Ordinal scale: ordinal scales of measure are used to order or rank nominal (pass/fail) data based on a specific property. Examples of ordinal scales include relative height, Pareto charts, customer satisfaction surveys, etc. Likert scale (ordinal): example rating scale ranges; five-point school grading system (A B C D E); seven-point numerical rating (1 2 3 4 5 6 7); verbal scale (excellent, good, average, fair, poor).
• Interval and ratio scale: interval scales of measure are used to express numerical information on a scale with equal distance
between categories, but no absolute zero. For example: temperature (°F and °C), a dial gauge sitting on top of a gauge block, comparison of differences, etc.
• Ratio scales of measure are used to express numerical information on a scale with equal distance between categories, but with an absolute zero in the range of measurement. A tape measure, ruler, position vs. time at constant speed, etc.
Selecting statistical techniques. There are statistical techniques to cover all combinations of data types. At this point it is recommended to introduce the computer software and explain some of the key issues. Emphasis should be given to the following items: basic functions and capabilities as well as pull-down menus of the software program; cutting and pasting from other programs; random number generators and basic statistical tools and graphical applications. Transfer data (back and forth) from Excel, text, ASCII, dBase files, etc.; importing data from the sources where original data are frequently found; graphics, control charts, DOE, ANOVA, t test, Poisson distribution, z score, capability, etc.
Basic statistics and probability distributions
• Plot the data
• Histogram
• Samples and populations
• Measures of location
• Sample mean
• Sample median
• Mean and median
• Measures of spread
• Measures of variation
• Standard deviation
• Degrees of freedom
• Basic statistics
• Additive property of variances
• Accuracy and precision
• Variation and specification
• Defects per million
Probability
• Probability density function
• Z transformation
• Standard deviation
• Empirical rule
• Normal distribution
• Normal probability plots
• Central limit theorem (see the sketch following this list)
• Sampling distribution of the mean
• Attribute or variable
• Types of data
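The central limit theorem item above points here. A few lines demonstrate it: repeated sample means drawn from a decidedly non-normal (exponential) population pile up around the population mean with spread close to sigma/sqrt(n). The population mean and sample size are arbitrary choices for illustration.

```python
import random
import statistics

random.seed(1)

# Sampling distribution of the mean: draw many samples from a skewed
# (exponential) population and watch the means cluster around mu.
population_mean = 10.0
sample_size = 30

means = [
    statistics.mean(random.expovariate(1 / population_mean) for _ in range(sample_size))
    for _ in range(2000)
]

# The means are approximately normal with standard error sigma/sqrt(n),
# even though the parent population is far from normal.
print(f"mean of sample means = {statistics.mean(means):.2f}")
print(f"std dev of sample means (standard error) = {statistics.stdev(means):.2f}")
# For an exponential population, sigma = mu, so expect about 10/sqrt(30) = 1.83.
```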
Process capability and performance • Process characterization • Converting DPM to a Z value • Changing requirements • Short-term vs. long-term • Pooled vs. total variation • Which standard deviation? • Rational subgroups • How much shift should we expect? • Short-term vs. long-term (1.5 shift: Z shift) • Z tables • Process capability • Data collection plan Measurement system analysis • What is MSA? • Measurement tools • A simple gauge • Calibration • Resolution • Control • Bias • Accuracy vs. precision • Linearity • Stability • Calibration • Consistency • Gauge R&R
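A minimal sketch of the gauge R&R indices discussed in this material (P/T and %R&R), computed from hypothetical variance components. The 5.15 multiplier (spanning 99% of a normal distribution) follows older AIAG practice; some references use 6 instead.

```python
import math

# Hypothetical variance components from a gauge R&R study.
sigma_repeatability = 0.012    # equipment variation (EV)
sigma_reproducibility = 0.009  # appraiser variation (AV)
sigma_total = 0.060            # total observed variation of the process

# Gauge standard deviation combines the two components in quadrature.
sigma_gauge = math.sqrt(sigma_repeatability**2 + sigma_reproducibility**2)

usl, lsl = 10.5, 9.5                       # hypothetical specification limits
p_to_t = 5.15 * sigma_gauge / (usl - lsl)  # precision-to-tolerance ratio
pct_rr = sigma_gauge / sigma_total         # %R&R (precision to total variation)

print(f"P/T = {p_to_t:.1%}  (<= 10% desirable, up to 30% marginal)")
print(f"%R&R = {pct_rr:.1%}")
```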
WEEK 2
Key questions from week 1
• Process performance metrics
• Cp and Pp; Cpk and Ppk
• When to add/subtract Zshift
• Project-related questions
• Process capability
• Process statistics
• Product performance
• Product benchmarks
• Multi-vari charts and multi-vari studies. The purpose of these charts is to narrow the scope of input variables to the leverage KPIVs. (Identify inputs and outputs.)
• Tools to identify and analyze inputs/outputs
• Multi-vari studies roadmap
• Data collection
• Interpretation: noisy data — outliers
• Preliminary conclusions
Standard error of the mean
• What happens as sample size increases?
• Confidence intervals: statistics such as the mean and standard deviation are only estimates of the population mus and sigmas and are based on only one sample. Because there is variability in these estimates from sample to sample, we can quantify our uncertainty using statistically based confidence intervals. Most of the time, we calculate 95% confidence intervals (CIs). According to the conventional interpretation, approximately 95 out of 100 CIs will contain the population parameter, i.e., it is 95% certain that the population parameter is inside the interval.
• Comparison of histograms
• Parametric confidence intervals: the parametric confidence intervals assume a t-distribution of sample means and use this to calculate CIs. (The t-distribution is a family of bell-shaped distributions that are dependent on sample size. The smaller the sample size, the wider and flatter the distribution.)
Hypothesis testing introduction
• Why learn hypothesis testing: hypothesis testing is a stepping stone to ANOVA and DOE. However, statistics are not a substitute for professional judgment. Hypothesis testing employs data-driven tests that assist in the determination of the vital few Xs. Black belts use this tool to identify sources of variability and establish relationships between Xs and Ys. To help identify the vital few Xs, historical or current data may be sampled. (Passive: you have either directly sampled your process or have obtained historic sample data. Active: you have made a modification to your process and then sampled. Statistical testing provides objective solutions to questions that are traditionally answered subjectively.)
• Hypothesis testing description
• The null and alternate hypotheses
• The method and the roadmap
• Hypothesis testing answers the practical question of whether there is a real difference between _____ and _____ . Because we use relatively small samples to answer questions about population parameters, there is always a chance that we selected a sample that is not representative of the population. Therefore, there is always a chance that the conclusion obtained is wrong. With some assumptions, inferential statistics allows us to estimate the probability of getting an "odd" sample. This lets us quantify the probability (p-value) of a wrong conclusion.
• What is hypothesis testing?
• Formulating hypotheses
• Tests of significance
• Significance level. The significance level, α (alpha), is the risk of wrongly rejecting the null hypothesis (commonly .05 or .10). Testing at a given alpha level requires two things: a) an assumption of no difference (Ho) and b) a reference distribution of some sort.
• Producer's vs. customer's risk
What is the signal-to-noise ratio?
Steps in hypothesis testing
Managing change
• A challenge of leadership
• Change management process: a demonstrated management commitment to making change happen. Must be visible, active, and command public support for accelerating change; willing to take personal initiative and support others' initiatives in changing the status quo; risk-taking, self-confident, and empowered behavior exhibited by individuals at multiple levels in the organization. Must be willing to identify and resolve conflicts and paradoxes inherent in the change process. (Here, review some of the issues on leadership in Volume 1 and the requirements for executives and champions — Chapters 7 and 8.)
An introduction to graphical methods
• Pareto
• Histogram
• Run chart
• Scatter plot
• Correlation vs. causality
• Boxplot
• Comparison of means (single, two, and paired comparisons) (see the worked example following this list)
• T distribution
• Hypothesis testing for attribute data
• Hypothesis tests: proportions
• Hypotheses concerning one proportion
• Hypotheses concerning two proportions
• Chi-square test for independence
• Chi-square test
• Chi-square test for a relationship
ANOVA (analysis of variance)
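The comparison-of-means item above points here. This worked example uses hypothetical cycle-time data: a pooled two-sample t statistic and a 95% confidence interval for the difference answer the practical question of whether there is a real difference between the two conditions.

```python
from statistics import mean, stdev
import math

# Hypothetical cycle times (minutes) before and after a process change.
before = [14.2, 13.8, 14.5, 14.9, 13.6, 14.4, 14.1, 14.7]
after  = [13.1, 13.5, 12.9, 13.4, 13.8, 13.0, 13.3, 13.6]

n1, n2 = len(before), len(after)
diff = mean(before) - mean(after)

# Pooled standard deviation (assumes roughly equal variances in both groups).
sp = math.sqrt(((n1 - 1) * stdev(before)**2 + (n2 - 1) * stdev(after)**2) / (n1 + n2 - 2))
se = sp * math.sqrt(1 / n1 + 1 / n2)
t = diff / se

# From a t table with df = n1 + n2 - 2 = 14, the two-sided .05 critical value
# is about 2.145.
t_crit = 2.145
ci = (diff - t_crit * se, diff + t_crit * se)

print(f"t = {t:.2f}; 95% CI for the difference: {ci[0]:.2f} to {ci[1]:.2f}")
# Reject Ho (no difference) at alpha = .05 if |t| exceeds t_crit.
```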
WEEK 3
Key questions from the first and second weeks
Questions and review about the project
Introduction to design of experiments
• What is experimental design? Organizing the way in which one changes one or more input variables (Xs) to see if any of them, or any combination of them, affects the output (Y) in a significant way. A well-designed experiment eliminates the effect of all possible Xs except the ones that you changed. Typically, if the output variable changes
significantly, it can be tied directly to the input X variable that was changed and not to some other X variable that was not changed. The real power of experimentation is that sometimes we get lucky and find a combination of two or more Xs that make the Y variable perform even better! DOE as a proactive tool. There is no such thing as a bad experiment — only poorly designed and executed ones. Not every experiment will produce major results, but most will provide information. Something will always be learned. New data prompt new questions and generate follow-up studies. • Four phases of designed experiments: • Planning: careful planning involves clearly defining the problem of interest, the object of the experiment, and the environment in which the experiment will be carried out. • Screening: initial experiments aim to reduce the number of potentially influential variables to a vital few. Screening allows us to focus process improvement efforts on the most important variables. (Screening designs include two-level full and fractional factorials, general full factorials, and Plackett-Burman.) • Optimization: after we have identified the vital few variables by screening, we need to determine the best values in order to optimize a process; for example, we may want to maximize a yield or reduce product variability. Optimization designs include full factorial designs (two-level and general) and response surface designs (central composite and Box-Behnken). • Verification: we can perform a follow-up experiment at the predicted best process conditions to confirm optimization results. • Why not one factor at a time? • Benefits of DOE • Types of experiments • One at a time • Full factorial • Partial factorial • Classes of DOE • Classical • Taguchi • Terms used in DOE • Main effects and interactions • Contrasts • Yates standard order • Run order for a DOE • Orthogonal arrays • Randomization • Repetition vs. replication • Strategy of experimentation • Team effort • Focus on ideal function
• Barriers to effective experimentation
• Transactional Six Sigma. Focus on the X–Y relationship
• 2k factorials. (In two-level designs, there is a risk of missing a curvilinear relationship. Inclusion of center points is an efficient way to test for curvature without adding a large number of extra runs.)
• Advantages of 2k factorials
• Standard order of 2k designs
• Looking at interactions
• Interactions — interpretation
• Cube plots
• Types of 2k factorials
• Center points and blocking
• Adding center points
• Cube plot for bubbles
• Blocking experiments
• Blocking with 2k factorials
• One observation per treatment combination
• Usually low statistical power
• Use normal plots and Pareto analysis instead of F-tests
• Back out higher-order combinations
• More than one observation per treatment combination
• Better estimates of error
• Confounding and blocking
• Residuals analysis
• Residuals
Why Use Screening Experiments?
• Screening designs: these designs are a powerful tool for analyzing multiple factors and interactions. The designs offer the flexibility of a reduced run size without compromising information. One word of caution: do not reduce the experiment too far. By doing fewer runs, you may not obtain the desired level of information. Key features: 1. Two-level factorials — resolution IV, V, or higher. 2. General full factorials. 3. These allow estimation of at least 2-way interactions. 4. They can model weak curvature through center points and can be built up into a response surface (blocked central composite) design to model more pronounced curvature. 5. They provide direction for further experimentation in search of an optimal solution. (Recommendation: this is the design most often used in industry. They are good, low-cost, all-purpose designs.)
• Factorial experiments — key features: 1. Know which resolution you are running: always two-level factorials. 2. Useful to estimate main effects mostly (not interactions). 3. They can be built up to a higher-order blocked factorial design. 4. Limited to 15 runs.
5. Don't expect more than what the design will provide. (Recommendation: use these designs when you need to narrow down the list of important factors. They are easy to interpret and cost effective.)
• The success of fractional factorials rests on the fact that main effects and lower-order interactions are generally the key factors. Full factorials can usually be derived from a fractional factorial experiment once nonsignificant factors are eliminated.
• Half-fraction
• Design resolution
• Choosing a design
• Notation
• Alias structure
• Designing a fractional factorial (a short construction sketch appears at the end of this week's material)
• Planning of designed experiments
• Response surface designs: to model responses that exhibit quadratic (curvilinear) relationships with the factors. Key features: 1. Recommended for nonsequential experiments. (Only one shot!) 2. Use when extreme combinations cannot be run. 3. Excellent for optimizing since curvature is typically seen around optima. 4. Designs are costlier (more runs). Factors of interest should be low in number. 5. These can be used to minimize variation. 6. These can be used to put the process on target, maximize, or minimize a measure of interest.
Implement solutions
• Understand the current process.
• What is the baseline capability?
• Is your process under statistical control?
• Is the measurement system adequate?
• Factor selection.
• Which factors (KPIVs) do we include?
• Where should they come from?
• Process map
• Cause-and-effect matrix
• FMEA
• Multi-vari study results
• Brainstorming (fishbone)
• Process knowledge
• Operator experience
• Customer/supplier input
• Level selection. After the test factors are identified, we must now set the levels of those factors we want to test. Factors to consider:
• What is the objective of the experiment?
• What is the right level differentiation to obtain the information needed?
• If the levels are too wide or narrow, nothing will be gained. (Level guideline: 20% above and below the specs. If no specs, +/–3 sigma from the mean.)
• Sequential experimentation
• Select experimental design
• Screening/fractional factorial. (Full factorial — sample plan: how many runs can we afford? The more runs (samples), the better the understanding and confidence in the result; how are we controlling noise and the controllable variables that we know about?)
• Planning considerations
• What is the objective of the experiment?
• Have the response and independent variables been identified?
• Are the levels for the independent variables appropriate and applicable?
• What will the experiment cost? Have budgets and timelines been approved?
• General advice: the best time to design an experiment is after the previous one is finished. Do not try to answer all the questions in one study. Rely on a sequence of studies. Use two-level or screening designs early. Spend less than 25% of budget on the first experiment. A final report is a must!!
• What is our plan for randomization?
• Is repetition or replication better?
• Are all of the necessary players involved (informed)?
• How long will it take?
• How are we going to analyze the data?
• Have we planned a pilot run and walked through the process?
• Has the necessary paperwork been completed?
• Experiment request (appropriate authorization forms)
• Make sure the MSA has been validated
• Uncontrolled: blocking and randomization
• Noise: how are they to be controlled?
• Overview of design and procedures
• Team members for each phase
Final report
• Suggested items:
• Executive summary
• Problem statement and background
• Objectives
• Response variables
• Independent variables (controlled and uncontrolled with measurements)
• Experimental design
• DOE process flow
• Results and data analysis
• Conclusions
• Appendices (suggestions)
• Detailed data analysis
• Original data if practical
• Details on instrumentation or procedures
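As promised under "Designing a fractional factorial" earlier in this week's material, here is a minimal construction sketch: a 2^(3-1) half fraction built with the generator C = AB (defining relation I = ABC) and the alias pattern that the generator implies. This is an illustration only; any real design should come from the planning steps above.

```python
from itertools import product

# Build a half fraction of a 2^3 design using the generator C = AB.
runs = []
for a, b in product((-1, +1), repeat=2):   # full 2^2 in factors A and B
    c = a * b                              # generator: C = AB
    runs.append((a, b, c))

print(" A  B  C=AB")
for a, b, c in runs:
    print(f"{a:+d} {b:+d} {c:+d}")

# Confounding implied by I = ABC: each main effect is aliased with the
# two-factor interaction of the other two factors (a resolution III design).
for effect, alias in (("A", "BC"), ("B", "AC"), ("C", "AB")):
    print(f"{effect} is confounded with {alias}")
```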
WEEK 4
Week 1 review
Week 2 review
Week 3 review
Introduction to week 4
Questions/concerns about material of previous weeks and the project
Transition project into control and continue to sustain the gain
Develop your contract with the impacted process
Tools to assure the process remains in control
Keys to success
• Early involvement of all work cell/department members
• Update all affected parties (including supervisors/managers) regularly
• Get buy-in — no surprises!
• Poka-yoke the process
• Establish frequent measurement
• Establish procedures for the new/updated process
• Train everyone — assign responsibilities
• Monitor the results
• How do I transition my project?
• Assure your project is complete enough to transition
• No loose ends — a plan (project action plan) for everything not finalized
• Start early in your project to plan for transitioning
• Identify team members at start of project
• Remind them they are representatives of a larger group
• Communicate regularly with people in the impacted area
• Display your project in the impacted area during all phases. Remember, no surprises
• Hold regular updates with the impacted area, assuring their concerns are considered by your team. When possible, get others involved to help
• Data collection
• Idea generation (brainstorming events)
• Project action plan. What is a project action plan anyway? It is a documented communication tool (contract) that allows you to identify a) what is left to do to complete your project, b) who is responsible to carry out each task, c) when they should have it complete, and d) how it should be accomplished.
• Why do we need a project action plan? To ensure all improvement actions that are not yet completed are ultimately done.
• Do I have to have one? Only if there are unfinished tasks to your improvement process that you expect others to carry out after the transition. (The tasks must be negotiated and agreed to.)
• Who will monitor the plan for implementation/completion? Both you and the responsible supervisor or manager who assumes ownership.
• Who has ultimate responsibility? The owner of each task and the responsible supervisor or manager.
Sustaining the gain
• Product changes
• Revise drawings by submitting engineering action requests (EARs)
• Work with process, test, and product engineers
• Process changes
• Physically change the process flow (5S the project area).
• Develop visual indicators.
• Establish/buy new equipment to aid assembly/test.
• Poka-yoke wherever possible, including forms/worksheets.
• Procedures (standardized work instructions).
• Develop new procedures or revise existing ones.
• Notify quality assurance (QA) of new procedures to incorporate in internal audits.
• Provide QA a copy of standardized work instructions.
• Measurements (visual indicators). Build into the process the posting of key metric updates. Make it part of someone's regular job to do in a timely fashion. Make it someone's job to review the metric and take action when needed.
• Training — train everyone in the new process (don't leave until there is full understanding).
• 5S workplace organization — to ensure your gains are sustainable, you must start with a firm foundation. 5S standards are the foundation that supports all the phases of Six Sigma manufacturing. The foundation of a production system is a CLEAN and SAFE work environment. Its strength depends on employee commitment to maintaining it and on management actively supporting it.
• 5S standards
• Standardized work
• Prerequisites for standardized work
• Standardized work flow
About control plans
• Benefits to developing and implementing CPs — improves overall quality by reducing chances of quality excursions; reduces shrinkage or defects in MFG/transaction processes by keeping processes centered; data aid in timely troubleshooting of MFG/transaction processes; communication vehicle for changes to CTQ characteristics, control methods, etc.
• Purpose of control plans:
• The control plan provides a written summary description of the system for controlling parts and processes.
• The control plan is used to minimize process and product variation.
• The control plan describes the actions that are required at each phase of the process — including receiving, in-process, final assembly, and shipping — to assure that all process outputs will be in a state of control.
• A control plan for operational actions, such as ordering, order taking, invoicing, billing, etc., can also be utilized for transactional operations.
• The control plan does not replace the information contained in detailed operator instructions.
• Since processes are expected to be continually updated and improved, the control plan is a living document, reflecting the current methods of control and measurement systems used.
Developing a control plan
• A basic understanding of the process must be obtained. Establish a multifunction team to gather and utilize appropriate available information, such as:
• Process flow diagram
• Failure mode and effects analysis (process and design)
• Special characteristics (critical and significant characteristics)
• Control plans and lessons learned from similar parts or processes
• Team's knowledge of the process
• Technical documentation (design/process notices, MPIs, PM)
• Validation plan results (DVP, EVP, PVP)
• Optimization methods (QFD/DOE)
• Develop the process flow diagram:
• Map the process.
• Develop the process FMEA (PFMEA). Examine each process operation for potential problems and failures. Focus on characteristics that are important to the customer and to product safety. A PFMEA should be required for all new product processes. PFMEAs must eventually be developed for all existing product lines. If a PFMEA does not exist, then customer concerns and complaints must be considered when developing the control plan.
• Develop a preliminary standardized format.
• Conduct a multi-functional team review for revision/consensus of the format.
• Install the format with change control approval. This will assign/show a document number, version number, issue date, and owner.
• Implement the format.
• Update/revise manufacturing process instructions, control charts, gauge systems, etc., as required by the new control plan.
Control phase
• Quality system overview
• Process control system
• Process control actions
• Quality system structure
• Evaluating the environment
• Elements of process control
• Functional quality system
• Continual SPC tools
• The behavior of processes
• Sources of variation
• The foundation of SPC
• Types of control charts
• Basic components of a control chart
• Link between control limits and hypothesis testing
• Interpreting control signals
• Variable control charts
• Rational subgrouping
• Control chart — nonnormal distribution
• Planning and managing SPC and SPM
• Attribute control charts
• Discrete SPC tools
• Types of control charts for discrete data
• Control chart interpretation
• Alternative methods of control
• Precontrol
• Precontrol charting
• Process capability estimate
• Zone control charting
• Poka-yoke (mistake-proofing, error-proofing)
• Types of mistake-proofing
• Errors vs. defects
• Types of human errors
• "Red flag" conditions
• Control/feedback logic
• Guidelines for mistake-proofing
• Mistake-proofing strategies
• Maintenance: a reliability function encompassing all transactional, information systems, and production equipment. The maintenance function should be linked to customer CTQs, and it should address all six Ms: manpower, procedures, policies, place, environment, and measurements. Maintenance can and should be a reliability function and not just a repair function, for it maximizes output, minimizes cost, and assures continued operation and, hence, customer satisfaction. Therefore, an organization must have an integrated maintenance strategy. This means that there must be a preventive maintenance program in any organization.
• Realistic tolerancing: a simple graphical method for establishing optimum levels and appropriate tolerances for INPUTs. Once it is
determined that a continuous output depends linearly on a continuous input, the output specification is used to create an input specification. Scatter plots and fitted line plots demonstrate association of inputs and outputs, not necessarily cause and effect.
Gauge/measurement and control
Maintenance and calibration log
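A minimal sketch of the realistic tolerancing idea just described: once a linear fit relates a continuous input to a continuous output, the output specification limits are mapped back through the fitted line to obtain an input specification. The data and spec limits are hypothetical, and the caution above applies: the fit demonstrates association, not necessarily cause and effect.

```python
import numpy as np

# Hypothetical paired observations of a continuous input x and output y.
x = np.array([1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4])
y = np.array([10.1, 10.9, 11.8, 12.7, 13.4, 14.2, 15.1, 15.8])

b1, b0 = np.polyfit(x, y, 1)            # least-squares line y = b0 + b1*x

# Map the output spec back through the line (assumes b1 != 0 and linearity).
y_lsl, y_usl = 11.0, 15.0               # hypothetical output specification
x_limits = sorted(((y_lsl - b0) / b1, (y_usl - b0) / b1))

print(f"fitted line: y = {b0:.2f} + {b1:.2f}x")
print(f"input specification: {x_limits[0]:.2f} to {x_limits[1]:.2f}")
```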
TECHNICAL TRAINING — 4 WEEKS WEEK 1 Introductions Agenda Training ground rules: • If you have any questions, please ask! • Share your experiences. • When we take frequent short breaks, please be prompt in returning so we can stay on schedule. • There will be a number of team activities; please take an active role. • Please use name tents. • Listen as an ally. • The goal is to complete your projects! Exploring our values Six Sigma focus • Delighting the customer through flawless execution • Rapid breakthrough improvement • Advanced breakthrough tools that work • Positive and deep culture change • Real financial results that impact the bottom line What is Six Sigma? • Vision • Philosophy • Aggressive goal • Metric (standard of measurement) • Benchmark • Method • Vehicle for: • Customer focus • Breakthrough improvement • Continual improvement • People involvement • Defining the goals of the business • Defining performance metrics that tie to the business goals • Identifying projects using performance metrics that will yield clear business results
• Applying advanced quality and statistical tools to achieve breakthrough financial performance
• Goal
• Performance target
• Problem-solving methodology
The Strategy
• Which business function needs it?
• Leadership participation. Six Sigma only works when leadership is passionate about excellence and willing to change.
• Is your leadership on board?
• Fundamentals of leadership
• Challenge the process
• Inspire a shared vision
• Enable others to act
• Model the way
• Encourage the heart
• Six Sigma is a catalyst for leaders
The breakthrough phases
• Define
• Measure
• Analyze
• Improve
• Control
The foundation of the Six Sigma tools
• Cost of poor quality
• What is cost of poor quality? COPQ data
• Getting there through inspection
• Six Sigma overview
• Overall perspective
• Manufacturing process picture
• Defects and variation
• Variation and process capability
Process capability and improvement
• The defect elimination system
• Overall perspective
• Defects and the hidden factory
• Rolled-throughput yield vs. first-time yield
• What causes defects?
• Excess variation due to:
• Manufacturing processes
• Supplier (incoming) material variation
• Unreasonably tight specifications (tighter than the customer requires)
• Dissecting process capability
Premise of Six Sigma: sources of variation can be:
• Identified
• Quantified
• Eliminated or controlled
How do we improve capability?
• Six Sigma, metrics, and continual improvement
Six Sigma is characterized by:
• Defining critical business metrics
• Tracking them
• Improving them using proactive process improvement
• Continual improvement: defects per unit (DPU) drives plant-wide improvement. Defects per million opportunities (DPMO) allows for comparison of dissimilar products
• Calculating the product sigma-level — sigma level allows for benchmarking within and across companies
• Metrics: Six Sigma's primary metric is defects per unit, which is directly related to rolled-throughput yield (Yrt = e^(–dpu)). (Cost of poor quality and cycle time or throughput are two others.)
Process steps, FTY, and RTY
Harvesting the fruit of Six Sigma
PPM conversion chart
Translating needs into requirements
• Implementation of Six Sigma success factors
• Directly affects quality, cost, cycle time, and financial results
• Focuses on the customer and critical metrics
• Directly attacks variation, defects, and the hidden factory
• Ensures a predictable factory
Black belt execution strategy
To introduce roles and responsibilities and to describe the execution strategy
• To overview the steps
• To overview the tools
• To overview the deliverables
• To discuss the role of the black belt in relationship to
• Delivering successful projects using the breakthrough strategy
• Training and mentoring the local organization on Six Sigma
Roles of a black belt
• Mentoring:
• Cultivate a network of experts in the factory or on site
• Work with the operators
• Work with the process owners
• Work with all levels of management
• Teaching and coaching:
• Provide formal training to local personnel regarding new tools and strategies
• Become the conduit for information
• Provide one-on-one support
• Develop effective teams
• Identifying and discovering:
• Find new applications
• Identify new projects
• Uncover new business opportunities
• Connect the business through the customer and supplier
• Seek best practices
• Being involved:
• Sharing best practices throughout the organization
• Being a spokesperson to the customer
• Driving supplier performance
• Getting involved with executive management
• Becoming a future leader
Prerequisites for black belts
• Breakthrough strategy training
• Black belt instruction
• Roles of the master black belt
Role of executives:
• Will set meaningful goals and objectives for the corporation
• Will drive the implementation of Six Sigma
Roles of the master black belt
• Be the expert in tools and concepts
• Develop and deliver training to various levels of the organization
• Certify black belts
• Assist in the identification of projects
• Coach and support BBs in project work
• Participate in project reviews to offer technical expertise
• Partner with the champions
• Demonstrate passion for Six Sigma
• Share best practices
• Take on leadership of major programs
• Develop new tools or modify old tools for application
• Understand the connection between Six Sigma and the business strategy
Role of champion:
• Will select black belt projects consistent with corporate goals
• Will drive the implementation of Six Sigma through public support and removal of barriers
Role of green belt:
• Will deliver successful localized projects using the breakthrough strategy
Six Sigma instructor:
• Will make sure all black belt candidates are certified in the understanding, usage, and application of Six Sigma tools
BB execution strategy. The purpose of the BB execution strategy is to ensure that sources of variation in manufacturing and transactional processes are appropriately and
objectively identified, quantified, and controlled or eliminated. By using the breakthrough strategy, process performance is sustained through well-developed, documented, and executed process control plans. The goal, of course, is to achieve improvements in rolled-throughput yield, cost of poor quality, and capacity–productivity. To reach this goal, BBs use the DMAIC model, the Kano model, QFD, and other tools and methodologies. The phases of process improvement are: • The define phase • Refine the project • Establish the “as is” process • Identify customers and CTQs • Identify goals and scope of project • Simple QFD (quality function deployment) tools, used to emphasize the importance of understanding customer requirements, are the CTs (critical-tos) — CTCost, CTDelivery, CTQuality. The tools relate the Xs and Ys (customer requirements) using elements documented in the process map and existing process expertise. The expected result is a Pareto of Xs that are used as input into the FMEA and control plans. These are the CTPs — critical to the process — which are anything that we can control or modify about our process that will help us achieve our objectives. • The measurement phase. Establish the performance baseline. A well-defined project results in a successful project. Therefore, the problem statement, objective, and improvement metric need to be aligned. If the problem statement identifies defects as the issue, then the objective is to reduce defects, and the metric to track the objective is defects. This holds true for any problem statement, objective, and metric (% defects, overtime, RTY, etc.). • Primary metric — a black belt needs to be focused; if other metrics are identified that impact the results, identify these as secondary metrics, e.g., reducing defects is the primary improvement metric, but we do not want to reduce line speed (line speed is the secondary metric). • Project benefits — do not confuse projected project benefits with your objective. Make sure you separate these two items. • There are times when you may achieve your objective, yet not see the projected benefits. This is because we cannot control all issues. We need to tackle them in a methodical order. • Purpose of measurement phase • Define the project scope, problem statement, objective, and metric • Document the existing process (using a process map, C&E matrix, and an FMEA) • Identify key output variables (Ys) and key input variables (Xs) • Evaluate measurement system for each key output variable • Establish baseline capability for key output variables (potential and overall)
• Document the existing process • Critical-to matrix (cause-and-effect matrix) • Establish data-collection system. Determine if you have a method by which you can effectively and accurately collect data on your Xs and Ys in a timely manner. If this is not in place, you will need to implement a system. Without a system in place you will not be able to determine whether you are making any improvements in your project. Establish this system such that you can historically record the data you are collecting. This information should be recorded in a database that can be readily accessed. The data should be aligned in the database in such a manner that for each output (Y) recorded the operating conditions (X) are identified. This becomes important for future reference. This data-collection system is absolutely necessary for the control phase of your project. Make sure all those who are collecting data realize its importance. • Measurement systems analysis: to determine whether the measurement system, defined as the gauge and operators, can be used to precisely measure the characteristic in question. It is very important to make the point that we are not evaluating part variability, but the capability of the gauge and operators. Some guidelines are: • Determine the measurement capabilities for Ys • Need to be completed before assessing capability of Ys • These studies are called: • Gauge repeatability and reproducibility (GR&R) studies • Measurement systems analysis (MSA) • Measurement systems evaluation (MSE) • Indices: precision to tolerance (P/T) ratio = proportion of the specification taken up by measurement error (P/T ≤ 10% is desirable; P/T = 30% is marginal.) Precision to total variation (P/TV) ratio (%R&R) = proportion of the total variability taken up by measurement error. • Capability studies: used to establish the proportion of the operating window taken up by the natural variation of the process. Short-term (potential) and long-term (overall) estimates of capability indices are taught. Indices used assuming the process is centered: Cp, Pp, and Zst; indices used to evaluate a shifted process: Cpk, Ppk, and Zlt. The analysis phase. The purpose is to identify the vital few Xs by identifying high-risk input variables (xs) from the failure modes and effects analysis (FMEA); to reduce the number of process input variables (xs) to a manageable number via hypothesis testing and ANOVA techniques; to determine the presence and potential elimination of noise variables via multi-vari studies; and to plan and document initial improvement activities. • Failure modes and effects analysis • Documents effects of failed key inputs (xs) on key outputs (Ys). • Documents potential causes of failed key input variables (xs).
• Documents existing control methods for preventing or detecting causes. • Provides prioritization for actions and documents actions taken. • Can be used as the document to track project progress. • Multi-vari studies: study process inputs and outputs in a passive mode (natural day-to-day variation). Their purpose is to identify and eliminate major noise variables (machine to machine, shift to shift, ambient temperature, humidity, etc.) before moving to the improvement phase and to take a first look at major input variables. Ultimately, multi-vari studies help select or eliminate variables for study in designed experiments. • The improvement phase. Determine the governing transformation equation through understanding the ideal function. The backbone of the process improvement is DOE (design of experiments). From the subset of vital few Xs, experiments are designed to actively manipulate the inputs to determine their effect on the outputs (Ys). This phase is characterized by a sequence of experiments, each based on the results of the previous study. Critical variables are identified during this process. Usually three to six Xs account for most of the variation in the outputs. Ultimately, the purpose of this phase is to control and focus on the continual improvement process. • The control phase: optimize, eliminate, automate, or control the vital few inputs; document and implement the control plan; sustain the gains identified; reestablish and monitor long-term delivered capability; implement continual improvement efforts (green belts at the functional area); execution strategy support systems; safety requirements; maintenance plans defined; system to track special causes; required and critical spare parts list; troubleshooting guides; control plans for both short and long term; SPC charts for process monitoring; inspection points and metrology control; workmanship standards; and others. Potential project deliverables • Define: • Identification of customers • Identification of customers’ needs • Identify the “as is” process • Formulate the goal and scope of the project • Update the project charter • Measure: • Project definition: • Problem description • Project metrics • Process exploration: • Process flow diagram • C&E matrix, PFMEA, fishbones • Data-collection system • Measurement systems analysis (MSA): • Attribute/variable gauge studies • Capability assessment (on each Y) • Capability (Cpk, Ppk, sigma level, DPU, RTY)
• Graphical and statistical tools • Project summary: • Conclusions • Issues and barriers • Next steps • Completed local project review • Analyze: • Project definition: • Problem description • Project metrics • Passive process analysis: • Graphical analysis • Multi-vari studies • Hypothesis testing • DOE planning sheet • Updated PFMEA • Project summary: • Conclusions • Issues and barriers • Next steps • Completed local project review • Improve: • Project definition: • Problem description • Project metrics • Design of experiments: • DOE planning sheet • DOE factorial experiments • Y = F(x1, x2, x3, …) • Updated PFMEA • Project summary: • Conclusions • Issues and barriers • Next steps • Completed local project review • Control: • Project definition: • Problem description • Project metrics • Optimization of Ys (RSM/EVOP) • Monitoring Ys • Eliminating or controlling Xs • Sustaining the gains: • Updated PFMEA • Process control plan • Action plan
• Project summary: • Conclusions • Issues and barriers • Final report • Completed local project review The classical perspective of yield Simple first-time yield = traditional yield Measuring first-pass yield Rolled-throughput yield Normalized yield Complexity. Complexity is a measure of how complicated a particular good or service is. Theoretically, complexity will likely never be quantified in an exacting manner. If we assume that all characteristics are independent and mutually exclusive, we may say that complexity can be reasonably estimated by a simple count. This count is referred to as an “opportunity count.” In terms of quality, each product or process characteristic represents a unique “opportunity” to either add or subtract value. Remember, we only need to count opportunities if we want to estimate a sigma level for comparisons of goods and services that are not necessarily similar. Hidden factory. DPMO opportunity-counting rules: • Nonvalue-add rules: no opportunity count should be applied to any operation that does not add value. Transportation and storage of materials provide no opportunities. Deburring operations do not count, either. Testing, inspection, gauging, etc. do not count. The product in most cases remains unchanged. An exception is an electrical tester where the tester is also used to program an EPROM. The product was altered and value was added. • Supplied components rules: each supplied part provides one opportunity. Supplied materials such as machine oil, coolants, etc. do not count as supplied components. • Connections rules: each “attachment” or “connection” counts as one. If a device requires four bolts, there would be an opportunity of four, one for each bolt connected. A sixty-pin integrated circuit, SMD, soldered to a PCB counts as sixty connections. • Sanity check rule: will applying counts in these operations take my business in the direction it is intended to go? If counting each dimension checked on a CMM inflates the denominator of the equation, adds no value, and increases cycle time when the company objective is to take cost out of the product, then this type of count would be counter to the company objective. Hence, it would not provide an opportunity. Once you define an “opportunity,” however, you must institutionalize that definition to maintain consistency. Introduction to software package used. The instructor should provide information about the software at least in the following areas.
• Purpose of using the software • Capabilities of the software • Cut and paste • Formatting data • Numeric vs. alpha columns • Date columns • Entering data • Graphing • Basic statistics • Help menu • Normality testing • ANOVA • Z scores • Creating random data Basic Statistics • Mean • Median • Normal distribution • t-test • z-test Fundamentals of improvement • Variability: is the process on target with minimum variability? We use the mean to determine if the process is on target. We use the standard deviation (σ) to determine spread. • Stability: How does the process perform over time? Stability is represented by a constant mean and predictable variability over time. If the process is not stable, identify and remove causes (Xs) of instability (obvious nonrandom variation). Determine the location of the process mean. Is it on target? If not, identify the variables (Xs) that affect the mean and determine optimal settings to achieve target value. Estimate the magnitude of the total variability. Is it acceptable with respect to the customer requirements (spec limits)? If not, identify the sources of the variability and eliminate or reduce their influence on the process. • Can we tolerate variability? Even though there will always be variability present in any process, we can tolerate variability if: a) the process is on target, b) the total variability is relatively small compared to the process specifications, and c) the process is stable over time. Types of outputs (data) • Attribute data (qualitative) • Variable data (quantitative) • Discrete (count) data • Continuous data Selecting statistical techniques. There are statistical techniques available to analyze all combinations of input/output data.
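To make the basic statistics and normality-testing topics concrete, the following is a minimal sketch in Python with NumPy/SciPy, offered only as a generic stand-in for the commercial statistics package the course assumes; the sample data are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of 30 shaft diameters (mm); invented for illustration.
rng = np.random.default_rng(1)
diameters = rng.normal(loc=25.0, scale=0.05, size=30)

mean = np.mean(diameters)            # measure of central tendency
median = np.median(diameters)
std_dev = np.std(diameters, ddof=1)  # sample standard deviation (n - 1)

# Shapiro-Wilk test: H0 = the data come from a normal distribution.
stat, p_value = stats.shapiro(diameters)

print(f"mean = {mean:.4f}, median = {median:.4f}, s = {std_dev:.4f}")
print(f"Shapiro-Wilk p-value = {p_value:.3f}")
# A p-value above 0.05 gives no evidence against normality, so
# normal-theory tools (t-tests, capability indices) are reasonable.
```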
Statistical distributions. We can describe the behavior of any process or system by plotting multiple data points for the same variable over time, across products on different machines, etc. The accumulation of these data can be viewed as a distribution of values represented by: • Dot plots • Histograms • Normal curve or other “smoothed” distribution Population parameters vs. sample statistics • Population: an entire group of objects that have been made or will be made containing a characteristic of interest. Very likely we will never know the true population parameters. • Sample: the group of objects actually measured in a statistical study. A sample is usually a subset of the population of interest. • Measures of central tendency: median, mean, and mode • Measures of variability: • Range — numerical distance between the highest and the lowest values in a data set. • Variance (σ², s²) — the average squared deviation of each individual data point from the mean. (Emphasize that variances add up. In fact, variances of the inputs add up to calculate the total variance in the output.) • Standard deviation (σ, s) — the square root of the variance; it is the most commonly used measurement to quantify variability. (Emphasize that standard deviations do not add.) The normal distribution — a distribution of data which has certain consistent properties. These properties are very useful in our understanding of the characteristics of the underlying process from which the data were obtained. Most natural phenomena and man-made processes are distributed normally, or can be represented as normally distributed. • Property 1: a normal distribution can be described completely by knowing only the mean and standard deviation. • Property 2: the area under sections of the curve can be used to estimate the cumulative probability of a certain “event” occurring. • Property 3: the previous rules of cumulative probability closely apply even when a set of data is not perfectly normally distributed. • Testing normality • Normal probability plots • Chi-squared • F test Data set • Mining the data • Test data for normality • Conduct appropriate testing and analyses
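Property 2 can be checked numerically. The sketch below, with a mean, standard deviation, and specification limit assumed purely for illustration, estimates the probability of an "event" (a unit exceeding an upper limit) from the area under the normal curve.

```python
from scipy import stats

# Assumed process: mean 100, standard deviation 2 (illustrative values only).
mu, sigma = 100.0, 2.0
usl = 104.0  # hypothetical upper specification limit

# Probability that a single unit exceeds the USL (upper-tail area).
z = (usl - mu) / sigma              # z = 2 standard deviations
p_above = 1 - stats.norm.cdf(z)     # area beyond z under the normal curve

print(f"z = {z:.2f}, P(X > USL) = {p_above:.4f}")   # about 0.0228

# The familiar 68-95-99.73 rule follows the same way:
within_1s = stats.norm.cdf(1) - stats.norm.cdf(-1)  # about 0.6827
print(f"P(within +/-1 sigma) = {within_1s:.4f}")
```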
Capability analysis • The need for capability • Types of capability analysis • Variable output • Attribute output • The method • Long vs. short • Indices of capability • Z shift • Conversion from short- to long-term and vice versa • Additional capability topics • Box-Cox transformation • Nonnormal data (transformable) • Nonnormal data (nontransformable) Attribute measurement system: a measurement system that compares each part to a standard and accepts the part for which this standard is met. • Screen: 100% evaluation of product using inspection techniques (an attribute measurement system). • Screen effectiveness: the ability of the attribute measurement system to properly discriminate good from bad. • Customer bias: operator has a tendency to hold back good product. • Producer bias: operator has a tendency to pass defective product. Purpose of attribute R&R • To assess your inspection or workmanship standards against your customers’ requirements. • To determine if inspectors across all shifts, all machines, etc. use the same criteria to determine “good” from “bad”. • To quantify the ability of inspectors to accurately repeat their inspection decisions. • To identify how well these inspectors are conforming to a “known master,” which includes: • How often operators decide to ship truly defective product • How often operators do not ship truly acceptable product • Discover areas where: • Training is needed • Procedures are lacking • Standards are not defined Attribute R&R — the method Variable gauge R&R • The ideal measurement system will produce “true” measurements every time it is used (zero bias, zero variance). • The study of measurement systems will provide information as to the percent variation in your process data that comes from error in the measurement. • It is also a great tool for comparing two or more measurement devices or two or more operators against one another.
• MSE should be used as part of the criteria required to accept and release a new piece of measurement equipment to manufacturing. • It should be the basis for evaluating a measurement system that is suspected of being deficient. Possible sources of process variation Precision vs. accuracy Basic model Sources of measurement variation • Knowledge to be obtained • How big is the measurement error? • What are the sources of measurement error? • Is the tool stable over time? • Is the tool capable for this study? • How do we improve the measurement system? Accuracy-related terms • Accuracy — extent to which the average of the measurements deviates from the true value; the difference between the observed average value of measurements and the master value. The master value is an accepted, traceable reference standard (e.g., NIST). • True value — theoretically correct value (NIST standards). • Bias — the average of measurements differs from the true value by a fixed amount; effects include: • Operator bias — different operators get detectably different averages for the same measurements on the same part. • Machine bias — different machines get detectably different averages for the same measurements on the same parts. Precision-related terms • Precision — total variation in the measurement system. Measure of natural variation of repeated measurements. Typical terms associated with precision are: random error, spread, test/retest error. • Repeatability — the inherent variability of the measurement device. Variation that occurs when repeated measurements are made of the same variable under similar conditions, same part, same operator, same set-up, same units, same environmental conditions in the short term. It is estimated by the pooled (average) standard deviation of the distribution of repeated measurements. Repeatability is usually less than the total variation of the measurement system. Another way of looking at it is to think of it as the variation between successive measurements of the same part, same characteristic, by the same person using the same instrument. Also known as test–retest error; used as an estimate of short-term measurement variation. • Reproducibility — the variation that results when different conditions are used to make the same measurements with different operators, different set-ups, different test units, different environmental conditions, and long-term measurement variation. It is estimated by the standard deviation of the averages of measurements from different
measurement conditions. Another way of saying this is the difference in the average of the measurements made by different persons using the same or different instrument when measuring the identical characteristic on the same part. • Linearity — a measure of the difference in accuracy or precision over the range of instrument capability. • Discrimination — the number of decimal places that can be measured by the system. Increments of measure should be at least one tenth of the width of the product specification or process variation. • Stability (over time) — the distribution of measurements remains constant and predictable over time for both mean and standard deviation. No drifts, sudden shifts, cycles, etc. To ensure stability, make sure you monitor and analyze control charts. • Correlation — a measure of linear association between two variables, e.g., two different measurement methods or two different laboratories. P/T and P/TV • “P to TV” is used to qualify a measurement system as capable of measuring to the total observed process variation. • “P to T” is used to qualify a measurement system as capable of measuring to a given product specification. Uses of P/T and P/TV (percent R&R) • The P/T ratio is the most common estimate of measurement system precision. This estimate may be appropriate for evaluating how well the measurement system can perform with respect to the specification. Specifications, however, may be too tight or too loose. • Generally, the P/T ratio is a good estimate when the measurement system is used only to classify production samples. Even then, if process capability (Cpk) is not adequate, the P/T ratio may give you a false sense of security. • The P/TV (percent R&R) is the best measure for the black belt. This estimates how well the measurement system performs with respect to the overall process variation. Percent R&R is the best estimate when performing process improvement studies. Care must be taken to use samples representing the full process range. • The method — calculating percent R&R • By operator — shows if any operator had higher or lower readings (on average) than the others. • By part — shows the ability of all operators to obtain the same readings for each part. Also shows the ability of a measurement system to distinguish between parts (amount of overlap). Gauge R&R, Xbar, and R chart Percent R&R vs. capability • Handling poor gauge capability. If a dominant source of variation is repeatability (equipment), you need to replace, repair, or otherwise adjust the equipment. If, in consultation with the equipment vendor or upon searches of industry literature, you find that the gauge technology
that you are using is “state-of-the-art” and is performing to its specifications, you should still fix the gauge. One temporary solution to this problem is to use signal averaging. If a dominant source of variation is the operator (reproducibility), you must address this via training and definition of the standard operating procedure. You should look for differences between operators to give you some indication as to whether it is a training, skill, and/or procedure problem. If the gauge capability is marginal (as high as 30% of study variation) and the process is operating at a high capability (Ppk greater than two), then the gauge is probably not hindering you and you can continue to use it. • Controlling repeatability. Note: if you want to decrease your gauge error, take advantage of the standard error — the error of an average of n repeated measurements shrinks by the square root of n, which is the basis of signal averaging. • The signal averaging technique uses other statistical indexes such as: • Effects of P/T and S/N ratios — small S/N increases the time before an out-of-control process is detected by a control chart (refer to Xbar and range). • The effect of P/T on Cpk — large P/T reduces the process Cpk from the true value to some smaller observed value. • The effect of P/T on part assessment — large P/T increases the probability that we will misclassify product as defective when it is really good and vice versa. • The effect of S/N ratio on control chart sensitivity. • The effect of the discrimination index — if the index equals two, only attribute data are available and sample sizes must be larger. If the index is five to ten, then discrimination is finer and sample sizes can be smaller. Calibration steps • Determine if the measurement system needs to be recalibrated. • Determine the minimum number of measurements needed to make this decision. • Take data and make the decision. • If yes, recalibrate system. • Why not just recalibrate? • Normal variation causes the measurement to be slightly different each time it is used. • Recalibration should be done only when the measurements are off by more than the normal variation. • Recalibrating a system when it is not needed can increase the variability in the measurements. Measurement system evaluation questions • Written inspection measurement procedure? • Detailed process map developed? • Specific measuring system and set-up defined? • Trained or certified operators? • Instrument calibration performed in a timely manner? • Tracking accuracy?
• Tracking percent R&R? • Tracking bias? • Tracking linearity? • Tracking discrimination? • Correlation with supplier or customer where appropriate? Measurement system analysis questions • Have you picked the right measurement system? Is this measurement system associated with either critical inputs or outputs? • What do the precision, accuracy, tolerance, P/T ratio, percent R&R and trend chart look like? • What are the sources of variation and what is the measurement error? • What needs to be done to improve this system? • Have we informed the right people of our results? • Who owns this measurement system? • Who owns troubleshooting? • Does this system have a control plan in place? • What is the calibration frequency? Is that frequent enough? • Do identical systems match? Deliverables for Week 2 • Project report • Title page • Problem statement summary page • Problem statement • CTQ • What is the defect? • Initial DPMO • Target DPMO (i.e., 90% reduction/99% reduction stretch) • Team • Benefits of the project (why are we doing this?) • Picture/drawing to allow audience to set reference • Process flow diagram • Definition of the measurement system. What was defined as the measurement of study? How is this linked to the CTQ? • Measurement system validation • Show failures and what was learned • Show analysis and results • Initial capability study • Begin screening factors (C&E, FMEA, multi-vari) • Brief recap/summary • Next steps
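To tie together the P/T and P/TV (percent R&R) indices defined earlier in this week's material, here is a minimal sketch of the arithmetic. The variance components would normally come from a gauge R&R study in the statistics package; every number below is assumed purely for illustration, and the common 6-sigma spread convention is used (some references use 5.15 sigma instead).

```python
import math

# Assumed variance components from a hypothetical gauge R&R study.
var_repeatability = 0.00002    # equipment variation
var_reproducibility = 0.00001  # operator variation
var_part = 0.01000             # true part-to-part variation

var_ms = var_repeatability + var_reproducibility  # measurement system
var_total = var_ms + var_part                     # total observed variation
sigma_ms = math.sqrt(var_ms)

usl, lsl = 10.50, 10.00  # hypothetical specification limits

# P/T: share of the tolerance consumed by measurement error (<= 10% desirable).
p_to_t = 6 * sigma_ms / (usl - lsl)

# P/TV (%R&R): share of total observed variation consumed by measurement error.
p_to_tv = math.sqrt(var_ms / var_total)

print(f"P/T  = {p_to_t:.1%}")    # about 6.6%
print(f"%R&R = {p_to_tv:.1%}")   # about 5.5%
# Both ratios fall under 10%, so this hypothetical gauge would be
# judged adequate for both classification and improvement studies.
```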
WEEK 2 Review week 1 Questions and answers
Project-related questions Open second week session with Six Sigma key questions • What are the key metrics that report on the capability of a process? • How are the key process metrics computed and how should they be displayed? • How do the key metrics relate to each other? • How should the key process metrics relate to the business metrics? • How can the key processes be converted to a base metric that facilitates pooling? • How can the key process metrics be used for troubleshooting? Process performance metrics • Cp and Pp • Cpk and Ppk When to add/subtract Z shift Metric conversion Product benchmarks Controllable and noise variables Planning multi-vari studies — narrow the scope of input variables — leverage KPIVs. • Baseline process capability and stability and multi-vari studies. • Identify obvious assignable causes of variability (shift-to-shift, run-to-run, operator-to-operator). • Provide direction and input for DOE activities. • Multi-vari analysis is a great graphical tool to help visualize how various Xs (both controllable and noise) impact our response Y. • A graphical tool that, through logical subgrouping, analyzes the effects of categorical Xs on continuous Ys. • A method to characterize the baseline capability of a process while either in production mode or via historical data. (If in the production mode, the data used in a multi-vari study is collected for a relatively short period of time (2 weeks to 2 months), though the multi-vari study can continue until the full range of the output variable is observed (from low to high). Categorical Xs are typically used in multi-vari analysis.) Data collection and analysis. Create an appropriate data-sampling plan. Graphically analyze passive or historical data. Identify inputs and outputs. The following tools can be used to identify the inputs and outputs: • C&E matrix • FMEA • Fishbone • Short-term capability • Scatter plots • Correlation • Regression • Boxplots
• Main effects • Interaction plots • ANOVAs, t-tests Inputs that can be classified as attribute in nature. These types of inputs have levels assigned that are arbitrary in nature (operator A–operator B–operator C, or low–high, or machine1–machine2–machine3). We use the results to determine capability, stability, and potential relationships between Xs and Ys. • Classifying variables: controllable (variables we can do something about) and noise (variables we cannot do something about or we choose not to do anything about) • We can classify variables into three main families of variation: • Positional (within-piece variation): • Variation within a single unit of production • Variation across a single unit containing many individual parts (e.g., semiconductor wafer with many parts) • Variation by position in a batch process (cavity-to-cavity within one mold shot) • Cyclical (consecutive piece-to-piece variation): • Variation among consecutive pieces or units of production • Variation among groups of pieces (cavity-to-cavity between consecutive mold shots) • Batch-to-batch differences (within consecutive batches) • Lot-to-lot differences (within consecutive lots) • Temporal (time-to-time variation): • Variation not accounted for by positional or cyclical • Shift-to-shift, day-to-day, week-to-week, setup-to-setup Performing a multi-vari analysis Step 1: plan the multi-vari • Identify the major areas of variation. • Show them on the project organization chart. • Decide how to take data in order to distinguish these major sources of variation. • Decide ahead of time how to graph the data so that possible variation will be visible. Step 2: take data in order of production (not randomly). Data should include the entire range of variation. Step 3: take a representative sample (minimum of 3) per group. Step 4: analyze the results • Is there an area that shows the greatest source of variation? • Are there cyclic or unexpected non-random patterns of variation? • Are the nonrandom patterns restricted to a single sample or more? • Are there areas of variation which can be eliminated (e.g., shift-to-shift variation)?
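A minimal sketch of the logical-subgrouping idea behind these steps: compare how much variation shows up within pieces (positional), piece to piece (cyclical), and shift to shift (temporal). The data, factor structure, and effect sizes below are all invented; a real study would plot the same subgroups on a multi-vari chart in the statistics package.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical study: 2 shifts x 5 consecutive parts x 3 positions per part.
shift_offset = np.array([-2.0, 2.0]).reshape(2, 1, 1)  # temporal family
part_effect = rng.normal(0, 0.5, size=(2, 5, 1))       # cyclical family
within_part = rng.normal(0, 0.3, size=(2, 5, 3))       # positional family
y = 50 + shift_offset + part_effect + within_part      # measured output

# Crude look at each family of variation from the logical subgroups.
positional = y.std(axis=2, ddof=1).mean()              # within piece
cyclical = y.mean(axis=2).std(axis=1, ddof=1).mean()   # piece to piece
temporal = y.mean(axis=(1, 2)).std(ddof=1)             # shift to shift

print(f"positional ~ {positional:.2f}")
print(f"cyclical   ~ {cyclical:.2f}")
print(f"temporal   ~ {temporal:.2f}")
# The shift-to-shift family dominates here (it was simulated that way),
# so the study would chase what changes between shifts before any DOE.
```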
Sampling Methods • Simple random sample — if all possible samples of n experimental units are equally likely, the procedure to use is a simple random sample. The characteristics of simple random sampling are: • Unbiased — every experimental unit has the same chance of being chosen • Independence — the selection of one experimental unit is not dependent on the selection of another • Stratified sample — divide the population into homogeneous groups and randomly sample from within each group. • Cluster sample — divide the population into smaller groups (clusters); a random sample of the clusters is then taken. • Systematic sample — start with a randomly chosen unit and sample every kth unit thereafter. Sampling plan — a good sampling plan will capture all relevant sources of noise variability. A typical sample size rule of thumb is 30. • Lot-to-lot • Batch-to-batch • Different shifts, different operators, different machines, etc. Analyze the data using some of the techniques listed below: • Multi-vari chart • Attribute inputs • Box plots • Pareto • Histograms • Box plots by each variable • Marginal plots • Interval plots • Main effects plots • Interaction plots • Marginal plots • Scatter plots, correlation, regression Correlation and simple linear regression • Overview • Correlation — a measure of the strength of association between two quantitative variables (e.g., pressure and yield). It measures the degree of linearity between two variables without assuming that one depends on the other. As useful as the correlation is, be careful in assuming causality. • Correlation coefficients — the correlation coefficient, r, always lies between –1 and +1. • Correlation and causality • Scatter plots • Fitted line plots • Simple regression — regression line; best fitted line; model building • Comparison of covariances
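A sketch of the correlation and fitted-line ideas just listed, using invented pressure/yield pairs; SciPy's linregress is shown only as a generic equivalent of the fitted-line-plot routine in the course software.

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations: line pressure (X) and yield % (Y).
pressure = np.array([40, 45, 50, 55, 60, 65, 70, 75], dtype=float)
yield_pct = np.array([81.2, 82.0, 83.5, 84.1, 85.8, 86.0, 87.3, 88.1])

r = np.corrcoef(pressure, yield_pct)[0, 1]  # correlation, -1 <= r <= +1

# Simple linear regression: best-fit line yield = b0 + b1 * pressure.
fit = stats.linregress(pressure, yield_pct)

print(f"r = {r:.3f}, r^2 = {r**2:.3f}")
print(f"yield ~= {fit.intercept:.2f} + {fit.slope:.3f} * pressure")
# A strong r supports association only; causality has to come from
# process knowledge or a designed experiment, as cautioned above.
```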
The central limit theorem • Definition of the central limit theorem — if one pulls all possible random samples of size “n” from a population of individuals with known mean (µ) and standard deviation (σ), then the average of the sample means will be the population mean. In addition, the standard deviation of the sample averages will be approximated by the standard error of the mean (SE = σ/√n). The central limit theorem allows us to assume that the distribution of sample averages will approximate the normal distribution if “n” is sufficiently high (n > 30 for unknown distributions). The central limit theorem also allows us to assume that the distributions of sample averages of a normal population are themselves normal, regardless of sample size. The SE mean shows that as sample size increases, the standard deviation of the sample means decreases. The standard error will help us calculate confidence intervals. • Significance of confidence intervals — statistics such as mean and standard deviation are only estimates of the population parameters (µ and σ) and are based on only one sample. Because there is variability in these estimates from sample to sample, we can quantify our uncertainty using statistically based confidence intervals (CIs). Most of the time, we calculate 95% CIs. This means that approximately 95 out of 100 CIs will contain the population parameter, or we are 95% certain the population parameter is inside the interval. Population vs. sample • Comparison of histograms • Parametric CIs — the parametric CIs assume a t-distribution of sample means and use this to calculate CIs. What is the t-distribution? The t-distribution is a family of bell-shaped distributions that are dependent on sample size. The smaller the sample size, the wider and flatter the distribution. • CI for the mean • CIs for proportions
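The standard-error and confidence-interval statements above can be made concrete. The sketch below builds a 95% CI for a mean using the t-distribution (SE = s/√n); the sample itself is simulated and purely illustrative.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of n = 25 cycle-time measurements (minutes).
rng = np.random.default_rng(3)
x = rng.normal(loc=12.0, scale=1.5, size=25)

n = len(x)
xbar = x.mean()
s = x.std(ddof=1)
se = s / np.sqrt(n)  # standard error of the mean

# 95% CI for the mean: xbar +/- t(0.975, n-1) * SE.
t_crit = stats.t.ppf(0.975, df=n - 1)
ci_low, ci_high = xbar - t_crit * se, xbar + t_crit * se

print(f"xbar = {xbar:.2f}, SE = {se:.3f}")
print(f"95% CI for the mean: ({ci_low:.2f}, {ci_high:.2f})")
# About 95 out of 100 intervals built this way would contain the true
# population mean; quadrupling n would halve the SE and the interval.
```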
HYPOTHESIS TESTING INTRODUCTION Hypothesis testing is a stepping stone to ANOVA and DOE. Hypothesis testing employs data-driven tests that assist in the determination of the vital few Xs. (Black belts use this tool to identify sources of variability and establish relationships between Xs and Ys.) To help identify the vital few Xs, historical or current data may be sampled. (Passive: you have either directly sampled your process or have obtained historic sample data. Active: you have made a modification to your process and then sampled.)
PARAMETERS VS. STATISTICS
Hypothesis testing description — statistics communicate information from data; however, they are not a substitute for professional judgment. Quite often, statistical testing provides objective solutions to questions that are traditionally answered
subjectively through the practical question of whether there is a real difference between _____ and _____. A practical process problem is translated into a statistical hypothesis in order to answer this question. In hypothesis testing, we use relatively small samples to answer questions about population parameters. Therefore, there is always a chance that we selected a sample that is not representative of the population and as a consequence, there is always a chance that the conclusion obtained is wrong. However, with some assumptions, inferential statistics allow us to estimate the probability of getting an “odd” sample. This lets us quantify the probability (P-value) of a wrong conclusion.
FORMULATING HYPOTHESES Tests of significance — significance level (α — alpha) and the risk of missing a real difference (β — beta). Customary confidence levels are 90, 95, or 99%, corresponding to alpha values of 0.10, 0.05, and 0.01. The alpha level requires two things: a) an assumption of no difference (Ho), and b) a reference distribution of some sort. Hypothesis testing roadmap
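A minimal sketch of the logic just described: translate a practical question ("is there a real difference between line A and line B?") into Ho (no difference) and quantify the P-value of a wrong conclusion. The data are invented, and SciPy's two-sample t-test stands in for the course software.

```python
import numpy as np
from scipy import stats

# Hypothetical defect-rate samples (%) from two production lines.
line_a = np.array([2.1, 2.4, 1.9, 2.6, 2.2, 2.5, 2.0, 2.3])
line_b = np.array([2.8, 3.1, 2.6, 3.0, 2.9, 3.3, 2.7, 3.2])

# Ho: mean(A) = mean(B); Ha: the means differ. Alpha = 0.05.
t_stat, p_value = stats.ttest_ind(line_a, line_b, equal_var=True)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject Ho: the lines differ at the 95% confidence level.")
else:
    print("Fail to reject Ho: no detectable difference.")
# The p-value is the probability of seeing a difference at least this
# large if Ho were true -- the chance of a wrong 'difference' conclusion.
```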
WEEK 3 Review week 1 Review week 2 General questions ANOVA review (one-way, F-test). The F-test is a signal-to-noise ratio: the higher the F statistic, the lower the probability that the observed differences occurred by chance. When there are only two levels, the results of the one-way ANOVA are identical to the t-test; the relationship is F = t². Questions about the project The mathematical model for a one-way ANOVA • Comments about single-factor designs — the output is generally measured on an interval or ratio scale (yield, temperature, volts, etc.). The output variable can be discrete or interval/ratio. The input variable is known as a factor. If the factor is continuous by nature, it must be classified into subgroups. For example, we could have a measure of line pressure from low to high values. We could do a median split and classify the factor into two levels, low and high. • Diagnostic testing — residual analysis — ANOVA assumes the errors are normally distributed with a mean = 0 and a constant sigma. We can test this by reviewing the residuals, which are the differences between each score and its sample mean. (In a computer statistical package, this is calculated by asking for it.) Practical significance is designated as epsilon-squared. • Test of equal variance • ANOVA table • F-distribution
• Main effects and interval plots • Pooled standard deviation • Homogeneity of variance • Barriers to effective designed experiments • Execution strategy
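A sketch of the one-way ANOVA review above, including the F = t² identity for two levels; the yields are invented, and generic SciPy calls take the place of the course software.

```python
import numpy as np
from scipy import stats

# Hypothetical yields (%) at three machine settings (the factor levels).
low = np.array([71.0, 72.5, 70.8, 73.1, 71.9])
mid = np.array([74.2, 75.0, 73.8, 74.9, 75.3])
high = np.array([78.1, 77.5, 79.0, 78.4, 77.9])

# One-way ANOVA: Ho = all level means are equal.
f_stat, p_value = stats.f_oneway(low, mid, high)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")

# With only two levels, the one-way ANOVA reduces to the t-test: F = t^2.
t_stat, _ = stats.ttest_ind(low, mid, equal_var=True)
f_two, _ = stats.f_oneway(low, mid)
print(f"t^2 = {t_stat**2:.2f}  vs  F = {f_two:.2f}")  # identical

# Residuals (each score minus its group mean) for the diagnostic checks.
residuals = np.concatenate([g - g.mean() for g in (low, mid, high)])
print(f"residual mean = {residuals.mean():.3f}")  # should be ~0
```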
DOE defined: a systematic set of experiments that permits one to evaluate the effect of one or more factors without concern about extraneous variables or subjective judgments. It begins with the statement of the experimental objective and ends with the reporting of the results. It may often lead to further experimentation. It is the vehicle of the scientific method, giving unambiguous results that can be used for inferring cause and effect. Strategy of Experimentation • Define the problem • Establish the objective • Select the output — responses (Ys) • Select the input factors (Xs) • Choose the factor levels • Select the experimental design and sample size • Collect and analyze the data • Draw conclusions • Achieve the objective; the objective of all experimental studies is to determine: • The effects of material variation on product reliability. • The sources of variation in a critical process. • The effects of less expensive materials on product performance. • The impact of operator variation on the product. • The cause-effect relationships between process inputs and product characteristics. • The equation that models your process. Barriers to Effective Experimentation • Define the output variables — is the output qualitative or quantitative? (Objective: centering, variation improvement, or both?) • What is the baseline (mean and sigma)? • Is the output under statistical control? • Does the output vary over time? • How much change in the output do you want to detect? • Is the output normally distributed? • Is the measurement system adequate? • Do you need multiple outputs? Problem statements include: • A complete and detailed description of the problem • Identification and understanding of all operational definitions (relates to the problem and contains no solutions or conclusions)
• As many specifics as possible • No causes Purpose and function include: • A clearly defined and quantified problem • Definition of the measurement source to be used • Identification of the negative effects of the current performance and their relationship to the customer CTQs Questions to establish the objective of the experiment • What do you want to discover by conducting the experiment? • Are you trying to establish the relationship between the input factors (Xs) and the output (response — Y)? • Are you trying to establish the vital few Xs from the trivial many (possible factors)? • Are you interested in knowing if several input factors act together to influence the output (Y)? • Are you trying to determine the optimal settings of the input factors? Selecting inputs (Xs) and the output (response — Y). Factor selection — typical tools for narrowing down the list are: • FMEA/control plans or DCP • Cause-and-effect matrix • Multi-vari and hypothesis testing • Process mapping • Brainstorming • Literature review • Engineering knowledge • Operator experience • Scientific theory • Customer/supplier input • Global problem solving • Parameter design Choosing the levels for each factor • The levels of an input factor are the values of the input factor (X) being examined in the experiment (not to be confused with the output [Y]). To select the appropriate levels, two items are of concern and serve as the basis for the definition of the levels: a) engineering knowledge and b) theoretical knowledge. • For a quantitative (variables data) factor like temperature: if an experiment is to be conducted at two different temperatures, then the factor temperature has two levels. • For a qualitative (attributes data) factor like cleanliness: if an experiment is to be conducted using clean and not clean, then the factor cleanliness has two levels. Guidelines for setting input variable levels • To determine the vital few inputs from a large number of variables, use screening experimentation.
• Set “bold” levels at extremes of current capabilities. If we vary the input to extremes, we will be assured of seeing an effect on the output, if there is one. Remember that this may exaggerate the variation or it may overlook the nonlinearity, if present. Once critical inputs are identified, reduced spacing of the levels is used to identify interactions among inputs. This approach usually leads to a series of sequential experiments. Response surface methods Full factorials with replication Full factorials with repetition Full factorials without replication or repetition Screening or fractional designs Ensuring internal and external validity • Internal validity. Randomization of experimental runs “spreads” the noise across the experiment. Blocking ensures noise is part of the experiment and can be directly studied. • Holding noise variables constant eliminates the effect of that variable but limits broad inferences. • External validity. Include representative samples from possible noise variables. • Threats to statistical validity • Low statistical power: sample size inappropriate. • Loose measurement systems inflate variability of measurements. • Random factors in the experimental setting inflate variability of measurement. • Randomization and sample size prevent threats. Planning questions • What is the measurable objective? • What will it cost? • How will we determine sample sizes? • What is our plan for randomization? • Have we talked to internal customers about this? • How long will it take? • How are we going to analyze the data? • Have we planned a pilot run? • Where is the proposal? • DOE worksheet Performing the experiment • Document initial information. • Verify measurement systems. • Ensure baseline conditions are included in the experiment. • Make sure clear responsibilities are assigned for proper data collection. • Always perform a pilot run to verify and improve data-collection procedures! • Watch for and record any extraneous sources of variation. • Analyze data promptly and thoroughly.
• Graphical • Descriptive • Inferential (Always run one or more verification runs to confirm your results; go from narrow to broad inference.) General advice • Planning sheet can be more important than running the experiment. • Make sure you have tied potential business results to your project. • Focus on one experiment at a time. • Do not try to answer all the questions in one study; rely on a sequence of studies. • Use two-level designs early. • Spend less than 25% of budget on the first experiment. • Always verify results in a follow-up study. • It is acceptable to abandon an experiment. • A final report is a must!! • Finally, push the envelope with robust levels, but think of the safety of the people and equipment. Factorial experiments • Purpose • To understand the advantages of factorial experiments vs. one factor at a time. • To determine how to analyze general factorial experiments. • To understand the concept of statistical interaction. • To analyze two- and three-factor full factorial experiments. • To use diagnostic techniques to evaluate the “goodness of fit” (residuals) of the statistical model. • To identify the most important or critical factors in the experiments. • Full factorial — one-factor-at-a-time (OFAT) and interactions. • Factorial experiments — advantages • Are more efficient than OFAT experiments. • Allow the investigation of the combined effects of factors (interactions). • Cover a wider experimental region than OFAT studies. • Identify critical factor inputs. • Are more efficient in estimating effects of both input and noise variables on the output. • 2^k factorials • 3^k factorials • General linear model (GLM) — What do we do when our full factorial design is unbalanced due to lost data or the inability to complete all of the experimental runs? This is not an issue, as we can use the GLM to analyze the results. • Analyzing interaction plots • Mixed models (fixed and random factors) permitted • ANOVA plus unbalanced or nested designs
• Full factorial experiments — typically used to optimize a process • Steps to conduct a full factorial experiment • Step 1: state the practical problem and objective using the DOE worksheet. • Step 2: state the factors and levels of interest. • Step 3: select the appropriate sample size. • Step 4: create a computer experimental data sheet with the factors in their respective columns. Randomize the experimental runs in the data sheet. The software will create the factorial design. • Step 5: conduct the experiment. • Step 6: construct the ANOVA table for the full model (balanced or unbalanced). • Step 7: review the ANOVA table and eliminate effects with p-values above 0.05. Run the reduced model to include those p-values that are deemed significant. • Step 8: analyze the residuals of the reduced model to ensure you have a model that fits. Calculate the fits and residuals for significance. • Generate model • Run verification experiment • Fractional factorials — used primarily for screening factors • Steps for conducting a fractional factorial • Steps 1–6 same as for full factorial • Step 7: analyze the residual plots to ensure we have a model that fits (this step was run in Step 5). • Step 8: investigate significant interactions (p-value < 0.05). Assess the significance of the highest-order interactions first. For three-way interactions unstack the data and analyze. • Once the highest-order interactions are interpreted, analyze the next set of lower-order interactions. • Step 9: investigate significant interactions (p-value < 0.05). Evaluate main-effects plot and cube plots. • Step 10: state the mathematical model obtained. If possible, calculate the epsilon squared and determine the practical significance. • Step 11: translate the mathematical model into process terms and formulate conclusions and recommendations. • Step 12: replicate optimum conditions. Plan the next experiment or institutionalize the change. Taguchi experimentation • Loss function • Ideal function • P diagram • Orthogonal arrays (OAs) • Parameters (factors)
• Noise (factors) • Interaction • Main effects • Signal to noise • Daniel plots
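Before the Week 4 material, a minimal sketch of how main effects and interactions fall out of a two-level full factorial: each effect is the average response at a factor's high level minus the average at its low level. The 2^3 design is standard; the responses are invented for illustration.

```python
import numpy as np
from itertools import product

# All eight runs of a 2^3 design in coded units (-1 = low, +1 = high).
design = np.array(list(product([-1, 1], repeat=3)))  # columns: A, B, C
# Hypothetical measured responses, one per run (no replication).
y = np.array([60.0, 72.0, 54.0, 68.0, 52.0, 83.0, 45.0, 80.0])

for i, name in enumerate(["A", "B", "C"]):
    col = design[:, i]
    effect = y[col == 1].mean() - y[col == -1].mean()
    print(f"main effect {name}: {effect:+.1f}")

# Two-factor interaction: same contrast on the elementwise product column.
ab = design[:, 0] * design[:, 1]
ab_effect = y[ab == 1].mean() - y[ab == -1].mean()
print(f"interaction AB: {ab_effect:+.1f}")
# Large effects relative to noise are the candidates for the reduced
# model; a package would give the matching ANOVA table and p-values.
```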
WEEK 4 Review week 1 Review week 2 Review week 3 General questions Questions, concerns about project Week 4 potential project deliverables • Project definition • Project metrics • Process optimization • PLEX, EVOP, RSM, multiple regression • Process controls • Statistical product monitors • Statistical process controls • Document and sustain the gains • Update FMEA • Update control plan • 5S the immediate project area • Quality manual and related documentation • Write the final report • Review of designed experiments
FRACTIONAL FACTORIALS Why do fractional factorial experiments? As the number of factors increases, so does the number of runs: a 2 × 2 factorial = 4 runs; a 2 × 2 × 2 factorial = 8 runs; a 2 × 2 × 2 × 2 factorial = 16 runs; and so on. If the experimenter can assume higher-order interactions are negligible, it is possible to do a fraction of the full factorial and still get good estimates of low-order interactions. The major use of fractional factorials is screening: studying a relatively large number of factors in a relatively small number of runs. Screening experiments are usually done in the early stages of a process improvement project. Factorial experiments. Successful factorials are based on the sparsity-of-effects principle: systems are usually driven by main effects and low-order interactions. Sequential experimentation Designing a fractional factorial
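As a sketch of how a fraction is designed, the standard construction of a 2^(4-1) half fraction builds the fourth factor from the three-way interaction of a full 2^3 (defining relation I = ABCD), so main effects are aliased only with three-factor interactions. The code below shows that construction generically; factor names are placeholders.

```python
import numpy as np
from itertools import product

# Start from the full 2^3 design in factors A, B, C (coded -1/+1).
base = np.array(list(product([-1, 1], repeat=3)))

# Generate the fourth factor from the three-way interaction: D = ABC.
d = base[:, 0] * base[:, 1] * base[:, 2]
design = np.column_stack([base, d])  # 8 runs instead of 16

print("run   A   B   C   D=ABC")
for i, row in enumerate(design, start=1):
    print(f"{i:>3} " + " ".join(f"{v:+d}".rjust(3) for v in row))

# Alias check: the D column is identical to the ABC column, so the
# estimate of D is confounded with ABC (and A with BCD, AB with CD, ...).
assert np.array_equal(design[:, 3], base[:, 0] * base[:, 1] * base[:, 2])
```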
What is PLEX? PLEX = PLant EXperimentation; a process-improvement tool for online use in full-scale production; uses simple factorial two-level designs in two or three factors; usually requires several iterations of experimental design, analysis, and interim improvements. The goal is to minimize disruption to production but make big enough changes to quickly see effects on output variables. • Prerequisites for PLEX. • Good measurement system in place. • With little or no replicate runs, we want to minimize the effect of measurement error. • May require repeat measurements. • Adequate technical supervision to keep process controlled and monitored. • Extra attention to safety requirements and to avoiding upsets. • Stay within operating region. • Maintain environmental controls. • Cooperation of several functions required. • Why and when do we use PLEX? • Strong need to increase and/or improve production. • May have a sold-out product line. • Product line may have poor process capability. • Offline studies (lab or pilot scale) are not practical or meaningful. • Key process input variables (Xs) are not well determined, but we have the resources only to investigate a few at a time. A series of factorial experiments is required. • Beware, interactions may be obscured. • Would like to “optimize” (or reoptimize) the process while in production mode. PLEX process improvement roadmap • Form process improvement team. • Assess measurement system, e.g., gauge R&R. • Identify Xs and Ys, e.g., multi-vari, cause and effect, FMEA. • Choose two to four factors for first DOE. • Choose safe operating ranges for each factor. Ranges should be wide enough to reasonably see an active effect with no replication. • Set up 2^k factorial design with optional, but recommended, center points. • Consider repeating one or more conditions. One approach is to run center point at beginning, middle, and end of design as a check for process drift or capability. • Prior to running design look at each treatment combination to see if there is a potential failure mode or unsafe condition. • Set up sampling plan. • Plan for technical supervision to minimize upset potential. • Randomize order of running, if practical. Otherwise, choose a run sequence that reduces number of changes. • Run process condition long enough to achieve steady state.
• Return to standard conditions until DOE results are analyzed. • Based upon results, suggest interim process changes or subsequent DOEs or small confirmatory studies. • Continue until all Xs are investigated and process is optimized. EVOP — EVolutionary OPerations • What is EVOP? A process-improvement tool used while a process is running in the production mode for the optimization of plant performance; a method that uses 2² or 2³ factorials with replicates and center points; empowers operators to conduct experiments with minimal engineering support during normal operations; each experimental run is called a CYCLE. One cycle is one pass through the following settings: (0,0) => (1,1) => (1,–1) => (–1,–1) => (–1,1); eliminate randomization to minimize disruption and document effect estimates at the end of each cycle. Cycles continue in the hope of collecting “sufficient evidence” of significant change in the Y for the various levels of X. Each set of cycles is called a phase. When enough data have been collected through cycles to identify a state of improved operations, the phase is complete. The results of each phase determine the new settings for subsequent phases of EVOP. Continue phases until X settings are optimized. Data from phases estimate a “response surface.” • Why use EVOP? The goal is to establish the settings of x1, x2, x3,… in the mathematical relationship: Y = f(x1, x2, x3,…) so as to optimize the process; provides information on process optimization with minor interruption to production; empowers operators and manufacturing personnel and is a cost-effective method to employ continual improvement. • How to apply EVOP: • Step 1: what is the problem to be solved? • Step 2: establish the experimental strategy. • Define Ys/Xs to be studied. • Select variable settings for phase I. • Determine maximum number of cycles for phase I. • Step 3: collect and analyze data during phase I, display on an information board to determine steps for phase II. • Step 4: repeat steps 2 and 3 for successive phases. • Step 5: implement optimal settings for Xs as S.O.P. • Step 6: rerun EVOP every 6 months to ensure optimal settings are maintained. Response surface methodology (RSM) • What is RSM? Once significant factors are determined, RSM leads the experimenter rapidly and efficiently to the general area of the optimum settings (usually using a linear model). The ultimate RSM objective is to determine the optimum operating conditions for the system or to determine a region of the factor space in which the operating specifications are satisfied (usually using a second-order model). Furthermore, response surfaces are used to optimize the results of a full
factorial DOE and create a second-order model if necessary. Therefore, RSM is good for a) determining average output parameters as functions of input parameters and b) optimizing process and product design. • Response surface: the surface represented by the expected value of an output modeled as a function of significant inputs (variable inputs only): expected (Y) = f(x1, x2, x3,…xn) • Method of steepest ascent or descent: a procedure for moving sequentially along the direction of the maximum increase (steepest ascent) or maximum decrease (steepest descent) of the response variable using the following first-order model: • Y (predicted) = b0 + Σbi Xi • Region of curvature: the region where one or more of the significant inputs will no longer conform to the first-order model. Once in this region of operation most responses can be modeled using the following fitted second-order model: • Y (predicted) = b0 + Σbi Xi + Σbii Xi² + Σbij Xi Xj • Central composite design: a common DOE matrix used to establish a valid second-order model. • Coded variables: variables that are assigned arbitrary levels in a DOE study (–1, 1, A, B) • Uncoded variables: variables that are assigned process-specific levels in an RSM study (10V, 20V) Regression • Regression and correlation • Use correlation to measure the strength of linear association between two variables, especially when one variable does not depend on the other. • Use correlation to benchmark equipment against a standard or another similar piece of equipment. • Use regression to predict one variable from another (it may be easier and more cost-efficient). • Use regression to provide evidence that key input variables explain the variation in the response variable or to determine whether different input variables are related to one another. Correlation limitations • Correlation explores linear association. It does not imply a cause-and-effect relationship. • Two variables may be perfectly related in a manner other than linear, and the correlation coefficient will be close to zero. For example, the relationship could be curvilinear. This emphasizes the importance of plots. • The linear association between two variables may be due to a third variable not under consideration. Sound judgment and scientific knowledge are necessary to interpret the results and validity of correlation analysis.
• Some statisticians argue that correlation analysis should only be used when no dependency exists, i.e., when it is not clear which variable depends on the other.
• In correlation analysis, it is assumed that both the X and Y variables are random, i.e., X is not fixed to study the dependency of Y.
Linear regression uses — regression quantifies the relationship between a response variable and one or more predictor variables. Four general uses are:
• Prediction: the model is used to predict the response variable of interest, especially when this response is difficult or expensive to measure. Emphasis is not given to capturing the role of each input variable with strict precision.
• Variable screening: the model is used to detect the importance of each input variable in explaining the variation in the response. Important variables are kept for further study.
• System explanation: the model is used to explain how a system works. Finding the specific role of each input variable is essential in this case. Various models that define different roles for the inputs are typically in competition.
• Parameter estimation: the model is used primarily to find the specific ranges, sizes, and magnitudes of the regression coefficients.
Linear regression assumptions
Simple regression — fitted line plot
Interpreting the output
Regression — residual plots
Simple polynomial regression
Interpreting the results
Assessing the predictive power of the model
Matrix plots — scatter plots with many Xs
Correlation with many Xs
The output — R²
Coefficient of determination (r²)
Multiple regression — beware of multicollinearity
When to use multiple regression — when process or noise input variables are continuous and the output is continuous, multiple regression can be used to investigate the relationship between the Xs (process and/or noise) and the Ys.
Three types of multiple regression
(A short regression and correlation sketch follows below.)
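To make the distinction between correlation and regression concrete, here is a minimal Python sketch (not from the text; the paired data values are invented for illustration). It computes the correlation coefficient and then fits a simple linear regression:

import numpy as np
from scipy import stats

# Hypothetical paired observations: oven temperature (X) and part hardness (Y)
x = np.array([150.0, 160.0, 170.0, 180.0, 190.0, 200.0])
y = np.array([52.1, 54.0, 55.8, 58.2, 59.9, 62.3])

# Correlation: strength of linear association (no dependency assumed)
r, p_value = stats.pearsonr(x, y)

# Regression: predict Y from X using the fitted line Y = b0 + b1*X
fit = stats.linregress(x, y)

print(f"r = {r:.3f}, r^2 = {r**2:.3f}, p = {p_value:.4f}")
print(f"fitted line: Y = {fit.intercept:.2f} + {fit.slope:.4f}*X")

Note that with a single predictor the coefficient of determination r² is simply the square of the correlation coefficient; with many Xs (multiple regression), R² generalizes the same idea, but multicollinearity among the Xs must be checked.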
What is a quality system? A quality system is an organization's agreed-upon method of doing business. It is not to be confused with a set of documents that are meant to satisfy an outside auditing organization (i.e., ISO 900x). This means a quality system represents the actions, not the written words, of an organization. Typical elements of a quality system are:
• Quality policy
• Organization for quality (does not mean quality department!)
• Management review of quality
• Quality planning (how to launch and control products and processes)
• Design control
• Data control
• Purchasing
• Approval of materials for ongoing production
• Evaluation of suppliers
• Verification of purchased product (does not mean incoming inspection!)
• Product identification and traceability
• Process control
• Government safety and environmental regulations
• Designation of special characteristics
• Preventative maintenance
• Process monitoring and operator instructions
• Preliminary capability studies (how to turn on a process)
• Ongoing process performance requirements (how to run a process)
• Verification of setups
• Inspection and testing
• Control of inspection, measuring, and test equipment
• Calibration
• Measurement system analysis
• Control of nonconforming product
• Corrective and preventative action
• Handling, storage, packaging, preservation, and delivery
• Control of quality audits (do we do what we say we do?)
• Training
• Service
• Use of statistical techniques
Aspects of control
Quality systems = how we manage?
Evolution of management style
• First generation: management by doing — this is the first, simplest, most primitive approach: just do it yourself. We still use it. "I'll take care of it." It is an effective way to get something done, but its capability is limited.
• Second generation: management by directing — people found that they could expand their capacity by telling others exactly what to do and how to do it: a master craftsman giving detailed directions to apprentices. This approach allows an expert to leverage his or her time by getting others to do some of the work, and it maintains strict compliance with the expert's standards.
• Third generation: management by results — people get tired of you telling them every detail of how to do their jobs and say "Just tell me what you want by when, and leave it up to me to figure out how to do it." So you say, "OK, reduce inventories by 20% this year. I'll reward or punish you based on how well you do. Good luck." All three approaches have appropriate applications in today's organizations.
Are they being used appropriately?
• Third generation sounds logical. Its approach is widely taught and used and is appropriate where departmental objectives have little impact on other parts of the organization.
• Third generation has serious, largely unrecognized flaws we can no longer afford. For example, we all want better figures: higher sales, lower costs, faster cycle times, lower absenteeism, lower inventory. How do we get better figures?
• Improve the system. Make fundamental changes that improve quality, prevent errors, and reduce waste. For example, reducing in-process inventory by increasing the reliability of operations.
• Distort the system. Get the demanded results at the expense of other results. "You want lower inventories? No problem!" Inventories miraculously disappear — but schedule, delivery, and quality suffer, and expediting and premium freight go up. Purchasing says, "You want lower costs? No problem!" Purchase prices go down, saving the company millions on paper, but the savings never show up on the bottom line. Manufacturing struggles with the new parts, increasing rework and overtime. Quality suffers…
• Distort the figures. Use creative accounting. "Oh, we don't count those as inventory anymore … that material is now on consignment from our supplier." The basic system did not change.
Control methods agenda
Integrating with lean manufacturing
Ranking control methods (the strategy)
Types of control methods
Product vs. process
Automatic vs. manual
Control plan
Control methods are a form of Kaizen
Control methods
• SPC
• S.O.P.
• Type III corrective action = inspection: implementation of a short-term containment action that is likely to detect the defect caused by the error condition. Containments are typically audits or 100% inspection.
• Type II corrective action = flag: an improvement made to the process that will detect when the error condition has occurred. This flag will shut down the equipment so that the defect will not move forward.
• Type I corrective action = countermeasure: an improvement made to the process that prevents the error condition from occurring. The defect will never be created. This is also referred to as a long-term corrective action in the form of mistake-proofing or design changes.
• Product monitoring SPC techniques (on Ys)
• Precontrol (manual or automatic)
• X-bar & R or X & MR charts (manual or automatic)
• p and np charts (manual or automatic)
• c and u charts (manual or automatic)
• Process control SPC techniques (on Xs)
• Mistake-proofing (automatic)
• X-bar & R or X & MR (manual or automatic)
• EWMA (automatic)
• Cusum (automatic)
• Realistic tolerancing (manual or automatic)
(A short sketch showing how X-bar & R limits are computed appears after this list.)
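As a concrete illustration of the X-bar & R technique named above, here is a minimal Python sketch (not from the text; the subgroup data are simulated, and A2, D3, and D4 are the standard Shewhart constants for subgroups of five):

import numpy as np

# Simulated data: 20 subgroups of size n = 5 from a stable process
rng = np.random.default_rng(1)
subgroups = rng.normal(loc=10.0, scale=0.2, size=(20, 5))

xbars = subgroups.mean(axis=1)                            # subgroup averages
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)    # subgroup ranges
xbarbar, rbar = xbars.mean(), ranges.mean()

A2, D3, D4 = 0.577, 0.0, 2.114    # standard constants for n = 5

print(f"X-bar chart: LCL = {xbarbar - A2 * rbar:.3f}, CL = {xbarbar:.3f}, UCL = {xbarbar + A2 * rbar:.3f}")
print(f"R chart:     LCL = {D3 * rbar:.3f}, CL = {rbar:.3f}, UCL = {D4 * rbar:.3f}")

The limits reflect the natural (plus or minus three sigma) variability of the process itself; they are never derived from the customer specification limits.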
The control plan is a living document that is used to document all your process control methods. It is a written description of the systems for controlling parts and processes (or services). The control plan, because it is a living document, should be updated to reflect the addition or deletion of controls based on experience gained by producing parts (or providing services). The immediate GOAL of the quality system (QS): During the control phase of the QS methodology: • The team should 5S the project area. • The team should develop standardized work instructions. • The team should understand and assist with the implementation of process and product control systems. • The team should document all of the above and live by what they have documented. The long-term vision of the quality system — the company and all of its suppliers have a quality system that governs the ways in which products and services are bought, sold, and produced. • The company should be 5S in all areas. • The company should develop standardized work instructions and procedures. • The company should understand and assist with the implementation of process and product control systems. • The company should document all of the above and live by what they have documented. Introduction to statistical process control (SPC) What is SPC? SPC as a control method The goal and methodology Advantages and disadvantages Components of an SPC control chart Where to use SPC charts How to implement SPC charts Types of control charts and examples SPC flowchart • SPC is the basic tool for studying variation and using statistical signals to monitor and improve process performance. This tool can be applied to ANY area: manufacturing, finance, sales, etc. Most companies perform
SPC on finished goods (Ys) rather than process characteristics (Xs). The first step is to use statistical techniques to control our company's outputs. It is not until we focus our efforts on controlling those inputs (Xs) that control our outputs (Ys) that we realize the full gain of our efforts to increase quality and productivity and lower costs.
• What is SPC? All processes have natural variability (due to common causes) and unnatural variability (due to special causes). We use SPC to monitor and/or improve our processes. Use of SPC allows us to detect special cause variation through out-of-control signals. These out-of-control signals cannot tell us why the process is out of control, only that it is. Control charts are the means through which process and product parameters are tracked statistically over time. Control charts incorporate upper and lower control limits that reflect the natural limits of random variability in the process. These limits should not be compared to customer specification limits. Based on statistical principles, control charts allow for the identification of unnatural (nonrandom) patterns in process variables. When the control chart signals a nonrandom pattern, we know special cause variation has changed the process. The actions we take to correct nonrandom patterns in control charts are the key to successful SPC usage. Control limits are based on establishing ± 3 sigma limits for the Y or X being measured.
Process improvement and control charts
Benefits of control chart systems
• Proven technique for improving productivity
• Effective in defect prevention
• Prevents unnecessary process adjustments
• Provides diagnostic information
• Provides information about process capability
Control chart roadmap
• Select the appropriate variable to control.
• Select the data-collection point. (Note: if the variable cannot be measured directly, a surrogate variable can be identified.)
• Select the type of control chart.
• Establish a basis for rational subgrouping.
• Determine the appropriate sample size and frequency.
• Determine the measurement method/criteria.
• Determine gauge capability.
• Perform an initial capability study to establish trial control limits.
• Set up forms for collecting and charting data.
• Develop procedures for collection, charting, analyzing, and acting on information.
• Train personnel.
• Institutionalize the charting process.
Control chart types
There are many types of control charts; however, the underlying principles of each are the same. The proper type is chosen utilizing knowledge of both
SPC and your process objectives. The chart type selection depends on:
• Data type: attribute vs. variable
• Ease of sampling; homogeneity of samples
• Distribution of data: normal or non-normal?
• Subgroup size: constant or variable?
• Other considerations
Control charts for variables data
Control charts for attribute data
Analysis of patterns on control charts:
• One point outside the three-sigma limit
• Two of three consecutive points outside the two-sigma limit (same side)
• Four of five consecutive points outside the one-sigma limit (same side)
• Cycles
• Trend
• Stratification
• Seven consecutive points on one side of the center line
Advantages of control chart systems:
• Proven technique for improving productivity.
• Effective in defect prevention.
• Prevent unnecessary process adjustments.
• Provide diagnostic information.
• Provide information about process capability.
• Can be used for both attribute and variable data types.
Disadvantages of control chart systems:
• Everyone must be well trained and periodically retrained.
• Data must be gathered correctly.
• Mean and range/standard deviation must be calculated correctly.
• Data must be charted correctly.
• Charts must be analyzed correctly.
• Reactions to patterns in charts must be appropriate — every time!
Precontrol charts — traditionally, precontrol has been perceived as an ineffective tool, and most quality practitioners still remain skeptical of its benefits. This view originated from the fact that the limits of the three precontrol regions are commonly calculated from the process specifications, thus resulting in overreactions and inducing more variability into a process instead of reducing it. In the Six Sigma breakthrough strategy, precontrol is implemented after the improve phase. The zones are calculated from the process after improvements are made, so its distribution is narrow and tight compared to the specification band. Specification limits are not used in calculating these zones, so we encounter units in the yellow or red zones before actual defects are produced.
Where to use SPC charts:
• When a mistake-proofing device is not feasible
• Identify processes with high RPNs from the FMEA
• Evaluate the "current controls" column of the FMEA to determine the gaps in the control plan. Does SPC make sense?
• Identify processes that are critical based on DOEs.
• Place charts only where necessary based on project scope. If a chart has been implemented, do not hesitate to remove it if it is not value-added.
• Initially, the process outputs may need to be monitored.
• The goal: monitor and control process inputs and, over time, eliminate the need for SPC charts.
Pareto
Histogram
Cause-and-effect diagram
Interpreting the results
Definition of lean manufacturing — a systematic approach to manufacturing which is based on the premise that anywhere work is being done, waste is being generated. A vehicle through which organizations can identify and reduce waste. A manufacturing methodology that will facilitate and foster a living quality system. The goal of lean manufacturing is the total elimination of waste.
Poka-yoke (mistake-proofing)
Planning for waste elimination
• Establish "permanent" control to prevent the recurrence of waste.
• The vision: continuous elimination of waste, moving from the old mindset to the lean mindset:
• Infrequent setups and long runs → quick setups and short runs
• Functional focus → product focus
• If it ain't broke don't fix it → fix it so it does not break
• Specialized workers, engineers, and leaders → multifunctionally skilled people
• Good enough → never good enough, continual improvement
• Run it, repair it → do it right the first time
• Layoff → new opportunities
• Management directs → leaders teach
• Penalize mistakes → retrain
• Make the schedule → make quality a priority
There are seven elements of waste; they are waste of:
• Correction
• Overproduction
• Processing
• Conveyance
• Inventory
• Motion
• Waiting
The first step toward waste elimination is identifying it. Black belt projects should focus efforts on one or more of these areas.
5S workplace organization — to ensure your gains are sustainable, you must start with a firm foundation. 5S standards are the foundation that supports all the phases of lean manufacturing. The system can only be as strong as the foundation it is built on. The foundation of a production system is a clean and safe work environment. Its strength is contingent upon the employee and company commitment to maintaining it. (As a black belt you set the goals high and accept nothing less. Each operator must understand that maintaining these standards is a condition of their employment.)
Foundation of lean manufacturing — 5S overview
1. Sorting (decide what is needed). To sort out necessary and unnecessary items. To store oft-used items in the work area, store infrequently used items away from the work area, and dispose of items that are not needed.
2. Storage (arrange needed items; straighten up the workplace). To arrange all necessary items. To have a designated place for everything. A place for everything and everything in its place.
3. Shining (sweeping and cleanliness). To keep your area clean on a continuing basis.
4. Standardize. To maintain the workplace at a level that uncovers and makes problems obvious. To continuously improve the plant through ongoing assessment and action.
5. Sustaining (training and disciplined culture). To maintain our discipline, we need to practice and repeat until it becomes a way of life.
Benefits of 5S implementation
• A cleaner workplace is a safer workplace.
• Contributes to how we feel about our product, our process, our company, and ourselves.
• Provides a customer showcase to promote our business.
• Product quality will improve, especially through the reduction of contaminants.
• Efficiency will increase.
Some 5S focusing tools
• "Red tag" technique (visual clearing up). This is a vital clearing-up technique. As soon as a potentially unnecessary item is identified, it is marked with a red tag so that anybody can see clearly what may be eliminated or moved. The use of red tags can be one secret to a company's survival, because it is a visible way to identify what is not
needed in the workplace. Red tags ask why an item is in a given location and support the first "S" — sort. Tips for tagging:
• We all tend to look at items as personal possessions. They are company possessions. We are the caretakers of the items.
• An outsider can take the lead in red tagging. Plant people take advantage of these "fresh eyes" by creating an atmosphere where they will feel comfortable in questioning what is needed.
• Tag anything not needed. One exception: do not red tag people unless you want to be red tagged yourself!
• If in doubt, tag it!
• Before and after photographs
• Improve area by area, each one completely
• Clear responsibilities
• Daily cross-department tours
• Schedule ALL critical customers to visit
• Regular assessments and "radar" metrics
• Red tag technique. The red tag technique involves the following steps:
1. Establish the rules for distinguishing between what is needed and what is not.
2. Identify needed and unneeded items and attach red tags to all potentially unneeded items. Write the specific reason for red tagging and sign and date each tag.
3. Remove red tag items and temporarily store them in an identified holding area.
4. Sort through the red tag items; dispose of those that are truly superfluous. Other items can be eliminated at an agreed interval when it is clear that they have no use. Ensure that all stakeholders agree.
5. Determine ways to improve the workplace so that unnecessary items do not accumulate.
6. Continue to red tag regularly.
Standardized work — the one best way to perform each operation, identified and agreed upon through general consensus (not majority rule). This becomes the standard work procedure. The affected employees should understand that once they have defined the standard, they will be expected to perform the job according to that standard. It is imperative that we all understand the notion: variation = defects. Standardized work leads to reduced variation.
Prerequisites for standardized work
Standardized workflow
Kaizen — continual improvement. The philosophy of incremental continual improvement, that every process can and should be continually evaluated and improved in terms of time required, resources used, resultant quality, and other aspects relevant to the process. The BB's job, simply stated, is focused Kaizen. Our methodology for Kaizen is the Six Sigma breakthrough
strategy — DMAIC. Control is only sustained long term when the 5Ss and standardized work are in place.
• Kaizen rules
• Keep an open mind to change
• Maintain a positive attitude
• Never leave in silent disagreement
• Create a blameless environment
• Practice mutual respect every day
• Treat others as you want to be treated
• One person, one vote — no position, no rank
• No such thing as a dumb question
• Understand the thought process and then the Kaizen elements
• Takt time
• Cycle time
• Work sequence
• Standard WIP
• Takt time determination
• Kaizen process steps
• Step 1. Create a flowchart with parts and subassemblies.
• Step 2. Calculate takt time = net available time ÷ customer demand. (A short calculation sketch appears after the mistake-proofing advantages below.)
• Step 3. Measure each operation — each assembly and subassembly as they are. To the extent an operator has to go to an assembly for something, measure walk time. Establish a baseline using time observation forms; note any setup time.
• Step 4. Do a baseline standard work flow chart (should look like a spaghetti chart).
• Step 5. Do a baseline percent loading chart. Review for each operator where the waste and walk time is. Look at this in close relationship to the process.
• Step 6. Review the 5Ss.
• Step 7. Consolidate and accumulate jobs to get them as close to takt time as possible. Work with the operators.
• Step 8. Observe, measure, and modify the new flow process. This should be a one-piece flow process if we are producing to takt time.
• Step 9. Complete the one-piece flow process and redo all baseline charts (you may consider overlaying these new results on top of the older data to display the improvement). Make a list of things to complete.
• Step 10. Prepare a presentation and share results.
Kaizen presentation guidelines
• Prepare overheads or a slide show for a 20-minute presentation
• Ensure your presentation includes all of the Kaizen steps
• Use whatever props or other devices best explain your achievement
• Include 10 minutes for Q and A
• Each team member should participate in the presentation
• Management needs to see and hear about the results of the team's success
JIT concepts (just in time)
Kanban — a pull inventory system
Poka-yoke — a methodology that helps build quality into the product and allows only good product to go to the next operator or customer. It focuses on the elimination of human errors. Key elements of mistake-proofing:
• Distinction between error and defect
• Source inspection
• 100% inspection
• Immediate action
• "Red flag" conditions
• Control/feedback logic
• Guidelines for mistake-proofing
Mistake-proofing strategies
• Do not make surplus products (high inventory makes poor quality difficult to see)
• Eliminate, simplify, or combine operations
• Use a transfer rather than process batch strategy
• Involve everyone in error and defect prevention (standard practices, daily improvements, and mistake-proofing)
• Create an environment that emphasizes quality work, promotes involvement and creativity, and strives for continual improvement
Advantages of mistake-proofing
• No formal training programs required
• Eliminates many inspection operations
• Relieves operators from repetitive tasks
• Promotes creativity and value-adding operations
• Contributes to defect-free work
• Effectively provides 100% internal inspection without the associated problems of human fatigue and error
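To make the takt-time step referenced in the Kaizen process concrete, here is a minimal Python sketch (the shift schedule and demand figures are invented for illustration):

# Takt time = net available production time / customer demand
shift_minutes = 8 * 60                   # one 8-hour shift
breaks_minutes = 2 * 15 + 30             # two 15-minute breaks plus a 30-minute lunch
net_available_minutes = shift_minutes - breaks_minutes   # 420 minutes

daily_demand = 840                       # hypothetical customer demand, units per shift
takt_seconds = net_available_minutes * 60 / daily_demand
print(f"Takt time = {takt_seconds:.0f} seconds per unit")    # 30 seconds here

# Baseline percent loading: each operator's observed cycle time vs. takt
cycle_times = {"Operator 1": 24, "Operator 2": 29, "Operator 3": 18}   # seconds
for operator, ct in cycle_times.items():
    print(f"{operator}: {100 * ct / takt_seconds:.0f}% of takt")

Operators loaded well below takt are candidates for the consolidation step, while any operator above takt signals that the line cannot meet demand as currently balanced.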
CONTROL PLANS
A control plan is a logical, systematic approach for finding and correcting root causes of out-of-control conditions and will be a valuable tool for process improvement. A key advantage of the reaction plan form is its use as a troubleshooting guide for operators. A systematic guide of what to look for during upset conditions is valuable on its own. Key items of concern are: • What elements make up a control plan? • Why should we bother with them? • Who contributes to their preparation?
• How do we develop one?
• When do we update them?
• Where should the plan reside?
Control plan strategy
• Operate our processes consistently on target with minimum variation.
• Minimize process tampering (overadjustment).
• Assure that the process improvements that have been identified and implemented become institutionalized. ISO 9000 can assist here.
• Provide for adequate training in all procedures.
• Include required maintenance schedules.
• Factors impacting a good control plan.
Control plan components
• Process map steps
• Key process output variables, targets, and specs
• Key and critical process input variables with appropriate working tolerances and control limits
• Important noise variables (uncontrollable inputs)
• Short- and long-term capability analysis results
• Designated control methods, tools, and systems
• SPC
• Automated process control
• Checklists
• Mistake-proofing systems
• Standard operating procedures
• Workmanship standards
Documenting the control plan
• FMEA
• Cause-and-effect matrix
• Process map
• Multi-vari studies
• DOE
Reaction plan and procedures
• Control methods identify the person responsible for control of each critical variable and details about how to react to out-of-control conditions.
• Control methods include a training plan and process auditing system, e.g., ISO 9000.
• Complicated methods can be referenced by document number and location; changes in the process require changes to the control method.
• Actions should be the responsibility of the people closest to the process.
• The reaction plan can simply refer to an SOP and identify the person responsible for the reaction procedure.
• In all cases, suspect or nonconforming product must be clearly identified and quarantined.
Questions for control plan evaluation. Key process input variables (Xs):
• How are they monitored?
• How often are they verified?
• Are optimum target values and specifications known?
• How much variation is there around the target value?
• What causes the variation in the X?
• How often is the X out of control?
• Which Xs should have control charts?
• Uncontrollable (noise) inputs. What are they? Are they impossible or impractical to control? Do we know how to compensate for changes in them? How robust is the system to noise?
• Standard operating procedures — do they exist? Are they simple and understood? Are they being followed? Are they current?
• Is operator training performed and documented?
• Is there a process audit schedule?
Maintenance procedures
• Have critical components been identified?
• Does the schedule specify who, what, and when?
• Where are the manufacturer's instructions?
• Do we have a troubleshooting guide?
• What are the training requirements for maintenance?
• What special equipment is needed for measurement? What is the measurement capability?
• Who does the measurement? How often is a measurement taken? How are routine data recorded?
• Who plots the control chart (if one is used) and interprets the information?
• What key procedures are required to maintain control?
• What is done with product that is off spec?
• How is the process routinely audited?
• Who makes the audit? How often? How is it recorded?
Control plan checklist
• Documentation package
• Sustaining the gains
Issues in transitioning a project
• Assure your project is complete enough to transition.
• No loose ends — have at least a plan (project action plan) for everything not yet finalized.
• Start early in your project to plan for transitioning.
• Identify team members at the start of the project.
• Remind them they are representatives of a larger group.
• Communicate regularly with people within the impacted area and those people outside whom the changes may affect.
• Display, update, and communicate your project results in the impacted area during all phases. Remember: no surprises; buy-in during all phases.
• Hold regular updates with the impacted area, assuring their concerns are considered by your team.
• When possible, get others involved to help; you are not a one-person show and do not have all the answers.
• Use data collection. • Idea generation (brainstorming events). • Create buy-in with the entire workcell/targeted area. • Project action plan. Project action plan (suggested format) • Sustaining the gain. • Changes must be permanent. • Changes must be built into daily routine. • A sampling plan and measurement system must be established and used for monitoring. • Responsibilities must be clear, accepted, and, if necessary, built into roles and responsibilities. • Develop and update procedures. • Train all involved. • Action plan solidified and agreed upon. Sustaining the gain — product changes • Revise drawings by submitting EARs • Work with process, test, and product engineers Process changes • Physically change the process flow (5S the project area). • Develop visual indicators. • Establish or buy new equipment to aid assembly or test. • Poka-Yoke wherever possible including forms. • Procedures (standardized work instructions). • Develop new procedures or revise existing ones. • Notify quality assurance of new procedure to incorporate in internal audits. • Provide QA a copy of standardized work instructions. • Measurements (visual indicators). • Build into process the posting of key metric updates. • Make it part of someone’s regular job to do timely updates. • Make it someone’s job to review the metric and take action when needed. • Training. • Train everyone in the new process (do not leave until there is full understanding). Aspects of control • Benchmarks for world class performance. • Quality improvement rate of 68% per year. • Productivity improvement rate of 2% per month. • Lead-time is less than ten times the value-added time. • Continuous improvement culture. • Total employee involvement. • Reward and recognition. • Celebration.
MANUFACTURING TRAINING – 4 WEEKS WEEK 1 Introductions Agenda Training ground rules: • If you have any questions, please ask! • Share your experiences. • When we take frequent short breaks, please be prompt in returning so we can stay on schedule. • There will be a number of team activities; please take an active role. • Please use name tents. • Listen as an ally. • The goal is to complete your projects! Exploring our values Manufacturing training Six Sigma focus • Delighting the customer through flawless execution • Rapid breakthrough improvement • Advanced breakthrough tools that work • Positive and deep culture change • Real financial results that impact the bottom line What is Six Sigma? • Vision • Philosophy • Aggressive goal • Metric (standard of measurement) • Benchmark • Method • Vehicle for: • Customer focus • Breakthrough improvement • Continual improvement • People involvement • Defines the goals of the business • Defines performance metrics that tie to the business goals • Identifies projects using performance metrics that will yield clear business results • Applies advanced quality and statistical tools to achieve breakthrough financial performance • Goal • Performance target • Problem-solving methodology The strategy • Which business function needs it?
• Leadership participation. Six Sigma only works when leadership is passionate about excellence and willing to change.
• Is your leadership on board?
• Fundamentals of leadership
• Challenge the process
• Inspire a shared vision
• Enable others to act
• Model the way
• Encourage the heart
• Six Sigma is a catalyst for leaders
The breakthrough phases
• Define
• Measure
• Analyze
• Improve
• Control
The foundation of the Six Sigma tools
• Cost of poor quality
• What is cost of poor quality? COPQ data
• Getting there through inspection
• Six Sigma overview
• Overall perspective
• Manufacturing process picture
• Defects and variation
• Variation and process capability
Process capability and improvement
• The defect elimination system
• Overall perspective
• Defects and the hidden factory
• Rolled-throughput yield vs. first-time yield
• What causes defects?
• Excess variation due to:
• Manufacturing processes
• Supplier (incoming) material variation
• Unreasonably tight specifications (tighter than the customer requires)
• Dissecting process capability
Premise of Six Sigma — sources of variation can be:
• Identified
• Quantified
• Eliminated or controlled
How do we improve capability?
• Six Sigma, metrics, and continual improvement
Six Sigma is characterized by:
• Defining critical business metrics
• Tracking them
• Improving them using proactive process improvement
• Continual improvement: defects per unit (DPU) drives plant-wide improvement. Defects per million opportunities (DPMO) allows for comparison of dissimilar products.
• Calculating the product sigma level — the sigma level allows for benchmarking within and across companies.
• Metrics: Six Sigma's primary metric is defects per unit, which is directly related to rolled-throughput yield (Yrt = e^(–DPU)). Cost of poor quality and cycle time (throughput) are two others.
Process steps, FTY and RTY
Harvesting the fruit of Six Sigma
PPM conversion chart
Translating needs into requirements
• Implementation
Six Sigma success factors
• Directly affects quality, cost, cycle time, and financial results
• Focuses on the customer and critical metrics
• Directly attacks variation, defects, and the hidden factory
• Ensures a predictable factory
Black belt execution strategy. Its purpose is to introduce roles and responsibilities and to describe the execution strategy.
• To overview the steps
• To overview the tools
• To overview the deliverables
• To discuss the role of the black belt in relation to:
• Delivering successful projects using the breakthrough strategy
• Training and mentoring the local organization on Six Sigma
Roles of a black belt:
• Mentoring:
• Cultivate a network of experts in the factory or on site.
• Work with the operators.
• Work with the process owners.
• Work with all levels of management.
• Teaching and coaching:
• Provide formal training to local personnel regarding new tools and strategies
• Become the conduit for information
• Provide one-on-one support
• Develop effective teams
• Identification and discovery:
• Find new applications
• Identify new projects
• Surface new business opportunities
• Connect the business through the customer and supplier
• Seek best practices
• Being involved:
• Sharing best practices throughout the organization
• Being a spokesperson to the customer
• Driving supplier performance
• Getting involved with executive management
• Becoming a future leader
Prerequisites for black belts:
• Breakthrough strategy training
• Black belt instruction
Role of executives:
• Will set meaningful goals and objectives for the corporation
• Will drive the implementation of Six Sigma
Roles of the master black belt:
• Be the expert in the tools and concepts.
• Develop and deliver training to various levels of the organization.
• Certify the black belts.
• Assist in the identification of projects.
• Coach and support BBs in project work.
• Participate in project reviews to offer technical expertise.
• Partner with the champions.
• Demonstrate passion around Six Sigma.
• Share best practices.
• Take on leadership of major programs.
• Develop new tools or modify old tools for application.
• Understand the linkage between Six Sigma and the business strategy.
Role of champion:
• Will select black belt projects consistent with corporate goals.
• Will drive the implementation of Six Sigma through public support and removal of barriers.
Green belt:
• Will deliver successful localized projects using the breakthrough strategy.
Six Sigma instructor:
• Will make sure each and every black belt candidate is certified in the understanding, usage, and application of the Six Sigma tools.
BB execution strategy. Its purpose is to ensure that sources of variation in manufacturing and transactional processes are appropriately and objectively identified, quantified, and controlled or eliminated. By using the breakthrough strategy, process performance is sustained through well-developed, documented, and executed process control plans. The goal, of course, is to achieve improvements in rolled-throughput yield, cost of poor quality, and capacity–productivity. To reach this goal, BBs use the DMAIC model, the Kano model, QFD, and other tools and methodologies. The phases of process improvement are:
• The define phase
• Refine the project
• Establish the "as is" process
• Identify customers and CTQs
• Identify goals and scope of project
• A simple QFD (quality function deployment) tool used to emphasize the importance of understanding customer requirements, the CTs — critical tos: CTCost, CTDelivery, CTQuality. The tool relates the Xs and Ys (customer requirements) using elements documented in the process map and existing process expertise. The expected result is a Pareto of Xs that are used as inputs into the FMEA and control plans. These are the CTPs — critical to the process, or anything that we can control or modify about our process that will help us achieve our objectives.
• The measurement phase. Establish the performance baseline. A well-defined project results in a successful project. Therefore, the problem statement, objective, and improvement metric need to be aligned. If the problem statement identifies defects as the issue, then the objective is to reduce defects, and the metric to track the objective is defects. This holds true for any problem statement, objective, and metric (percent defects, overtime, RTY, etc.).
• Primary metric. A black belt needs to be focused; if other metrics are identified that impact the results, identify these as secondary metrics, i.e., reducing defects is the primary improvement metric, but we do not want to reduce line speed (line speed is the secondary metric).
• Project benefits. Do not confuse projected project benefits with your objective. Make sure you separate these two items.
• There are times when you may achieve your objective yet not see the projected benefits. This is because we cannot control all issues. They need to be tackled in a methodical order.
• Purpose of the measurement phase
• Define the project scope, problem statement, objective, and metric.
• Document the existing process (using a process map, C&E matrix, and an FMEA).
• Identify key output variables (Ys) and key input variables (Xs).
• Establish a data-collection system for your Xs and Ys if one does not exist.
• Evaluate the measurement system for each key output variable.
• Establish baseline capability for key output variables (potential and overall).
• Document the existing process.
• Critical-to matrix (cause-and-effect matrix).
• Establish a data-collection system. Determine if you have a method by which you can effectively and accurately collect data on your Xs and Ys in a timely manner. If this is not in place, you will need to implement a system. Without a system in place you will not be able to determine whether you are making any improvements in your project. Establish this system such that you can historically
record the data you are collecting. This information should be recorded in a database that can be readily accessed. The data should be aligned in the database in such a manner that for each output (Y) recorded, the operating conditions (X) are identified. This becomes important for future reference. This data-collection system is absolutely necessary for the control phase of your project. Make sure all those who are collecting data realize its importance.
• Measurement systems analysis. To determine whether the measurement system, defined as the gauge and operators, can be used to precisely measure the characteristic in question. It is very important to make the point that we are not evaluating part variability but gauge and operator capability. Some guidelines are:
• Determine the measurement capabilities for Ys.
• These need to be completed before assessing the capability of Ys.
• These studies are called:
• Gauge repeatability and reproducibility (GR&R) studies
• Measurement systems analysis (MSA)
• Measurement systems evaluation (MSE)
• Indices: precision-to-tolerance (P/T) ratio = the proportion of the specification taken up by measurement error (P/T ≤ 10% is desirable; P/T = 30% is marginal); precision-to-total-variation (P/TV) ratio (%R&R) = the proportion of the total variability taken up by measurement error.
• Capability studies. Used to establish the proportion of the operating window taken up by the natural variation of the process. Short-term (potential) and long-term (overall) estimates of capability indices are taught. Indices used assuming the process is centered: Cp, Pp, and Zst; indices used to evaluate a shifted process: Cpk, Ppk, and Zlt. (A short computational sketch of these indices follows.)
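A minimal numeric sketch of the indices just named (the specification limits, process estimates, and measurement error are invented; P/T is computed with the common six-sigma-of-measurement-error convention):

LSL, USL = 9.0, 11.0            # hypothetical specification limits
mean, sigma_st = 10.1, 0.25     # estimated process mean and short-term sigma

Cp = (USL - LSL) / (6 * sigma_st)                    # potential capability (centered)
Cpk = min(USL - mean, mean - LSL) / (3 * sigma_st)   # actual capability (allows shift)
Z_st = 3 * Cp                                        # short-term Z for a centered process
Z_lt = Z_st - 1.5                                    # conventional 1.5-sigma shift to long term

sigma_meas = 0.03               # measurement-system standard deviation
PT = 6 * sigma_meas / (USL - LSL)                    # P/T ratio

print(f"Cp = {Cp:.2f}, Cpk = {Cpk:.2f}, Zst = {Z_st:.2f}, Zlt = {Z_lt:.2f}")
print(f"P/T = {PT:.1%} (<= 10% desirable; 30% marginal)")

With these invented numbers, Cp = 1.33 but Cpk = 1.20 because of the off-center mean, and the gauge consumes about 9% of the tolerance — acceptable under the guideline above.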
• The analysis phase. Identify the vital few Xs by identifying high-risk input variables (Xs) from the failure modes and effects analysis (FMEA); reduce the number of process input variables (Xs) to a manageable number via hypothesis testing and ANOVA techniques; determine the presence of, and potential for eliminating, noise variables via multi-vari studies; plan and document initial improvement activities.
• Failure modes and effects analysis:
• Documents effects of failed key inputs (Xs) on key outputs (Ys)
• Documents potential causes of failed key input variables (Xs)
• Documents existing control methods for preventing or detecting causes
• Provides prioritization for actions and documents actions taken
• Can be used as the document to track project progress
• Multi-vari studies. Study process inputs and outputs in a passive mode (natural day-to-day variation). Their purpose is to identify and eliminate major noise variables (machine to machine, shift to shift, ambient temperature, humidity, etc.) before moving to the improvement phase, and to take a first look at major input variables. Ultimately, multi-vari studies help select or eliminate variables for study in designed experiments.
• The improvement phase. Determine the governing transformation equation through understanding the ideal function. The backbone of the process improvement is DOE (design of experiments). From the subset of vital few Xs, experiments are designed to actively manipulate the inputs to determine their effect on the outputs (Ys). This phase is characterized by a sequence of experiments, each based on the results of the previous study. Critical variables are identified during this process. Usually three to six Xs account for most of the variation in the outputs. Ultimately, the purpose of this phase is to control and focus on the continual improvement process.
• The control phase: optimize, eliminate, automate, and/or control the vital few inputs; document and implement the control plan; sustain the gains identified; reestablish and monitor long-term delivered capability; implement continual improvement efforts (green belts at the functional area); execution strategy support systems; safety requirements; maintenance plans defined; a system to track special causes; required and critical spare parts list; troubleshooting guides; control plans for both the short and long term; SPC charts for process monitoring; inspection points and metrology control; workmanship standards; and others.
Potential project deliverables
• Define:
• Identification of customers
• Identification of customers' needs
• Identify the "as is" process
• Formulate the goal and scope of the project
• Update the project charter
• Measure:
• Project definition:
• Problem description
• Project metrics
• Process exploration:
• Process flow diagram
• C&E matrix, PFMEA, fishbones
• Data-collection system
• Measurement systems analysis (MSA):
• Attribute/variable gauge studies
• Capability assessment (on each Y)
• Capability (Cpk, Ppk, sigma level, DPU, RTY)
• Graphical and statistical tools:
• Project summary
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
• Analyze:
• Project definition:
• Problem description
• Project metrics
• Passive process analysis:
• Graphical analysis
• Multi-vari studies
• Hypothesis testing
• DOE planning sheet
• Updated PFMEA
• Project summary:
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
• Improve:
• Project definition:
• Problem description
• Project metrics
• Design of experiments:
• DOE planning sheet
• DOE factorial experiments (see the worked factorial sketch after this deliverables list)
• Y = F (x1, x2, x3, …)
• Updated PFMEA
• Project summary:
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
• Control:
• Project definition:
• Problem description
• Project metrics
• Optimization of Ys (RSM/EVOP)
• Monitoring Ys
• Eliminating or controlling Xs
• Sustaining the gains:
• Updated PFMEA
• Process control plan
• Action plan
• Project summary:
• Conclusions
• Issues and barriers
• Final report
• Completed local project review
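As a worked illustration of the "DOE factorial experiments" deliverable, here is a minimal Python sketch (the responses are invented; the design is an unreplicated 2x2 factorial in coded units):

import numpy as np

# 2^2 full factorial: columns are X1 and X2 at coded levels -1 and +1
design = np.array([[-1, -1],
                   [ 1, -1],
                   [-1,  1],
                   [ 1,  1]])
y = np.array([71.0, 76.0, 68.0, 79.0])    # hypothetical responses

x1, x2 = design[:, 0], design[:, 1]
effect_x1 = y[x1 == 1].mean() - y[x1 == -1].mean()               # main effect of X1
effect_x2 = y[x2 == 1].mean() - y[x2 == -1].mean()               # main effect of X2
effect_x1x2 = y[x1 * x2 == 1].mean() - y[x1 * x2 == -1].mean()   # interaction

print(f"main effect X1:   {effect_x1:+.1f}")
print(f"main effect X2:   {effect_x2:+.1f}")
print(f"interaction X1X2: {effect_x1x2:+.1f}")

# Fitted transfer function in coded units, Y = f(X1, X2)
print(f"Y = {y.mean():.1f} {effect_x1 / 2:+.2f}*X1 {effect_x2 / 2:+.2f}*X2 {effect_x1x2 / 2:+.2f}*X1*X2")

With these invented numbers X1 dominates, X2 alone contributes nothing, and a modest X1-by-X2 interaction appears; in practice, replicates or center points would be added so the significance of each effect can be judged statistically.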
Rolled-throughput yield
The classical perspective of yield
Simple first-time yield = traditional yield
Measuring first-pass yield
Rolled-throughput yield
Normalized yield
Complexity is a measure of how complicated a particular good or service is. Theoretically, complexity will likely never be quantified in an exacting manner. If we assume that all characteristics are independent and mutually exclusive, we may say that complexity can be reasonably estimated by a simple count. This count is referred to as an "opportunity count." In terms of quality, each product or process characteristic represents a unique "opportunity" to either add or subtract value. (Remember, we only need to count opportunities if we want to estimate a sigma level for comparisons of goods and services that are not necessarily similar.)
Hidden factory
DPMO
• Non-value-add rules: no opportunity count should be applied to any operation which does not add value. Transportation and storage of materials provide no opportunities. Deburring operations do not count either. Testing, inspection, gauging, etc. do not count; the product in most cases remains unchanged. An exception: an electrical tester where the tester is also used to program an EPROM — the product was altered and value was added.
• Supplied components rules: each supplied part provides one opportunity. Supplied materials such as machine oil, coolants, etc., do not count as supplied components.
• Connections rules: each "attachment" or "connection" counts as one. If a device requires four bolts, there would be an opportunity count of four, one for each bolt connected. A sixty-pin integrated circuit, SMD, soldered to a PCB counts as sixty connections.
• Sanity check rule: will applying counts in these operations take my business in the direction it is intended to go? If counting each dimension checked on a CMM inflates the denominator of the equation, adds no value, and increases cycle time when the company objective is to take cost out of the product, then this type of count would be counter to the company objective. Hence it would not provide an opportunity. Once you define an "opportunity," however, you must institutionalize that definition to maintain consistency. (A short DPU/RTY/DPMO calculation sketch follows.)
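A minimal numeric sketch of how these yield metrics relate (the unit, defect, and opportunity counts are invented; the RTY formula assumes defects occur randomly and independently):

import math

units = 5000
defects = 350
opportunities_per_unit = 20      # from a disciplined opportunity count

dpu = defects / units                                    # defects per unit
rty = math.exp(-dpu)                                     # rolled-throughput yield, Yrt = e^(-DPU)
dpmo = defects / (units * opportunities_per_unit) * 1e6  # defects per million opportunities

print(f"DPU = {dpu:.3f}")        # 0.070
print(f"RTY = {rty:.1%}")        # about 93.2%
print(f"DPMO = {dpmo:,.0f}")     # 3,500

Note how the opportunity count appears only in DPMO: inflating it makes DPMO (and the implied sigma level) look better without changing DPU or RTY, which is exactly why the counting rules above must be institutionalized.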
Introduction to the software package used. The instructor should provide information about the software at least in the following areas.
• Purpose of using the software
• Capabilities of the software:
• Cut and paste
• Formatting data
• Numeric vs. alpha columns
• Date columns
• Entering data
• Graphing
• Basic statistics
• Help menu
• Normality testing
• ANOVA
• Z scores
• Creating random data
Basic statistics:
• Mean
• Median
• Normal distribution
• T test
• Z test
Fundamentals of improvement:
• Variability — is the process on target with minimum variability? We use the mean to determine if the process is on target. We use the standard deviation (σ) to determine spread.
• Stability — how does the process perform over time? Stability is represented by a constant mean and predictable variability over time. If the process is not stable, identify and remove the causes (Xs) of instability (obvious nonrandom variation). Determine the location of the process mean. Is it on target? If not, identify the variables (Xs) that affect the mean and determine optimal settings to achieve the target value. Estimate the magnitude of the total variability. Is it acceptable with respect to the customer requirements (spec limits)? If not, identify the sources of the variability and eliminate or reduce their influence on the process.
• Can we tolerate variability? Even though there will always be variability present in any process, we can tolerate variability if: a) the process is on target, b) the total variability is relatively small compared to the process specifications, and c) the process is stable over time.
Types of outputs (data):
• Attribute data (qualitative)
• Variable data (quantitative)
• Discrete (count) data
• Continuous data
Selecting statistical techniques. There are statistical techniques available to analyze all combinations of input/output data.
Statistical distributions. We can describe the behavior of any process or system by plotting multiple data points for the same variable over time, across products on different machines, etc. The accumulation of these data can be viewed as a distribution of values.
Represented by:
• Dot plots
• Histograms
• Normal curve or other "smoothed" distribution
Population parameters vs. sample statistics:
• Population: an entire group of objects that have been made or will be made containing a characteristic of interest. Very likely we will never know the true population parameters.
• Sample: the group of objects actually measured in a statistical study. A sample is usually a subset of the population of interest.
• Measures of central tendency: median, mean, and mode.
• Measures of variability:
• Range — the numerical distance between the highest and the lowest values in a data set.
• Variance (σ², s²) — the average squared deviation of each individual data point from the mean. (Emphasize that variances add. In fact, variances of the inputs add to calculate the total variance in the output.)
• Standard deviation (σ, s) — the square root of the variance; the most commonly used measurement to quantify variability. (Emphasize that standard deviations do not add.)
The normal distribution is a distribution of data which has certain consistent properties. These properties are very useful in our understanding of the characteristics of the underlying process from which the data were obtained. Most natural phenomena and man-made processes are distributed normally, or can be represented as normally distributed.
• Property 1: a normal distribution can be described completely by knowing only the mean and standard deviation.
• Property 2: the area under sections of the curve can be used to estimate the cumulative probability of a certain "event" occurring. (A short sketch demonstrating this property follows.)
• Property 3: the previous rules of cumulative probability closely apply even when a set of data is not perfectly normally distributed.
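Property 2 can be demonstrated directly. The Python sketch below (the parameters and limits are invented) uses the normal cumulative distribution to estimate the probability of product falling inside hypothetical specification limits:

from scipy.stats import norm

mean, sigma = 100.0, 2.0
LSL, USL = 94.0, 106.0           # hypothetical limits at mean +/- 3 sigma

p_below = norm.cdf(LSL, loc=mean, scale=sigma)    # lower-tail area
p_above = norm.sf(USL, loc=mean, scale=sigma)     # upper-tail area
p_within = 1 - p_below - p_above

print(f"P(within limits) = {p_within:.4%}")                              # about 99.73%
print(f"expected nonconforming ppm: {(p_below + p_above) * 1e6:,.0f}")   # about 2,700

At plus or minus three sigma the familiar 99.73% figure appears; widening the limits to plus or minus six sigma (with the process held centered) drives the nonconforming rate to roughly two parts per billion.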
Testing normality:
• Normal probability plots
• Chi-square
• F test
Data set:
• Mining the data
• Test data for normality
• Conduct appropriate testing and analyses
Capability analysis:
• The need for capability
• Types of capability analysis
• Variable output
• Attribute output
• The method
• Long term vs. short term
• Indices of capability
• Z shift
• Conversion from short to long term and vice versa
• Additional capability topics
• Box-Cox transformation
• Nonnormal data (transformable)
• Nonnormal data (not transformable)
Attribute measurement system: a measurement system that compares each part to a standard and accepts the part if this standard is met.
• Screen: 100% evaluation of product using inspection techniques (an attribute measurement system).
• Screen effectiveness: the ability of the attribute measurement system to properly discriminate good from bad.
• Customer bias: the operator has a tendency to hold back good product.
• Producer bias: the operator has a tendency to pass defective product.
Purpose of attribute R&R
• To assess your inspection or workmanship standards against your customers' requirements.
• To determine if inspectors across all shifts, all machines, etc. use the same criteria to distinguish "good" from "bad."
• To quantify the ability of inspectors to accurately repeat their inspection decisions.
• To identify how well these inspectors conform to a "known master," which includes:
• How often operators decide to ship truly defective product
• How often operators do not ship truly acceptable product
• Discover areas where:
• Training is needed
• Procedures are lacking
• Standards are not defined
Attribute R&R — the method
Variable gauge R&R
• The ideal measurement system will produce "true" measurements every time it is used (zero bias, zero variance).
• The study of measurement systems will provide information as to the percent variation in your process data which comes from error in the measurement.
• It is also a great tool for comparing two or more measurement devices or two or more operators against one another.
• MSE should be used as part of the criteria required to accept and release a new piece of measurement equipment to manufacturing.
• It should be the basis for evaluating a measurement system that is suspected of being deficient.
Possible sources of process variation
Precision vs. accuracy
Basic model
Sources of measurement variation:
• Knowledge to be obtained
• How big is the measurement error?
• What are the sources of measurement error?
• Is the tool stable over time?
• Is the tool capable for this study?
• How do we improve the measurement system?
Accuracy-related terms
• Accuracy — the extent to which the average of the measurements deviates from the true value; the difference between the observed average value of measurements and the master value. The master value is an accepted, traceable reference standard (e.g., NIST).
• True value — the theoretically correct value (NIST standards).
• Bias — the averages of the measurements differ from the master value by a fixed amount; effects include:
• Operator bias — different operators get detectably different averages for the same measurements on the same part.
• Machine bias — different machines get detectably different averages for the same measurements on the same parts.
Precision-related terms
• Precision — the total variation in the measurement system; a measure of the natural variation of repeated measurements. Typical terms associated with precision are random error, spread, and test/retest error.
• Repeatability — the inherent variability of the measurement device. Variation that occurs when repeated measurements are made of the same variable under similar conditions: same part, same operator, same setup, same units, same environmental conditions, in the short term. It is estimated by the pooled (average) standard deviation of the distribution of repeated measurements. Repeatability is usually less than the total variation of the measurement system. Another way of looking at it is to think of it as the variation between successive measurements of the same part, same characteristic, by the same person using the same instrument. Also known as test–retest error; used as an estimate of short-term measurement variation.
• Reproducibility — the variation that results when different conditions are used to make the same measurements: different operators, different setups, different test units, different environmental conditions; long-term measurement variation. It is estimated by the standard deviation of the averages of measurements from different measurement conditions. Another way of saying this: the difference in the averages of the measurements made by different persons using the same or different instrument when measuring the identical characteristic on the same part.
• Linearity — a measure of the difference in accuracy or precision over the range of instrument capability. • Discrimination — the number of decimal places that can be measured by the system. Increments of measure should be at least one-tenth of the width of the product specification or process variation. • Stability (over time) — the distribution of measurements remains constant and predictable over time for both mean and standard deviation. No drifts, sudden shifts, cycles, etc. To ensure stability, monitor and analyze control charts. • Correlation — a measure of linear association between two variables, e.g., two different measurement methods or two different laboratories. P/T and P/TV • P/TV is used to qualify a measurement system as capable of measuring to the total observed process variation. • P/T is used to qualify a measurement system as capable of measuring to a given product specification. Uses of P/T and P/TV (percent R&R) • The P/T ratio is the most common estimate of measurement system precision. This estimate may be appropriate for evaluating how well the measurement system can perform with respect to the specification. Specifications, however, may be too tight or too loose. • Generally, the P/T ratio is a good estimate when the measurement system is used only to classify production samples. Even then, if process capability (Cpk) is not adequate, the P/T ratio may give you a false sense of security. • The P/TV (percent R&R) is the best measure for the black belt. This estimates how well the measurement system performs with respect to the overall process variation. Percent R&R is the best estimate when performing process improvement studies. Care must be taken to use samples representing the full process range. • The method — calculating percent R&R (a short numerical sketch appears at the end of this measurement section) • By operator — shows if any operator had higher or lower readings (on average) than the others. • By part — shows the ability of all operators to obtain the same readings for each part. Also shows the ability of a measurement system to distinguish between parts (amount of overlap). Gauge R&R, X-bar, and R chart Percent R&R vs. capability • Handling poor gauge capability. If a dominant source of variation is repeatability (equipment), you need to replace, repair, or otherwise adjust the equipment. If, in consultation with the equipment vendor or upon searches of industry literature, you find that the gauge technology you are using is "state-of-the-art" and is performing to its specifications, the measurement problem still must be addressed. One temporary solution to this problem is to use signal averaging. If a dominant source of variation is the operator (reproducibility), you must address this via training and
definition of the standard operating procedure. You should look for differences between operators to give you some indication as to whether it is a training, skill, and/or procedure problem. If the gauge capability is marginal (as high as 30% of study variation) and the process is operating at a high capability (Ppk greater than two), then the gauge is probably not hindering you and you can continue to use it. Controlling repeatability. Note: if you want to decrease your gauge error, take advantage of signal averaging: the standard error of the mean decreases with the square root of the sample size. Measurement system evaluation questions: • Written inspection measurement procedure? • Detailed process map developed? • Specific measuring system and setup defined? • Trained or certified operators? • Instrument calibration performed in a timely manner? • Tracking accuracy? • Tracking percent R&R? • Tracking bias? • Tracking linearity? • Tracking discrimination? • Correlation with supplier or customer where appropriate? Measurement system analysis questions: • Have you picked the right measurement system? Is this measurement system associated with either critical inputs or outputs? • What do the precision, accuracy, tolerance, P/T ratio, percent R&R, and trend chart look like? • What are the sources of variation and what is the measurement error? • What needs to be done to improve this system? • Have we informed the right people of our results? • Who owns this measurement system? • Who owns troubleshooting? • Does this system have a control plan in place? • What's the calibration frequency? Is that frequent enough? • Do identical systems match? Deliverables for week 2: • Project report • Title page • Problem statement summary page • Problem statement • CTQ • What is the defect? • Initial DPMO • Target DPMO (i.e., 90% reduction/99% reduction stretch) • Team • Benefits of the project (why are we doing this?) • Picture/drawing to allow audience to set reference • Process flow diagram
• Definition of the measurement system. What was defined as the measurement of study? How is this linked to the CTQ? • Measurement system validation • Show failures and what was learned • Show analysis and results • Initial capability study • Begin screening factors (C&E, FMEA, multi-vari) • Brief recap/summary • Next steps
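Because percent R&R and P/T come up in nearly every measurement system validation, a minimal numerical sketch may help. It assumes the variance components have already been estimated from a crossed gauge R&R ANOVA; all values and names below are illustrative, not from a real study.

import math

# Illustrative variance components from a crossed gauge R&R study
# (hypothetical values -- substitute your own ANOVA estimates)
var_repeatability = 0.0004    # equipment variation
var_reproducibility = 0.0002  # appraiser variation
var_part = 0.0094             # part-to-part variation

var_rr = var_repeatability + var_reproducibility
var_total = var_rr + var_part

usl, lsl = 10.5, 9.5          # hypothetical specification limits

# P/T: measurement spread against the tolerance band
# (5.15 is the classic multiplier covering 99% of a normal distribution)
p_to_t = 5.15 * math.sqrt(var_rr) / (usl - lsl)

# P/TV (percent R&R): measurement spread against total observed variation
pct_rr = math.sqrt(var_rr / var_total) * 100

print(f"P/T = {p_to_t:.1%}, %R&R = {pct_rr:.1f}%")

A common rule of thumb treats results under 10% as excellent, 10 to 30% as marginal, and over 30% as unacceptable, but acceptance criteria should come from your own quality system.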
WEEK 2 Review of key questions Review project questions, concerns Process performance metrics Cp and Pp Cpk and Ppk When to add/subtract Z shift Metric conversion Multi-vari charts. Their purpose is to narrow the scope of input variables — leverage KPIVs (identify inputs and outputs). The following tools can be used to identify the inputs and outputs: • C&E matrix • FMEA • Fishbone • Short-term capability • Scatter plots • Correlation • Regression • Boxplots • Main effects • Interaction plots • ANOVAs, t-tests • Multi-vari defined — a graphical tool which, through logical subgrouping, analyzes the effects of categorical Xs on continuous Ys. A method to characterize the baseline capability of a process while either in production mode or via historical data. If in the production mode, the data used in a multi-vari study is collected for a relatively short period of time (2 weeks to 2 months), though the multi-vari study can continue until the full range of the output variable is observed (from low to high). Categorical Xs are typically used in multi-vari analysis. • Inputs that can be classified as attribute in nature. These types of inputs have levels assigned which are arbitrary in nature (operator A–operator B–operator C, or low–high, or machine 1–machine 2–machine 3).
• We use the results to determine capability, stability, and potential relationships between Xs and Ys. • Performing a multi-vari • Step 1: plan the multi-vari • Identify the major areas of variation. • Show them on the project organization chart. • Decide how to take data in order to distinguish these major sources of variation. • Decide ahead of time how to graph the data so that possible variation will be visible. • Step 2: take data in order of production (not randomly) • Data should include the entire range of variation. • Step 3: take a representative sample (minimum of three) per group • Step 4: analyze the results • Is there an area that shows the greatest source of variation? • Are there cyclic or unexpected nonrandom patterns of variation? • Are the nonrandom patterns restricted to a single sample or more? • Are there areas of variation that can be eliminated (e.g., shift-to-shift variation)? Sampling Methods • Simple random sampling: if all possible samples of n experimental units are equally likely, the procedure to use is a simple random sample. • Characteristics of Simple Random Sampling: • Unbiased — every experimental unit has the same chance of being chosen • Independence — the selection of one experimental unit is not dependent on the selection of another • Stratified sample: divide the population into homogeneous groups and randomly sample from within each group. • Cluster sample: divide the population into smaller groups (clusters); the clusters, rather than individual units, are then randomly sampled. • Systematic sample: start with a randomly chosen unit and sample every kth unit thereafter. Sampling Plan A good sampling plan will capture all relevant sources of noise variability • Lot-to-lot • Batch-to-batch • Different shifts, different operators, different machines • Sample size rule of thumb: 30 Correlation and simple linear regression • Overview • Correlation coefficients • Correlation and causality
• Scatter plots • Fitted line plots • Simple regression • Correlation is a measure of the strength of association between two quantitative variables (e.g., pressure and yield). • Correlation measures the degree of linear association between two variables when neither is assumed to depend on the other. • The correlation coefficient, r, always lies between –1 and +1. • Comparison of covariances • Simple regression • Regression equation • Fitted line plot
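Since the correlation coefficient and the fitted line are the workhorses of this material, a minimal sketch of both calculations follows; the pressure and yield values are invented for illustration.

import numpy as np

# Hypothetical paired process data: pressure (X) and yield (Y)
pressure = np.array([50, 55, 60, 65, 70, 75, 80], dtype=float)
yield_pct = np.array([71, 74, 78, 79, 83, 86, 88], dtype=float)

# Correlation coefficient r (always between -1 and +1)
r = np.corrcoef(pressure, yield_pct)[0, 1]

# Simple regression: least-squares fitted line y = b0 + b1*x
b1, b0 = np.polyfit(pressure, yield_pct, 1)

print(f"r = {r:.3f}; fitted line: yield = {b0:.2f} + {b1:.3f} * pressure")

Remember the caution above: a strong r is evidence of linear association, not of causality.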
The central limit theorem allows us to assume that the distribution of sample averages will approximate the normal distribution if "n" is sufficiently high (> 30 for unknown distributions). The central limit theorem also allows us to assume that the distributions of sample averages of a normal population are themselves normal, regardless of sample size. The SE mean shows that as sample size increases, the standard deviation of the sample mean decreases. The standard error will help us calculate confidence intervals (CIs). Significance of confidence intervals — statistics such as the mean and standard deviation are only estimates of the population parameters (μ and σ) and are based on only one sample. Because there is variability in these estimates from sample to sample, we can quantify our uncertainty using statistically based CIs. Most of the time we calculate 95% CIs; however, there is nothing sacred about this particular confidence level. It may be anything. We interpret a 95% CI as follows: approximately 95 out of 100 such intervals will contain the population parameter, or we are 95% certain the population parameter is inside the interval. Population vs. sample Comparison of histograms Parametric CIs Confidence interval for the mean What is the t-distribution? The t-distribution is a family of bell-shaped distributions that are dependent on sample size. The smaller the sample size, the wider and flatter the distribution. CIs for proportions — CIs can also be constructed for fraction defective (p), where x = number of defect occurrences, n = sample size, and p = x/n = proportion defective in the sample. For cases in which the number defective (x) is at least five and the total number of samples n is at least 30, the normal distribution approximation can be used as a shortcut. For other cases, the binomial tables are needed to construct this confidence interval.
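A minimal sketch of both interval calculations described above, with invented data:

import math
from scipy import stats

# 95% CI for a mean from a single small sample (t-distribution)
data = [9.8, 10.1, 10.0, 9.9, 10.3, 10.2, 9.7, 10.0]  # hypothetical readings
n = len(data)
xbar = sum(data) / n
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))
se = s / math.sqrt(n)                  # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 95%
print(f"mean CI: {xbar - t_crit * se:.3f} to {xbar + t_crit * se:.3f}")

# 95% CI for a proportion (normal approximation: x >= 5 and n >= 30)
x, m = 12, 200                         # defectives, sample size (hypothetical)
p = x / m
se_p = math.sqrt(p * (1 - p) / m)
print(f"proportion CI: {p - 1.96 * se_p:.4f} to {p + 1.96 * se_p:.4f}")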
HYPOTHESIS TESTING INTRODUCTION Hypothesis testing employs data-driven tests that assist in the determination of the vital few Xs. Black belts use this tool to identify sources of variability and establish relationships between Xs and Ys. To help identify the vital few Xs, historical or current data may be sampled. • Passive: you have either directly sampled your process or have obtained historic sample data. • Active: you have made a modification to your process and then sampled. • Statistical testing provides objective answers to questions that are traditionally answered subjectively. Hypothesis testing is a stepping stone to ANOVA and DOE. The null and alternate hypotheses. The method and the roadmap. Hypothesis testing answers the practical question of whether there is a real difference between _____ and _____. Tests of significance. Significance level (alpha) and the associated beta risk.
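As a concrete illustration, a minimal two-sample test follows; the cycle-time data are invented, and Welch's t-test (which does not assume equal variances) is just one reasonable choice here.

from scipy import stats

# Hypothetical samples: cycle times before and after a process change
before = [12.4, 13.1, 12.8, 13.5, 12.9, 13.2, 12.7, 13.0]
after = [12.1, 12.5, 12.3, 12.8, 12.2, 12.6, 12.4, 12.0]

# H0: mu_before = mu_after   Ha: mu_before != mu_after
t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)

alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject H0; a real difference exists")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject H0")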
WEEK 3 Week 1 review Week 2 review General questions Questions, concerns about project Week 3 potential project deliverables • Project definition • Problem description • Project metrics • DOE planning • Inputs list • DOE planning sheet • Designed experiments • Analysis of experiments • Y = f(X1, X2, X3, …) • Project summary • Conclusions • Issues and barriers • Next steps • Completed local project review ANOVA review
DOE INTRODUCTION A systematic set of experiments that permits one to evaluate the effect of one or more factors without concern about extraneous variables or subjective judgments. It begins with the statement of the experimental objective and ends with the reporting of the results. It may often lead to further experimentation. It is the vehicle of the scientific method, giving unambiguous results that can be used for inferring cause and effect. Full factorial experiments Analyzing single-factor experiments One-way analysis continued Comparing more than two groups Test of equal variances Pooled standard deviation Multiple comparisons Experimental design selection Inference space considerations Strategy of experimentation • Define the problem • Establish the objective • Select the output — responses (Ys) • Select the input factors (Xs) • Choose the factor levels • Select the experimental design and sample size • Collect the data • Analyze the data • Draw conclusions Barriers to effective experimentation Factor selection — narrowing down the list • Which factors do we include? The following sources provide insight. • FMEA/control plans or DCP • Cause and effect matrix • Multi-vari and hypothesis testing • Process mapping • Brainstorming • Literature review • Engineering knowledge • Operator experience • Scientific theory • Customer/supplier input • Global problem solving Choosing the levels for each factor • The levels of an input factor are the values of the input factor (X) being examined in the experiment (not to be confused with the output, Y). • For a quantitative (variables data) factor like temperature: if an experiment is to be conducted at two different temperatures, then the factor temperature has two levels.
• For a qualitative (attributes data) factor like cleanliness: if an experiment is to be conducted using clean and not clean, then the factor cleanliness has two levels. Selecting the type of experiment design • Response surface methods • Full factorials with replication • Full factorials with repetition • Full factorials without replication or repetition • Screening or fractional designs • One factor at a time (OFAT) Ensuring internal and external validity • Internal validity. Randomization of experimental runs “spreads” the noise across the experiment. Blocking ensures noise is part of the experiment and can be directly studied. • Holding noise variables constant eliminates the effect of that variable but limits broad inferences. • External validity. Include representative samples from possible noise variables. • Threats to statistical validity • Low statistical power: sample size inappropriate. • Loose measurement systems inflate variability of measurements. • Random factors in the experimental setting inflate variability of measurement. • Randomization and sample size prevent threats. Planning questions • What is the measurable objective? • What will it cost? • How will we determine sample sizes? • What is our plan for randomization? • Have we talked to internal customers about this? • How long will it take? • How are we going to analyze the data? • Have we planned a pilot run? • Where is the proposal? Performing the experiment • Document initial information • Verify measurement systems • Ensure baseline conditions are included in the experiment • Make sure clear responsibilities are assigned for proper data collection • Always perform a pilot run to verify and improve data collection procedures! • Watch for and record any extraneous sources of variation • Analyze data promptly and thoroughly • Graphical • Descriptive • Inferential
• Always run one or more verification runs to confirm your results (go from narrow to broad inference) Final report/general advice • The planning sheet can be more important than running the experiment. • Make sure you have tied potential business results to your project. • Focus on one experiment at a time. • Do not try to answer all the questions in one study; rely on a sequence of studies. • Use two-level designs early. • Spend less than 25% of budget on the first experiment. • Always verify results in a follow-up study. • It is acceptable to abandon an experiment. • A final report is a must!! • Finally, push the envelope with robust levels, but think of the safety of the people and equipment. Steps to conduct a full factorial experiment • Step 1: state the practical problem and objective using a DOE worksheet. • Step 2: state the factors and levels of interest. • Step 3: select the appropriate sample size. • Step 4: create a computer software experimental data sheet with the factors in their respective columns. Randomize the experimental runs in the data sheet. • Step 5: conduct the experiment. • Step 6: construct the ANOVA table for the full model, using either: a) balanced ANOVA or b) DOE > analyze factorial design. • Step 7: review the ANOVA table and eliminate effects with p-values above 0.05. Run the reduced model, retaining only the effects deemed significant. • Step 8: analyze the residuals of the reduced model to ensure we have a model that fits. Calculate the fits and residuals. Factorial experiments • GLM procedure for unbalanced designs • Residual analysis • Analyzing the two- and three-way interaction • Analysis of main effects • Epsilon-squared • Orthogonality • Describe the overall concepts of 2k factorials • Create standard order designs • Design and analyze 2k factorials using: • ANOVA • Effects plots • Graphs and residual plots • Advantages of 2k factorials • Require relatively few runs per factor studied
• Can be the basis for more complex designs • Good for early investigations — can look at a large number of factors with relatively few runs • Lend themselves well to sequential studies • Analysis is fairly easy • Standard order of 2k designs Calculating the interaction effects — the interaction effect is represented by multiplying the columns to be represented (see the sketch at the end of this section). • Mixed models (fixed and random factors) permitted • ANOVA capabilities plus unbalanced or nested designs • Used for 2k, 2k with center points, and 2k with blocking • Notation is different from that of the ANOVA procedures Steps for conducting a 2k factorial experiment (the reader will notice that steps 1–6 are the same as those for a full factorial. In fact, we pick up where we left off.) • Step 7: analyze the residual plots to ensure we have a model that fits. • Step 8: investigate significant interactions (p-value < 0.05). Assess the significance of the highest order interactions first. For 3-way interactions, unstack the data and analyze. • Stat > DOE > factorial plots > interaction plot. Once the highest order interactions are interpreted, analyze the next set of lower order interactions. • Step 9: investigate significant main effects (p-value < 0.05). • Step 10: state the mathematical model obtained. If possible, calculate the epsilon-squared and determine the practical significance. • Step 11: translate the mathematical model into process terms and formulate conclusions and recommendations. • Step 12: replicate optimum conditions. Plan the next experiment or institutionalize the change. How to add center points in your designs Blocking with 2k factorials Confounding and blocking
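To make the column-multiplication rule for interaction effects concrete, here is a minimal sketch for an unreplicated 2 × 2 design; the coded design matrix follows standard order, and the responses are invented for illustration.

import numpy as np

# Unreplicated 2^2 design in standard order; factors coded -1/+1
A = np.array([-1, 1, -1, 1])
B = np.array([-1, -1, 1, 1])
y = np.array([45.0, 71.0, 48.0, 65.0])  # hypothetical responses

# Effect = (mean response at +1) - (mean response at -1);
# the AB interaction column is the elementwise product A*B
def effect(col, y):
    return y[col == 1].mean() - y[col == -1].mean()

print(f"A effect  = {effect(A, y):.1f}")
print(f"B effect  = {effect(B, y):.1f}")
print(f"AB effect = {effect(A * B, y):.1f}")

The same helper extends to any 2k design: build each interaction column as the elementwise product of its parent factor columns.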
WEEK 4 Review week 1 Review week 2 Review week 3 General questions Questions, concerns about project Week 4 potential project deliverables • Project definition • Project metrics • Process optimization • PLEX, EVOP, RSM, multiple regression
• Process controls
• Statistical product monitors
• Statistical process controls
• Document and sustain the gains
• Update FMEA
• Update control plan
• 5S the immediate project area
• Quality manual and related documentation
• Write the final report
• Review of designed experiments
FRACTIONAL FACTORIALS Why do fractional factorial experiments? As the number of factors increases, so does the number of runs: a 2 × 2 factorial = 4 runs; a 2 × 2 × 2 factorial = 8 runs; a 2 × 2 × 2 × 2 factorial = 16 runs, and so on. If the experimenter can assume higher-order interactions are negligible, it is possible to do a fraction of the full factorial and still get good estimates of low-order interactions. The major use of fractional factorials is screening: a relatively large number of factors in a relatively small number of runs. Screening experiments are usually done in the early stages of a process improvement project. Successful fractional factorials are based on the sparsity-of-effects principle: systems are usually driven by main effects and low-order interactions. Sequential experimentation Designing a fractional factorial What is PLEX? PLEX = PLant EXperimentation; a process-improvement tool for online use in full-scale production; uses simple factorial two-level designs in two or three factors; usually requires several iterations of experimental design, analysis, and interim improvements. The goal is to minimize disruption to production but make big enough changes to quickly see effects on output variables. • Prerequisites for PLEX • Good measurement system in place. • With little or no replicate runs, we want to minimize the effect of measurement error. • May require repeat measurements. • Adequate technical supervision to keep process controlled and monitored. • Extra attention to safety requirements and to avoiding upsets. • Stay within operating region. • Maintain environmental controls. • Cooperation of several functions required.
• Why and when do we use PLEX? • Strong need to increase and/or improve production. • May have a sold-out product line. • Product line may have poor process capability. • Offline studies (lab or pilot scale) are not practical or meaningful. • Key process input variables (Xs) are not well determined, but we have the resources only to investigate a few at a time. A series of factorial experiments is required. • Beware, interactions may be obscured. • Would like to "optimize" (or reoptimize) the process while in production mode. PLEX process improvement roadmap • Form process improvement team. • Assess measurement system, e.g., gauge R&R. • Identify Xs and Ys, e.g., multi-vari, cause and effect, FMEA. • Choose two to four factors for first DOE. • Choose safe operating ranges for each factor. Ranges should be wide enough to reasonably see an active effect with no replication. • Set up 2k factorial design with optional, but recommended, center points. • Consider repeating one or more conditions. One approach is to run the center point at beginning, middle, and end of the design as a check for process drift or capability. • Prior to running the design, look at each treatment combination to see if there is a potential failure mode or unsafe condition. • Set up sampling plan. • Plan for technical supervision to minimize upset potential. • Randomize order of running, if practical. Otherwise, choose a run sequence that reduces the number of changes. • Run each process condition long enough to achieve steady state. • Return to standard conditions until DOE results are analyzed. • Based upon results, suggest interim process changes or subsequent DOEs or small confirmatory studies. • Continue until all Xs are investigated and the process is optimized. EVOP — EVolutionary OPerations • What is EVOP? A process-improvement tool used while a process is running in the production mode for the optimization of plant performance; a method that uses 2² or 2³ factorials with replicates and center points; empowers operators to conduct experiments with minimal engineering support during normal operations; each experimental run is called a cycle. One cycle is one of the following: (0,0) => (1,1) => (1,–1) => (–1,–1) => (–1,1); eliminate randomization to minimize disruption and document effect estimates at the end of each cycle. Cycles continue in the hopes of collecting "sufficient evidence" of
significant change in the Y for the various levels of X. Each set of cycles is called a phase. When enough data is collected through cycles in which a state of improved operations is identified, a phase is set to be completed. The results of each phase determine the new settings for subsequent phases of EVOP. Continue phases until X settings are optimized. Data from phases estimate a "response surface." • Why use EVOP? The goal is to establish the settings of x1, x2, x3,… in the mathematical relationship Y = f(x1, x2, x3,…) so as to optimize the process; it provides information on process optimization with minor interruption to production; it empowers operators and manufacturing personnel and is a cost-effective method to employ continual improvement. • How to apply EVOP: • Step 1: what is the problem to be solved? • Step 2: establish the experimental strategy. • Define Ys/Xs to be studied. • Select variable settings for phase I. • Determine maximum number of cycles for phase I. • Step 3: collect and analyze data during phase I; display on an information board to determine steps for phase II. • Step 4: repeat steps 2 and 3 for successive phases. • Step 5: implement optimal settings for Xs as S.O.P. • Step 6: rerun EVOP every 6 months to ensure optimal settings are maintained. Response surface methodology (RSM) • What is RSM? Once significant factors are determined, RSM leads the experimenter rapidly and efficiently to the general area of the optimum settings (usually using a linear model). The ultimate RSM objective is to determine the optimum operating conditions for the system or to determine a region of the factor space in which the operating specifications are satisfied (usually using a second-order model). Furthermore, response surfaces are used to optimize the results of a full factorial DOE and create a second-order model if necessary. Therefore, RSM is good to a) determine average output parameters as functions of input parameters and b) optimize process and product design. • Response surface: the surface represented by the expected value of an output modeled as a function of significant inputs (variable inputs only): expected (Y) = f(x1, x2, x3,…xn) • Method of steepest ascent or descent: a procedure for moving sequentially along the direction of the maximum increase (steepest ascent) or maximum decrease (steepest descent) of the response variable using the following first-order model: • Y (predicted) = b0 + Σbi Xi • Region of curvature: the region where one or more of the significant inputs will no longer conform to the first-order model. Once in this region of operation, most responses can be modeled using the following fitted second-order model:
• Y (predicted) = b0 + Σbi Xi + Σbii Xi² + Σbij Xi Xj • Central composite design: a common DOE matrix used to establish a valid second-order model • Coded variables: variables that are assigned arbitrary levels in a DOE study (–1, 1, A, B) • Uncoded variables: variables that are assigned process-specific levels in an RSM study (10V, 20V) Regression • Regression and correlation • Use correlation to measure the strength of linear association between two variables, especially when one variable does not depend on the other. • Use correlation to benchmark equipment against a standard or another similar piece of equipment. • Use regression to predict one variable from another (it may be easier and more cost-efficient). • Use regression to provide evidence that key input variables explain the variation in the response variable or to determine whether different input variables are related to one another. Correlation limitations • Correlation explores linear association. It does not imply a cause-and-effect relationship. • Two variables may be perfectly related in a manner other than linear, and the correlation coefficient will be close to zero. For example, the relationship could be curvilinear. This emphasizes the importance of plots. • The linear association between two variables may be due to a third variable not under consideration. Sound judgment and scientific knowledge are necessary to interpret the results and validity of correlation analysis. • Some statisticians argue that correlation analysis should only be used when no dependency exists, i.e., when it is not clear which variable depends on the other. • In correlation analysis, it is assumed that both the X and Y variables are random, i.e., X is not fixed to study the dependency of Y. Linear regression uses — quantifies the relationship between a response variable and one or more predictor variables. Four general uses are: • Prediction: the model is used to predict the response variable of interest, especially when this response is difficult or expensive to measure. Emphasis is not given to capturing the role of each input variable with strict preciseness. • Variable screening: the model is used to detect the importance of each input variable in explaining the variation in the response. Important variables are kept for further study. • System explanation: the model is used to explain how a system works. Finding the specific role of each input variable is essential in this case.
Various models that define different roles for the inputs are typically in competition. • Parameter estimation: the model is used primarily to find specific ranges, sizes, and magnitudes of the regression coefficients. Linear regression assumptions Simple regression — fitted line plot Interpreting the output Regression — residual plots Simple polynomial regression Interpreting the results Assessing the predictive power of the model Matrix plots — scatter plots with many Xs Correlation with many Xs The output — R² Coefficient of determination (r²) Multiple regression — beware of multicollinearity When to use multiple regression — when process or noise input variables are continuous and the output is continuous, multiple regression can be used to investigate the relationship between the Xs (process and/or noise) and the Ys. Three types of multiple regression What is a quality system? A quality system is an organization's agreed-upon method of doing business. It is not to be confused with a set of documents that are meant to satisfy an outside auditing organization (i.e., ISO 900x). This means a quality system represents the actions, not the written words, of an organization. Typical elements of a quality system are: • Quality policy • Organization for quality (does not mean quality department!) • Management review of quality • Quality planning (how to launch and control products and processes) • Design control • Data control • Purchasing • Approval of materials for ongoing production • Evaluation of suppliers • Verification of purchased product (does not mean incoming inspection!) • Product identification and traceability • Process control • Government safety and environmental regulations • Designation of special characteristics • Preventative maintenance • Process monitoring and operator instructions • Preliminary capability studies (how to turn on a process) • Ongoing process performance requirements (how to run a process) • Verification of setups • Inspection and testing
• Control of inspection, measuring, and test equipment • Calibration • Measurement system analysis • Control of nonconforming product • Corrective and preventative action • Handling, storage, packaging, preservation, and delivery • Control of quality audits (do we do what we say we do?) • Training • Service • Use of statistical techniques Aspects of control Quality systems = how we manage Evolution of management style • First generation: management by doing — this is the first, simplest, most primitive approach: just do it yourself. We still use it. "I'll take care of it." It is an effective way to get something done, but its capability is limited. • Second generation: management by directing — people found that they could expand their capacity by telling others exactly what to do and how to do it: a master craftsman giving detailed directions to apprentices. This approach allows an expert to leverage his or her time by getting others to do some of the work, and it maintains strict compliance with the expert's standards. • Third generation: management by results — people get tired of you telling them every detail of how to do their jobs and say "Just tell me what you want by when, and leave it up to me to figure out how to do it." So you say, "OK, reduce inventories by 20% this year. I'll reward or punish you based on how well you do. Good luck." All three approaches have appropriate applications in today's organizations. Are they being used appropriately? • Third generation sounds logical. Its approach is widely taught and used and is appropriate where departmental objectives have little impact on other parts of the organization. • Third generation has serious, largely unrecognized flaws we can no longer afford. For example, we all want better figures: higher sales, lower costs, faster cycle times, lower absenteeism, lower inventory. How do we get better figures? • Improve the system. Make fundamental changes that improve quality, prevent errors, and reduce waste. For example, reducing in-process inventory by increasing the reliability of operations. • Distort the system. Get the demanded results at the expense of other results. "You want lower inventories? No problem!" Inventories miraculously disappear — but schedule, delivery, and quality suffer. Expediting and premium freight go up. Purchasing says, "You want lower costs? No problem!" Purchase price goes down, saving the company millions, but it never shows up on the bottom line. Manufacturing
struggles with the new parts, increasing rework and overtime. Quality suffers… • Distort the figures. Use creative accounting. “Oh, we don’t count those as inventory anymore…..that material is now on consignment from our supplier.” The basic system did not change. Control methods agenda Integrating with lean manufacturing Ranking control methods (the strategy) Types of control methods Product vs. process Automatic vs. manual Control plan Control methods are a form of Kaizen Control methods • SPC • S.O.P • Type III corrective action = inspection: implementation of a short-term containment action that is likely to detect the defect caused by the error condition. Containments are typically audits or 100% inspection. • Type II corrective action = flag: improvement made to the process that will detect when the error condition has occurred. This flag will shut down the equipment so that the defect will not move forward. • Type I corrective action = countermeasure: improvement made to the process that will eliminate the error condition from occurring. The defect will never be created. This is also referred to as a long-term corrective action in the form of mistake-proofing or design changes. • Product monitoring SPC techniques (on Ys) • Precontrol (manual or automatic) • X-bar and R or X and MR charts (manual or automatic) • P and np charts (manual or automatic) • c and u charts (manual or automatic) • Process control SPC techniques (on Xs) • Mistake-proofing (automatic) • X-bar and R or X and MR (manual or automatic) • EWMA (automatic) • Cusum (automatic) • Realistic tolerancing (manual or automatic) The control plan is a living document that is used to document all your process control methods. It is a written description of the systems for controlling parts and processes (or services). The control plan, because it is a living document, should be updated to reflect the addition or deletion of controls based on experience gained by producing parts (or providing services). The immediate goal of the quality system (QS): During the control phase of the QS methodology:
• The team should 5S the project area. • The team should develop standardized work instructions. • The team should understand and assist with the implementation of process and product control systems. • The team should document all of the above and live by what they have documented. The long-term vision of the quality system — the company and all of its suppliers have a quality system that governs the ways in which products and services are bought, sold, and produced. • The company should be 5S in all areas. • The company should develop standardized work instructions and procedures. • The company should understand and assist with the implementation of process and product control systems. • The company should document all of the above and live by what they have documented. Introduction to statistical process control What is statistical process control (SPC)? SPC as a control method The goal and methodology Advantages and disadvantages Components of an SPC control chart Where to use SPC charts How to implement SPC charts Types of control charts and examples
SPC FLOWCHART Class exercise Introduction to SPC • SPC is the basic tool for studying variation and using statistical signals to monitor and improve process performance. This tool can be applied to any area: manufacturing, finance, sales, etc. Most companies perform SPC on finished goods (Ys) rather than process characteristics (Xs). The first step is to use statistical techniques to control our company's outputs. It is not until we focus our efforts on controlling those inputs (Xs) that control our outputs (Ys) that we realize the full gain of our efforts to increase quality and productivity and lower costs. • What is SPC? All processes have natural variability (due to common causes) and unnatural variability (due to special causes). We use SPC to monitor and/or improve our processes. Use of SPC allows us to detect special cause variation through out-of-control signals. These out-of-control signals cannot tell us why the process is out of control, only that it is. Control charts are the means through which process and product parameters are tracked statistically over time. Control charts incorporate upper and lower control limits that reflect the natural limits
of random variability in the process. These limits should not be compared to customer specification limits. Based on statistical principles, control charts allow for the identification of unnatural (nonrandom) patterns in process variables. When the control chart signals a nonrandom pattern, we know special cause variation has changed the process. The actions we take to correct nonrandom patterns in control charts are the keys to successful SPC usage. Control limits are based on establishing ±3 sigma limits for the Y or X being measured (a short worked sketch of the limit arithmetic appears at the end of this SPC material). Process improvement and control charts Benefits of control chart systems • Proven technique for improving productivity • Effective in defect prevention • Prevents unnecessary process adjustments • Provides diagnostic information • Provides information about process capability Control chart roadmap • Select the appropriate variable to control. • Select the data-collection point. (Note: if the variable cannot be measured directly, a surrogate variable can be identified.) • Select type of control chart. • Establish basis for rational subgrouping. • Determine appropriate sample size and frequency. • Determine measurement method/criteria. • Determine gauge capability. • Perform initial capability study to establish trial control limits. • Set up forms for collecting and charting data. • Develop procedures for collection, charting, analyzing, and acting on information. • Train personnel. • Institutionalize the charting process. Control chart types There are many types of control charts; however, the underlying principles of each are the same. The proper type is chosen using knowledge of both SPC and your process objectives. The chart type selection depends on: • Data type: attribute vs. variable • Ease of sampling; homogeneity of samples • Distribution of data: normal or nonnormal? • Subgroup size: constant or variable? • Other considerations Control charts for variables data Control charts for attribute data Analysis of patterns on control charts • One point outside the three-sigma limit • Two of three outside the two-sigma limit • Four of five outside the one-sigma limit
• Cycles • Trend • Stratification • Seven consecutive on one side of the center line Advantages of control chart systems: • Proven technique for improving productivity. • Effective in defect prevention. • Prevent unnecessary process adjustments. • Provide diagnostic information. • Provide information about process capability. • Can be used for both attribute and variable data types. Disadvantages of control chart systems: • Everyone must be well trained and periodically retrained. • Data must be gathered correctly. • Mean and range/standard deviation must be calculated correctly. • Data must be charted correctly. • Charts must be analyzed correctly. • Reactions to patterns in charts must be appropriate — every time! Precontrol charts — traditionally, precontrol has been perceived as an ineffective tool, and most quality practitioners still remain skeptical of its benefits. This view originated because the limits of the three precontrol regions are commonly calculated based on the process specifications, thus resulting in overreactions and inducing more variability into a process instead of reducing it. In the Six Sigma breakthrough strategy, precontrol is implemented after the improve phase. The zones are calculated based on the process after improvements are made, so its distribution is narrow and tight compared to the specification band. Specification limits are not used in calculating these zones, so we encounter units in the yellow or red zones before actual defects are produced.
Where to use SPC charts:
• When a mistake-proofing device is not feasible
• Identify processes with high RPNs from the FMEA
• Evaluate the "current controls" column of the FMEA to determine the gaps in the control plan. Does SPC make sense?
• Identify processes that are critical based on DOEs
• Place charts only where necessary based on project scope. If a chart has been implemented, do not hesitate to remove it if it is not value-added.
• Initially, the process outputs may need to be monitored. The goal: monitor and control process inputs and, over time, eliminate the need for SPC charts.
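Here is the worked sketch of the control limit arithmetic promised earlier. Computing trial 3-sigma limits for an X-bar and R chart is simple once the subgroup averages and ranges are in hand; the constants A2, D3, and D4 below are the published Shewhart values for subgroups of size 5, and the data are invented.

# Shewhart chart constants for subgroup size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

subgroups = [
    [10.1, 10.3, 9.9, 10.0, 10.2],
    [10.0, 9.8, 10.1, 10.2, 9.9],
    [10.2, 10.1, 10.0, 9.8, 10.3],
]  # hypothetical; in practice use 25+ subgroups before trusting trial limits

xbars = [sum(s) / len(s) for s in subgroups]
ranges = [max(s) - min(s) for s in subgroups]
xbarbar = sum(xbars) / len(xbars)  # grand average (center line)
rbar = sum(ranges) / len(ranges)   # average range

print(f"X-bar chart: UCL = {xbarbar + A2 * rbar:.3f}, "
      f"CL = {xbarbar:.3f}, LCL = {xbarbar - A2 * rbar:.3f}")
print(f"R chart: UCL = {D4 * rbar:.3f}, CL = {rbar:.3f}, LCL = {D3 * rbar:.3f}")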
Pareto Histogram Cause-and-effect diagram Interpreting the results
Definition of lean manufacturing — a systematic approach to manufacturing which is based on the premise that anywhere work is being done, waste is being generated. A vehicle through which organizations can identify and reduce waste. A manufacturing methodology that will facilitate and foster a living quality system. The goal of lean manufacturing is total elimination of waste. Poka-yoke (mistake-proofing) Planning for waste elimination • Establish "permanent" control to prevent its reoccurrence • The vision: continuous elimination of waste, moving from the traditional mindset to the lean mindset: • Infrequent setups and long runs → quick setups and short runs • Functional focus → product focus • If it ain't broke, don't fix it → fix it so it does not break • Specialized workers, engineers, and leaders → multifunctionally skilled people • Good enough → never good enough, continual improvement • Run it, repair it → do it right the first time • Layoff → new opportunities • Management directs → leaders teach • Penalize mistakes → retrain • Make the schedule → make quality a priority There are seven elements of waste; they are waste of: • Correction • Overproduction • Processing • Conveyance • Inventory • Motion • Waiting The first step toward waste elimination is identifying it. Black belt projects should focus efforts on one or more of these areas. 5S workplace organization — to ensure your gains are sustainable, you must start with a firm foundation. 5S standards are the foundation that supports all the phases of lean manufacturing. The system can only be as strong as the foundation it is built on. The foundation of a production system is a clean and safe work environment. Its strength is contingent upon the employee and company commitment
to maintaining it. (As a black belt, you set the goals high and accept nothing less. Each operator must understand that maintaining these standards is a condition of employment.) Foundation of lean manufacturing — 5S overview 1. Sorting (decide what is needed). To sort out necessary and unnecessary items. To store oft-used items in the work area, store infrequently used items away from the work area, and dispose of items that are not needed. 2. Storage (straighten up the workplace; arrange the items needed). To arrange all necessary items. To have a designated place for everything. A place for everything and everything in its place. 3. Shining (sweep and cleanliness). To keep your area clean on a continuing basis. 4. Standardize. To maintain the workplace at a level that uncovers and makes problems obvious. To continuously improve the plant by continuous assessment and action. 5. Sustaining (training and disciplined culture). To maintain our discipline, we need to practice and repeat until it becomes a way of life. Benefits of 5S implementation • A cleaner workplace is a safer workplace. • Contributes to how we feel about our product, process, our company, and ourselves. • Provides a customer showcase to promote our business. • Product quality will improve, especially through reduced contamination. • Efficiency will increase. Some 5S focusing tools • "Red tag" technique (visual clearing up). This is a vital clearing-up technique. As soon as a potentially unnecessary item is identified, it is marked with a red tag so that anybody can see clearly what may be eliminated or moved. The use of red tags can be one secret to a company's survival, because it is a visible way to identify what is not needed in the workplace. Red tags ask why an item is in a given location and support the first "S" — sort. Tips for tagging: • We all tend to look at items as personal possessions. They are company possessions. We are the caretakers of the items. • An outsider can take the lead in red tagging. Plant people take advantage of these "fresh eyes" by creating an atmosphere where they will feel comfortable in questioning what is needed. • Tag anything not needed. One exception: do not red tag people unless you want to be red tagged yourself! • If in doubt, tag it! • Before and after photographs • Improve area by area, each one completely • Clear responsibilities • Daily cross-department tours
• Schedule all critical customers to visit • Regular assessments and "radar" metrics • Red tag technique. The red tag technique involves the following steps: 1. Establish the rules for distinguishing between what is needed and what is not. 2. Identify needed and unneeded items and attach red tags to all potentially unneeded items. Write the specific reason for red tagging and sign and date each tag. 3. Remove red tag items and temporarily store them in an identified holding area. 4. Sort through the red tag items; dispose of those that are truly superfluous. Other items can be eliminated at an agreed interval when it is clear that they have no use. Ensure that all stakeholders agree. 5. Determine ways to improve the workplace so that unnecessary items do not accumulate. 6. Continue to red tag regularly. Standardized work — the one best way to perform each operation, identified and agreed upon through general consensus (not majority rules). This becomes the standard work procedure. The affected employees should understand that once they have defined the standard, they will be expected to perform the job according to that standard. It is imperative that we all understand the notion: variation = defects. Standardized work leads to reduced variation. Prerequisites for standardized work Standardized workflow Kaizen — continual improvement. The philosophy of incremental continual improvement: every process can and should be continually evaluated and improved in terms of time required, resources used, resultant quality, and other aspects relevant to the process. The BB's job, simply stated, is focused Kaizen. Our methodology for Kaizen is the Six Sigma breakthrough strategy — DMAIC. Control is only sustained long term when the 5Ss and standardized work are in place. • Kaizen rules • Keep an open mind to change • Maintain a positive attitude • Never leave in silent disagreement • Create a blameless environment • Practice mutual respect every day • Treat others as you want to be treated • One person, one vote — no position, no rank • No such thing as a dumb question • Understand the thought process and then the Kaizen elements • Takt time • Cycle time • Work sequence
• Standard WIP • Takt time determination • Kaizen process steps • Step 1. Create flowchart with parts and subassemblies • Step 2. Calculate takt time = net available time ÷ customer demand (see the worked example at the end of this lean material). • Step 3. Measure each operation — each assembly and subassembly as they are. To the extent an operator has to go to an assembly for something, measure walk time. Establish a baseline using time observation forms; note any setup time. • Step 4. Do a baseline standard work flow chart (should look like a spaghetti chart). • Step 5. Do a baseline percent loading chart. Review for each operator where the waste and walk time are. Look at this in close relationship to the process. • Step 6. Review the 5Ss. • Step 7. Consolidate and accumulate jobs to get them as close to takt time as possible. Work with the operators. • Step 8. Observe, measure, and modify the new flow process. This should be a one-piece flow process if we are producing to takt time. • Step 9. Complete the one-piece flow process and redo all baseline charts (you may consider overlaying these new results on top of the older data to display the improvement). Make a list of things to complete. • Step 10. Prepare presentation, share results. Kaizen presentation guidelines • Prepare overheads or a slide show for a 20-minute presentation • Ensure your presentation includes all of the Kaizen steps • Use whatever props or other devices best explain your achievement • Include 10 minutes for Q and A • Each team member should/must participate in the presentation • Management needs to see and hear about the results of the team's success JIT concepts (just in time) Kanban — a pull inventory system Poka-yoke — a methodology that helps build quality into the product and allows only good product to go to the next operator or customer. It focuses on the elimination of human errors. Key elements of mistake-proofing: • Distinction between error and defect • Source inspection • 100% inspection • Immediate action • "Red flag" conditions • Control/feedback logic • Guidelines for mistake-proofing Mistake-proofing strategies • Do not make surplus products (high inventory makes poor quality difficult to see)
• Eliminate, simplify, or combine operations • Use a transfer rather than process batch strategy • Involve everyone in error and defect prevention (standard practices, daily improvements, and mistake-proofing) • Create an environment that emphasizes quality work, promotes involvement and creativity, and strives for continual improvement Advantages of mistake-proofing • No formal training programs required • Eliminates many inspection operations • Relieves operators from repetitive tasks • Promotes creativity and value-adding operations • Contributes to defect-free work • Effectively provides 100% internal inspection without the associated problems of human fatigue and error
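Here is the worked takt time example promised in the Kaizen steps above; the shift structure and demand figures are invented.

# Takt time = net available time / customer demand
shift_minutes = 8 * 60              # one 8-hour shift
break_minutes = 30 + 2 * 10         # lunch plus two breaks (assumed)
net_available_s = (shift_minutes - break_minutes) * 60  # in seconds

daily_demand = 430                  # units the customer requires per shift

takt_seconds = net_available_s / daily_demand
print(f"Takt time = {takt_seconds:.0f} seconds per unit")
# Each operation's cycle time must be balanced to come in at or under takt.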
CONTROL PLANS
A control plan is a logical, systematic approach for finding and correcting root causes of out-of-control conditions and will be a valuable tool for process improvement. A key advantage of the reaction plan form is its use as a troubleshooting guide for operators. A systematic guide of what to look for during upset conditions is valuable on its own. Key items of concern are: • What elements make up a control plan? • Why should we bother with them? • Who contributes to their preparation? • How do we develop one? • When do we update them? • Where should the plan reside? Control plan strategy • Operate our processes consistently on target with minimum variation. • Minimize process tampering (overadjustment). • Assure that the process improvements that have been identified and implemented become institutionalized. ISO 9000 can assist here. • Provide for adequate training in all procedures. • Include required maintenance schedules. • Factors impacting a good control plan. Control plan components • Process map steps • Key process output variables, targets, and specs • Key and critical process input variables with appropriate working tolerances and control limits • Important noise variables (uncontrollable inputs) • Short- and long-term capability analysis results • Designated control methods, tools, and systems • SPC
• Automated process control • Checklists • Mistake-proofing systems • Standard operating procedures • Workmanship standards Documenting the control plan • FMEA • Cause-and-effect matrix • Process map • Multi-vari studies • DOE Reaction plan and procedures • Control methods identify the person responsible for control of each critical variable and details about how to react to out-of-control conditions. • Control methods include a training plan and process auditing system, e.g., ISO 9000. • Complicated methods can be referenced by document number and location; changes in the process require changes to the control method. • Actions should be the responsibility of people closest to the process. • The reaction plan can simply refer to an SOP and identify the person responsible for the reaction procedure. • In all cases, suspect or nonconforming product must be clearly identified and quarantined. Questions for control plan evaluation. Key process input variables (Xs): • How are they monitored? • How often are they verified? • Are optimum target values and specifications known? • How much variation is there around the target value? • What causes the variation in the X? • How often is the X out of control? • Which Xs should have control charts? • Uncontrollable (noise) inputs. What are they? Are they impossible or impractical to control? Do we know how to compensate for changes in them? How robust is the system to noise? • Standard operating procedures — do they exist? Are they simple and understood? Are they being followed? Are they current? • Is operator training performed and documented? • Is there a process audit schedule? Maintenance procedures • Have critical components been identified? • Does the schedule specify who, what, and when? • Where are the manufacturer's instructions? • Do we have a troubleshooting guide? • What are the training requirements for maintenance? • What special equipment is needed for measurement? What is the measurement capability?
• Who does the measurement? How often is a measurement taken? How are routine data recorded?
• Who plots the control chart (if one is used) and interprets the information?
• What key procedures are required to maintain control?
• What is done with product that is off spec?
• How is the process routinely audited?
• Who makes the audit? How often? How is it recorded?
Control plan checklist
• Documentation package
• Sustaining the gains
Issues in transitioning a project
• Assure your project is complete enough to transition.
• No loose ends — have at least a plan (project action plan) for everything not finalized.
• Start early in your project to plan for transitioning.
• Identify team members at the start of the project.
• Remind them that they are representatives of a larger group.
• Communicate regularly with people within the impacted area and with those outside it whom the changes may affect.
• Display, update, and communicate your project results in the impacted area during all phases. Remember: no surprises; seek buy-in during all phases.
• Hold regular updates with the impacted area, assuring that its concerns are considered by your team.
• When possible, get others involved to help; you are not a one-person show and do not have all the answers.
• Use data collection.
• Idea generation (brainstorming events).
• Create buy-in with the entire workcell/targeted area.
• Project action plan.
Project action plan (suggested format)
• Sustaining the gain.
• Changes must be permanent.
• Changes must be built into the daily routine.
• A sampling plan and measurement system must be established for monitoring.
• Responsibilities must be clear, accepted, and, if necessary, built into roles and responsibilities.
• Develop and update procedures.
• Train all involved.
• Action plan solidified and agreed upon.
Sustaining the gain — product changes
• Revise drawings by submitting EARs
• Work with process, test, and product engineers
Process changes
• Physically change the process flow (5S the project area).
• Develop visual indicators.
• Establish or buy new equipment to aid assembly or test.
• Poka-yoke wherever possible, including forms.
• Procedures (standardized work instructions).
• Develop new procedures or revise existing ones.
• Notify quality assurance of any new procedure so it can be incorporated in internal audits.
• Provide QA with a copy of the standardized work instructions.
• Measurements (visual indicators).
• Build the posting of key metric updates into the process.
• Make it part of someone’s regular job to do timely updates.
• Make it someone’s job to review the metric and take action when needed.
• Training.
• Train everyone in the new process (do not leave until there is full understanding).
Aspects of control
• Benchmarks for world-class performance.
• Quality improvement rate of 68% per year.
• Productivity improvement rate of 2% per month.
• Lead time is less than ten times the value-added time.
• Continuous improvement culture.
• Total employee involvement.
• Reward and recognition.
• Celebration.
11 Six Sigma for Green Belts
The intent of this training in the implementation process of Six Sigma is to familiarize the individuals who are about to assist the black belts with resolving projects that will improve customer satisfaction and the financial position of the organization. To be sure, this is a more intensive training than the orientation, as the material becomes more technical in nature and more specific as to the tools and their applications. After all, the green belt is expected to actually do the work under the direct supervision of the black belt. The green belt needs to know not only why something is being done (elementary level) and how to do it, but also how it applies to his specific job. It is often suggested that simple simulated exercises be sprinkled throughout the course to make the key points more emphatic. Traditional exercises may involve defining a process and improving that process; providing five to ten operational definitions in that process; working with variable and attribute data; calculating the DPO; working with histograms, box plots, scatter plots, Pareto charts, and DOE setups; running the experiment with the aid of software; and several others. Because organizations and their goals are quite different, we will provide the reader with a suggested outline of the training material for this green belt session. It should last five days and be taught by a black belt. The level of difficulty depends on the participants. Detailed information may be drawn from the first six volumes of this series.
INSTRUCTIONAL OBJECTIVES — GREEN BELT
RECOGNIZE
Customer Focus
• Provide a definition of the term customer satisfaction.
• Understand the need–do interaction and how it relates to customer satisfaction and business success.
• Provide examples of the y and x terms in the expression y = f(x).
• Interpret the expression y = f(x).
Business Metrics
• State at least three problems (or severe limitations) inherent in the current cost-of-quality (COQ) theory.
• Define the nature of a performance metric.
• Identify the driving need for performance metrics.
• Explain the benefit of plotting performance metrics on a log scale.
• Provide a listing of at least six key performance metrics.
• Identify and define the principal categories associated with quality costs.
• Compute the COQ given the necessary background data.
• Provide a detailed explanation of how a defect can impact the classical COQ categories.
• Identify the fundamental contents of a performance metrics manual.
• Recognize the benefits of a metrics manual.
• Understand the purpose and benefits of improvement curves.
• Explain how a performance metric improvement curve is used.
• Explain what is meant by the phrase Six Sigma rate of improvement.
• Explain why a Six Sigma improvement curve can create a level playing field across an organization.
Six Sigma Fundamentals
• Recognize the need for change and the role of values in a business.
• Recognize the need for measurement and its role in business success.
• Identify the parts-per-million defect goal of Six Sigma.
• Recognize that defects arise from variation.
• Define the phases of breakthrough in quality improvement.
• Identify the values of a Six Sigma organization as compared to a four sigma business.
• Understand why inspection and test are nonvalue-added to a business and serve as a roadblock to achieving Six Sigma.
• Understand the difference between the terms process precision and process accuracy.
• Describe how every occurrence of a defect requires time to verify, analyze, repair, and reverify.
• Understand that work in process (WIP) is highly correlated to the rate of defects.
• Rationalize the statement “The highest-quality producer is the lowest-cost producer.”
• Understand that global benchmarking has consistently revealed four sigma as average while best-in-class is near the Six Sigma region.
• Draw first-order conclusions when given a global benchmarking chart.
• State the general findings that tend to characterize or profile a four sigma organization.
• Recognize the cycle-time, reliability, and cost implications when interpreting a sigma benchmarking chart.
• Provide a qualitative definition and graphical interpretation of standard deviation.
• Understand the driving need for breakthrough improvement vs. continual improvement.
• Provide a brief description of the four phases of process breakthrough (i.e., measure, analyze, improve, control).
• Understand the basic nature of statistical process control charts and the role they play during the control phase of breakthrough.
• Explain how statistically designed experiments can be used to achieve the major aims of Six Sigma from the point of view of quality, cost, and cycle-time.
• Provide a brief history of Six Sigma and its evolution.
• Understand the need for measuring those things that are critical to the customer, business, and process.
• Define the various facets of Six Sigma and why Six Sigma is important to a business.
• Define the magnitude of difference between three, four, five, and Six Sigma.
• Provide a very general description of how a process capability study is conducted and interpreted.
• Understand the difference between the idea of benchmark, baseline, and entitlement cycle time.
• Provide a brief description of the outcome 1 – Y.rt.
• Recognize that the quantity 1 + (1 – Y.rt) approximates the number of units that must be produced to extract one good unit from a process.
• Describe what is meant by the term mean time between failures (MTBF).
• Interpret the temporal failure pattern of a product using the classical bathtub reliability curve.
• Explain how process capability impacts the pattern of failure inherent in the infant mortality rate.
• Provide a rational definition of the term latent defect and explain how such a defect can impact product reliability.
• Explain how defects produced during manufacture influence product reliability, which, in turn, influences customer satisfaction.
• Understand the fundamental nature of quantitative benchmarking on a sigma scale of measure.
• Recognize that the sigma scale of measure is at the opportunity level, not at the system level.
• Interpret an array of sigma benchmarking charts.
• Provide a brief description of the five sigma wall, what it is, why it exists, and how to get over it.
• Define the two primary components of process breakthrough.
• Provide a synopsis of what a statistically designed experiment is and what role it plays during the improvement phase of breakthrough.
• Understand that the term sigma is a performance metric that only applies at the opportunity level.
• Understand the role of questions in the context of management leadership.
• Define the three primary sources of variation in a product.
• Describe the general methodologies that are required to progress through the hierarchy of quality improvement.
• Understand the key success factors related to the attainment of Six Sigma.
• Understand the basic elements of a sigma benchmarking chart. • Interpret a data point plotted on a sigma benchmarking chart. • Explain how the sigma scale of measure could be employed for purposes of strategic planning. • Understand how a Six Sigma product without a market will fail, while a Six Sigma product in a viable market is virtually certain to succeed. • Explain the interrelationship between the terms process capability, process precision, and process accuracy.
DEFINE Nature of Variables • Explain the term leverage variable and its implications for customer satisfaction and business success. • Explain what a dependent variable is and how this type of variable fits into the Six Sigma breakthrough strategy. • Explain what an independent variable is and how this type of variable fits into the Six Sigma breakthrough strategy. • Provide a specific explanation of the term blocking variable and explain when such variables should be used in an experiment. Opportunities for Defects • Provide a rational definition of a defect. • Compute the defect-per-unit metric given a specific number of defects and units produced. • Provide a definition of the term opportunity for defect, recognizing the difference between active and passive opportunities. • Recognize the difference between uniform and random defects. CTX Tree • Define the term critical to satisfaction characteristic (CTS) and its importance to business success. • Define the term critical to quality characteristic (CTQ) and its importance to customer satisfaction. • Define the term critical to process characteristic (CTP) and its importance to product quality. Process Mapping • Construct a process map using standard mapping tools and symbols. • Explain how process maps can be linked to the CT tree to identify problem areas.
• Explain how process maps can be used to identify constraints and determine resource needs.
• Define the key elements of a process map.
Process Baselines
• Conduct a complete baseline capability analysis (using a software package), interpret the results, and make valid recommendations.
Six Sigma Projects
• Define a Six Sigma black belt project reporting and review process.
• Interpret each of the action steps associated with the four phases of process breakthrough.
• Explain why the planning questions are so important to project success.
• Explain how the generic planning guide can be used to create a project execution cookbook.
• Create a set of criteria for selecting and scoping Six Sigma black belt projects.
Six Sigma Deployment
• Provide a brief description of a Six Sigma black belt (SSBB).
• Describe the role and responsibilities of a SSBB.
• Provide a brief description of a Six Sigma champion (SSC).
• Describe the roles and responsibilities of a SSC.
• Provide a brief description of a Six Sigma master black belt (SSMBB).
• Describe the roles and responsibilities of a SSMBB.
• Understand the SSBB instructional curriculum.
• Recognize that the SSBB curriculum sequence is correlated to the Six Sigma breakthrough strategy.
• Recognize the importance and provide a description of the plan-train-apply-review (PTAR) learning process.
• Provide a brief description of the key implementation principles and identify principal deployment success factors.
• List all of the planning criteria for constructing a Six Sigma implementation and deployment plan.
• Construct a generic milestone chart that identifies all of the activities necessary for successfully managing the implementation of Six Sigma.
• Develop a business model that incorporates and exploits the benefits of Six Sigma.
MEASURE Scales of Measure • Explain why survey questions that utilize the five-point Likert scale must often be reduced to two categories during analysis.
• Identify the four primary scales of measure and provide a brief description of their unique characteristics. Data Collection • Provide a specific explanation of the term replicate in the context of a statistically designed experiment. • Explain why the sequence of order in which an experiment takes place must be randomized and what can happen when this is not done. Measurement Error • Explain how a statistically designed single-factor experiment can be used to study and control for the influence of measurement error. • Explain how full factorial experiments can be employed to study and control for the influence of measurement error. • Explain how fractional factorial experiments can be used to study and control for the influence of measurement error. • Describe the role of measurement error studies during the measurement phase of breakthrough. Statistical Distributions • Construct and interpret a histogram for a given set of data. • Construct a histogram for a set of normally distributed data and locate the data on a normal probability plot. • Understand what a normal distribution, or typical normal histogram, is and how it is used to estimate defect probability. • Identify the circumstances under which the Poisson distribution could be applied to the analysis of product or transactional defects. • Understand the applied differences between the Poisson and binomial distributions. • Construct a histogram for a set of nonnormal data and isolate a transformation that will force the data to a normal condition. • Understand what the t distribution is and how it changes as degrees of freedom change. • Understand what the F distribution is and how it can be used to test the hypothesis that two variances are equal. Static Statistics • Provide a qualitative definition and graphical interpretation of variance. • Compute the sample standard deviation given a set of data. • Compute the mean, standard deviation, and variance for a set of normally distributed data.
• Explain why a sample size of n = 30 is often considered ideal (in the instance of continuous data). • Provide a qualitative definition and graphical interpretation of the standard Z transform. • Compute the corresponding Z value of a specification limit given an appropriate set of data. • Convert a Z value into a defect probability given a table of areas under the normal curve. • Provide a graphical understanding of standard deviation and explain why it is so important to Six Sigma work. • Compute Z.usl and Z.lsl for a set of normally distributed data and then determine the probability of defect. • Compute Z.usl and Z.lsl for a set of nonnormal data with upper and lower specifications and then determine the probability of defect. Dynamic Statistics • Compute and interpret the total, inter-, and intragroup sums of squares for a given set of data. • Explain what phenomenon could account for a differential between the short-term and long-term standard deviations. • Provide a practical explanation of what could account for a differential between a short-term Z value and a long-term Z value. • Explain the difference between inherent capability and sustained capability in terms of the standard deviation. • Describe the role and logic of rational subgrouping as it relates to the short-term and long-term standard deviations. • Explain the difference between dynamic mean variation and static mean offset. • Explain why the term instantaneous reproducibility (i.e., process precision) is associated with the short-term standard deviation. • Explain why the term sustained reproducibility is associated with the long-term standard deviation. • Recognize the four principal types of process centering conditions and explain how each impacts process capability. • Compute and interpret the within, between, and total sums of squares for a set of normally distributed data organized into rational subgroups.
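To make the Z-transform objectives above concrete, the instructor may find a short software demonstration helpful. The following is a minimal Python sketch (the data and specification limits are invented for illustration, and the scipy package is assumed to be available):

    # Minimal sketch: estimating Z.usl, Z.lsl, and the probability of a
    # defect for normally distributed data. Data and specs are invented.
    import numpy as np
    from scipy.stats import norm

    data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9])
    usl, lsl = 10.5, 9.5              # assumed specification limits

    mean = data.mean()
    s = data.std(ddof=1)              # sample standard deviation

    z_usl = (usl - mean) / s          # distance to upper spec in sigma units
    z_lsl = (mean - lsl) / s          # distance to lower spec in sigma units

    # Tail areas beyond each specification give the defect probability
    p_defect = norm.sf(z_usl) + norm.sf(z_lsl)
    print(f"Z.usl={z_usl:.2f}, Z.lsl={z_lsl:.2f}, P(defect)={p_defect:.4f}")

The same tail-area lookup is what a table of areas under the normal curve provides; the software simply removes the interpolation step.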
ANALYZE Six Sigma Statistics • Identify the key limitations of the performance metric final yield (i.e., output/input). • Identify the key limitations of the performance metric first-time yield (Y.ft).
• Compute the throughput yield (Y.tp) given an average first-time yield and the number of related defect opportunities.
• Provide a rational explanation of the differences between product yield and process yield.
• Explain why the performance metric rolled-throughput yield (Y.rt) represents the probability of zero defects.
• Compute the probability of zero defects (Y.rt) given a specific number of defects and units produced.
• Understand the impact of process capability and complexity on the probability of zero defects.
• Construct a benchmarking chart using the product report option in the Minitab software program.
• List some sources that could offer the data necessary to estimate a sigma capability.
• Explain how throughput yield (Y.tp) and opportunity counts can be employed to establish the sigma capability of a product/process.
• Compute the normalized yield (Y.norm) given a rolled-throughput yield (Y.rt) value and a specific number of defect opportunities.
• Compute the total defects-per-unit (TDPU) value given a rolled-throughput yield (Y.rt) value.
• Provide a brief description of how one would implement and deploy the performance metric rolled-throughput yield (Y.rt).
• Illustrate how a system-level DPU goal can be flowed down through a product/process hierarchy to assess the required CTQ capability.
• Illustrate how a series of CTQ capability values can be flowed up through a product/process hierarchy to establish the system DPU.
Process Metrics
• Compute and interpret the Cp index of capability.
• Compute and interpret the Cpk index of capability.
• Explain the theoretical and practical differences between Cp, Cpk, Pp, and Ppk.
• Explain why a Z can be used to measure process capability and explain its relationship to indices such as Cp, Cpk, Pp, and Ppk.
• Recognize that a 1.5 sigma shift between sampling periods is typical and therefore can be used when quantification is not possible.
• Understand the general guidelines for adjusting a Z value for the influence of shift and drift (when to add or subtract the shift value).
• Compute the Cp and Cpk indices for a set of normally distributed data with upper and lower performance limits.
• Explain why Cpk values will often not correlate to first-time yield information.
• Compute and interpret Z.st and Z.lt for a set of normally distributed data organized into rational subgroups.
• Compute and interpret Z.shift (static and dynamic) for a set of normally distributed data organized into rational subgroups. • Compute and interpret Cp, Cpk, Pp, and Ppk. • Explain how Cp, Cpk, Pp, and Ppk correlate to the four principal types of process centering conditions. • Show how Z.st, Z.lt, Z.shift (dynamic), and Z.shift (static) relate to Cp, Cpk, Pp, and Ppk. • Create and interpret a standardized computer process characterization report. • Explain the difference between static mean offset and dynamic mean variation and how they impact process capability. Diagnostic Tools • Understand, construct, and interpret a multi-vari chart, then identify areas of application. Simulation Tools • Create a series of random normal numbers with a given mean and variance. • Create k sets of subgroups where each subgroup consists of n samples from a normal distribution with a given mean and variance. • Create a series of random lognormal numbers and then transform the data to fit a normal density function. Statistical Hypotheses • Explain how a practical problem can be translated into a statistical problem and the benefits of doing so. • Explain what a statistical hypothesis is and why it is created and show the forms it may take in terms of the mean and variance. • Define the concept of alpha risk and provide several examples that illustrate its practical consequences. • Define the concept of statistical confidence and explain how it relates to alpha risk. • Define the concept of beta risk and provide several examples that illustrate its practical consequences. • Provide a detailed understanding of the contrast distribution and how it relates to the alternate hypothesis. • Explain what is meant by the phrase statistically significant difference and recognize that such differences do not imply practical difference. • Construct a truth table that illustrates how the null and alternate hypotheses interrelate with the concepts of alpha risk and beta risk. • Recognize that the extent of difference required to produce practical benefit is referred to as delta.
• Explain what is meant by the term power of the test and describe how it relates to the concept of beta risk. • Understand how sample size can impact the extent of decision risk associated with the null and alternate hypotheses. • Establish the appropriate sample size for a given situation when presented with a sample size table. • Describe the dynamic interrelationships between alpha, beta, delta, and sample size from a statistical as well as practical perspective. • List the essential steps for successfully conducting a statistically-based investigation of a practical real-world problem. • Provide a detailed understanding of the null distribution and how it relates to the null hypothesis. Continuous Decision Tools • Provide a general description of the term experimental error and explain how it relates to the term replication. • Provide a general description of one-way analysis of variance and discuss the role of sample size in it. • List the principal assumptions underlying the use of ANOVA and provide a general understanding of their practical impact if they are violated. • Recognize that when the intratreatment replicates are correlated, there is an adverse impact on experimental error. • Demonstrate how the total variation in single-factor experiments can be characterized analytically and graphically. • Demonstrate how the experimental error in an experiment can be partitioned from the total error for independent consideration. • Demonstrate how the intergroup variation in an experiment can be partitioned from the total error for independent consideration. • Compute the total sums of squares, as well as the intragroup and intergroup sums of squares for a single-factor experiment. • Define how degrees of freedom are established for each source of variation in a single-factor experiment. • Organize the sums of squares and degrees of freedom into an ANOVA table and compute the mean square ratios. • Determine the random sampling error probability related to any given mean square ratio and illustrate the effect of sample size. • Compute all post-hoc comparisons (i.e., pairwise t tests) in the instance that an F value proves to be statistically significant. • Compute and interpret the relative effect (i.e., sensitivity) of an experimental factor, create a main effects plot, and set tolerances. • Provide a conceptual understanding of statistical confidence interval and how it relates to the notion of random sampling error. • Understand what the distribution of sample averages is and how it relates to the central limit theorem.
• Explain what the standard error of the mean is and demonstrate how it is computed.
• Compute the tail area probability for a given Z value that is associated with the distribution of sample averages.
• Compute the 95% confidence interval for the mean of a small data set and explain how it may be applied in practical situations.
• Rationalize the difference between a one-sided test of the mean and a two-sided test of the mean.
• Understand what the distribution of sample differences is and how it can be employed for testing statistical hypotheses.
• Compute the 95% confidence interval for the mean of sample differences given two samples of normally distributed data.
• Understand the nature of one- and two-sample t tests and apply these tests to an appropriate set of data.
• Compute and interpret the 95% confidence interval from a sample variance using the chi-square distribution.
• Explain how the 95% confidence interval from a sample variance can be used to test the hypothesis that two variances are equal.
Discrete Decision Tools
• Provide a brief explanation of the chi-square statistic and the conditions under which it can be applied.
• Understand how the probability of a given chi-square value can be determined.
• Recognize that the chi-square statistic can be employed as a goodness-of-fit test as well as a test of independence.
• Compute the expected cell frequencies for any given contingency table.
• Compute the chi-square statistic for a 2 × 2 contingency table and determine the probability of chance sampling error.
• Determine the extent of association for a 2 × 2 contingency table using the contingency coefficient.
• Compute the chi-square statistic for an n-way contingency table and determine the probability of chance sampling error.
• Illustrate how the chi-square statistic and cross-tabulation can be utilized in the analysis of surveys.
• List and describe the principal sections of a customer satisfaction survey and how they can be used to link the process to the customer.
• Recognize that the cross-tabulation of two classification variables, each with two categories, is referred to as a 2 × 2 contingency table.
• Explain how to establish the degrees of freedom associated with any contingency table.
• Construct a 95% confidence interval for a Poisson mean and discuss how this can be used to test hypotheses about Poisson means.
• Understand how to calculate the standard deviation for a set of data selected from a binomial distribution.
• Compute the 95% confidence interval for a proportion and explain how it can be used to test hypotheses about proportions. • Understand the nature of discontinuity and how to apply Yates correction to compensate for this effect. • Recognize that the square root of a chi-square is equal to Z for the special case where df = 1.
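A brief demonstration ties the discrete decision tools together. The sketch below runs a chi-square test of independence on a 2 × 2 contingency table with the Yates continuity correction (appropriate here because df = 1); the counts are invented for illustration:

    # Minimal sketch: chi-square test of independence on a 2 x 2 table.
    import numpy as np
    from scipy.stats import chi2_contingency

    #                     pass  fail
    observed = np.array([[ 42,    8],    # shift A
                         [ 30,   20]])   # shift B

    # correction=True applies the Yates continuity correction
    chi2, p, df, expected = chi2_contingency(observed, correction=True)

    print("expected cell frequencies:\n", expected)
    print(f"chi-square={chi2:.3f}, df={df}, p-value={p:.4f}")

A small p-value indicates that shift and outcome are not independent, i.e., a statistically significant association, which may or may not be a practically significant one.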
IMPROVE Experiment Design Tools • Provide a general description of a statistically designed experiment and what such an experiment can be used for. • Recognize the principal barriers to effective experimentation and outline several tactics that can be employed to overcome such barriers. • Describe the two primary components of an experimental system and their related subelements. • Explain the primary differences between a random-effects model and a fixed-effects model. • Identify the four principal families of experimental designs and what each family of designs is used for. • Outline a general strategy for conducting a statistically designed experiment and the resources needed to support its execution and analysis. • Provide a specific explanation of the term confounding and identify several ways to control for this situation. • State the major limitations associated with the one-factor-at-a-time approach to experimentation and offer a viable alternative. • Explain how the settings (i.e., levels) of an experimental factor can significantly influence the outcome of an experiment. • Recognize that the most powerful application of modern statistics cannot rescue a poorly designed experiment. • Explain the term full factorial experiment and how it differs from a fractional factorial experiment. • Describe the overriding limitations of the classical test plan when two factors are involved and state several advantages of a full factorial design. • Show at least four ways that a two-factor, two-level full factorial design matrix can be displayed and communicated. • Understand the added value of a balanced and orthogonal design and the practical implications when these properties are not present. • Construct the vectored columns for a two-factor, two-level full factorial design, given Yates standard order. • Compute the relative effect for each experimental effect and display the results on a Pareto chart. • Design and conduct a two-factor, multilevel full factorial experiment and interpret the outcome from a statistical and practical perspective.
• Provide a general description of a fractional factorial experiment and the inherent advantages that fractional arrays offer. • Understand why third-order and higher effects are most often statistically and practically insignificant. • Create a half fraction of a full factorial experiment by sorting on the highest-order interaction and then discern the pattern of confounding. • Recognize how an unreplicated fractional factorial design can be folded into a full factorial design with replication. • List the unique attributes associated with fractional factorial designs of resolution III, IV, and V. • Explain what happens to the experimental error term when a factor is collapsed out of the matrix by folding. • Explain how Plackett–Burman experimental designs are used and discuss their unique strengths and weaknesses. • Construct and interpret a main-effects plot for a fractional factorial experiment using the response means as a basis for the plot. • Construct and interpret a main-effects plot for a fractional factorial experiment using the response variances as a basis for the plot. • Compute the sums of squares associated with each experimental effect in a fractional factorial experiment. • Create an ANOVA table and compute the mean square ratio for each experimental effect in a fractional factorial experiment. • Determine the random sampling error probability for any given MSR in a fractional factorial experiment. • Compute the relative effect for each experimental effect in a fractional factorial experiment and display the results in a Pareto chart. • Explain the phrase hidden replication and understand that this phenomenon does not preclude the a priori consideration of sample size. • Explain the phrase column contrast and show how it can be used to establish the factor effect and the related sums of squares. • Construct and interpret a main effects plot for a two-factor two-level experiment and display the 95% confidence intervals on the plot. • Construct and interpret an interaction plot for a two-factor, two-level experiment and display the 95% confidence intervals on the plot. • Compute the sums of squares associated with each experimental effect in a two-factor, two-level full factorial experiment. • Create an ANOVA table and compute the mean squares ratios for each experimental effect in a two-factor, two-level full factorial experiment. • Determine the random sampling error probability for any given mean square ratio in a two-factor, two-level full factorial experiment. • Implement center point within a two-factor, two-level full factorial experiment and estimate whether there is any statistically significant curvature. Robust Design Tools Nothing special.
Empirical Modeling Tools Nothing special. Tolerance Tools Nothing special. Risk Analysis Tools Nothing special. DFSS Principles Nothing special.
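Before turning to the control objectives, the factorial-design material above can be grounded with a small demonstration. The sketch below lays out a two-factor, two-level full factorial in Yates standard order and estimates the main effects and the interaction from column contrasts; the response values are invented for illustration:

    # Minimal sketch: a 2x2 full factorial in Yates standard order.
    import numpy as np

    # Factor columns: -1 = low level, +1 = high level
    A  = np.array([-1, +1, -1, +1])
    B  = np.array([-1, -1, +1, +1])
    AB = A * B                        # interaction column

    y = np.array([45.0, 52.0, 48.0, 61.0])   # one response per run (invented)

    # Effect = mean response at the high level minus mean at the low level
    for name, col in [("A", A), ("B", B), ("AB", AB)]:
        effect = y[col == +1].mean() - y[col == -1].mean()
        print(f"effect of {name}: {effect:+.1f}")

With replication, the same column contrasts feed the sums of squares and the ANOVA table described in the objectives above.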
CONTROL
Precontrol Tools
• Develop a precontrol plan for a given CTQ and explain how such a plan can be implemented.
• Describe the unique characteristics of the precontrol method and compare precontrol to statistical process control charts.
Continuous SPC Tools
• Explain what the term statistical process control means and discuss how it differs from statistical process monitoring.
• List the basic components of a control chart and provide a general description of the role of each component.
• Provide a conceptual understanding of each step associated with the general cookbook for control charts.
• Explain how the use of rational subgroups forces nonrandom variations due to assignable causes to appear between sampling periods.
• Explain how the control limits of an SPC chart are directly linked to the concepts associated with hypothesis testing.
• Construct and interpret an X-bar and R chart for a set of normally distributed data organized into rational subgroups.
• Illustrate how an X-bar and R chart can be used to study and control for measurement error and contrast this with the DOE/ANOVA method.
• Construct and interpret an X-bar and R chart for a set of data (organized into rational subgroups) that is not normally distributed within groups.
• Construct and interpret an individuals chart for a set of normally distributed data collected over time.
• Construct and interpret an individuals chart for a set of nonnormally distributed data collected over time.
• Construct and interpret an exponentially weighted moving average (EWMA) chart and highlight its advantages and disadvantages. • Provide a detailed understanding of how to adjust a process parameter using the method of bracketing and contrast this technique to other methods. Discrete SPC Tools • Construct and interpret a P chart and explain how the control limits for this chart are related to the confidence intervals of the binomial distribution. • Construct and interpret a U chart and explain how the control limits for this chart are related to confidence intervals for the Poisson distribution.
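As a bridge from these objectives to practice, the following minimal sketch computes X-bar and R chart limits from rational subgroups of size five, using the standard Shewhart constants for n = 5; the subgroup data are invented for illustration:

    # Minimal sketch: X-bar and R control limits from rational subgroups.
    import numpy as np

    A2, D3, D4 = 0.577, 0.0, 2.114    # Shewhart constants for n = 5

    subgroups = np.array([
        [10.1,  9.9, 10.0, 10.2,  9.8],
        [10.0, 10.1,  9.7, 10.3, 10.0],
        [ 9.9, 10.2, 10.1,  9.8, 10.0],
    ])

    xbar = subgroups.mean(axis=1)                       # subgroup averages
    r = subgroups.max(axis=1) - subgroups.min(axis=1)   # subgroup ranges
    xbarbar, rbar = xbar.mean(), r.mean()

    print(f"X-bar chart: UCL={xbarbar + A2 * rbar:.3f}, "
          f"CL={xbarbar:.3f}, LCL={xbarbar - A2 * rbar:.3f}")
    print(f"R chart: UCL={D4 * rbar:.3f}, CL={rbar:.3f}, LCL={D3 * rbar:.3f}")

In practice many more subgroups (20 to 25 is a common rule of thumb) are collected before the limits are treated as trial control limits.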
SIX SIGMA TRANSACTIONAL GREEN BELT TRAINING
Introductions:
• Name
• Title or position
• Organization
• Background in quality improvement programs, statistics, etc.
• Hobbies/personal information
Agenda
Ground rules
Exploring our values
Six Sigma overview
• Six Sigma focus
• Delighting the customer through flawless execution
• Rapid breakthrough improvement
• Advanced breakthrough tools that work
• Positive and deep culture change
• Real financial results that impact the bottom line
What is Six Sigma?
• Performance target
• Practical meaning
• Value
• A problem-solving methodology
• Vision
• Philosophy
Aggressive goal — metric (standard of measurement)
• Benchmark
• Method — how are we going to get there?
• Customer focus
• Breakthrough improvement
• Continual improvement
• People involvement
Bottom line: Six Sigma defines the goals of the business
• Defines performance metrics that tie to the business goals by identifying projects and using performance metrics that will yield clear business results.
• Applies advanced quality and statistical tools to achieve breakthrough financial performance.
The Six Sigma strategy
• Which business function needs it?
• Is your leadership on board?
• Fundamentals of leadership
• Challenge the process
• Inspire a shared vision
• Enable others to act
• Model the way
• Encourage the heart
• Six Sigma as a catalyst for leaders
The principles of Six Sigma
• We only act on what is known; therefore, we must look for appropriate and applicable data.
• We know more when we search; therefore, we must have appropriate and applicable methodologies.
• We search for what we question; therefore, we must be certain that what we question is related to customer satisfaction.
• We question what we measure; therefore, we must be certain of our measuring capability.
• If we question and measure, then decisions can be made based on data rather than “gut feelings.”
Roles and responsibilities
Executive management:
• Will set meaningful goals and objectives for the corporation
• Will drive the implementation of Six Sigma publicly
Champion:
• Will select black belt projects consistent with corporate goals
• Will drive the implementation of Six Sigma through public support and removal of barriers
• Will be accountable for the performance of the BBs
Master black belt:
• They are the experts in the Six Sigma tools and methodologies. MBBs, or shoguns as we call them, are responsible for training and coaching BBs and may also be responsible for leading large projects on their own.
Black belt:
• They are the main force of the Six Sigma philosophy. They are responsible for leading and teaching the Six Sigma methodology within the organization. They are also responsible for training the green belts and for ensuring that sources of variation in manufacturing and
transactional processes are objectively identified, quantified, and controlled or eliminated. How? By using the breakthrough strategy. Process performance is sustained through well-developed, documented, and executed process control plans. This starts with defining the goal and identifying the model to use:
• Goal: to achieve improvements in rolled-throughput yield, cost of poor quality, and capacity-productivity.
• To deliver successful projects using the breakthrough strategy
• To train and mentor the local organization on Six Sigma
• The model
• Kano model
• QFD — House of Quality
• D-M-A-I-C
Green belt:
• Will deliver successful localized projects using the breakthrough strategy.
• Will participate in larger BB DMAIC or DFSS projects.
• Will lead other GB Six Sigma projects.
• Will apply Six Sigma knowledge in daily work.
Six Sigma instructor:
• Will make sure every black belt candidate is certified in the understanding, usage, and application of the Six Sigma tools.
Project selection — the most important component of the successful transactional project.
• The Y = f(x) relationship. Ys are the functional items that the customer needs, wants, or expects, and they are always thought of as “outputs.” Xs, on the other hand, are the specific requirements that will satisfy the Ys, and they are always thought of as “inputs.” It is imperative that the reader understand that one Y may have multiple Xs and those Xs may have sub-Xs (noted as xs), etc.
• Identify the Y and determine the Xs — the actual cascading process from Y to X to x to x1, x2, etc. The idea here is to start very broad and flow down to the level of a specific measurable problem.
• Apply criteria to projects — obviously, each organization may have its own criteria; however, the following five seem generic enough to get you going in the right direction: a) Does the problem relate in a positive way to customer satisfaction? b) Does the problem repeat? c) Do you have control over the problem? d) Is the scope of the project narrow enough to be worked on? e) Do metrics exist? Can measurements be established in an appropriate amount of time?
• Develop a high-level problem statement that includes a) the specificity of the problem; b) descriptive statements about the problem (e.g., location, occurrence, etc.); and c) the scope and a list of data needed. The problem statement is a living description of the issue to be resolved and may be modified as the project evolves.
The DMAIC model — high-level overview. This model drives breakthrough improvement.
• Define: the selection of performance characteristics critical to meeting the customer’s expectations.
• Measure: the creation and validation of a measurement system.
• Analyze: the identification of sources of variation from the performance objectives.
• Improve: the discovery of process relationships and the establishment of new procedures.
• Control: the monitoring of implemented improvements to maintain gains and ensure corrective actions are taken when necessary.
The foundation of the Six Sigma tools
• Cost of poor quality
• What is cost of poor quality? In addition to the direct costs associated with finding and fixing defects, cost of poor quality also includes:
• The hidden cost of failing to meet customer expectations the first time.
• The hidden opportunity for increased efficiency.
• The hidden potential for higher profits.
• The hidden loss in market share.
• The hidden increase in production cycle time.
• The hidden labor associated with ordering replacement material.
• The hidden costs associated with disposing of defects.
Getting there through inspection
• Defects and the hidden factory
• Rolled-throughput yield vs. first-time yield
What causes defects? Excess variation due to a) manufacturing processes, b) supplier (incoming) material variation, and c) unreasonably tight specifications (tighter than the customer requires).
Dissecting process capability — premise of Six Sigma: Sources of variation can be a) identified and b) quantified. Therefore, they can be controlled or eliminated. How do we improve capability?
Six Sigma, metrics, and continual improvement
• Six Sigma is characterized by a) defining critical business metrics, b) tracking them, and c) improving them using proactive process improvement. Six Sigma’s primary metric is defects per unit, which is directly related to rolled-throughput yield (Yrt)
• Yrt = e^(–DPU) (a worked sketch follows the metrics list below)
• Cost of poor quality and cycle time (throughput) are two other metrics
• Continual improvement
• Calculating the product sigma level
Metrics
• Defects per unit (DPU) drives plant-wide improvement.
• Defects per million opportunities (DPMO) allows for comparison of dissimilar products.
• Sigma level allows for benchmarking within and across companies.
• Tracking trends in metrics.
• Harvesting the fruit of Six Sigma.
• PPM conversion chart.
Translating needs into requirements
Deployment success: if and only if Six Sigma
• Directly affects quality, cost, cycle time, and financial results
• Focuses on the customer and critical metrics
• Directly attacks variation, defects, and the hidden factory
• Ensures a predictable factory
• Black belt execution strategy with the support of management
• Describe BB execution strategy
• To overview the steps
• To overview the tools
• To overview the deliverables
• To discuss the role of the black belt
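The metrics named above lend themselves to a short worked example. The following sketch computes DPU, rolled-throughput yield via Yrt = e^(–DPU), DPMO, and a benchmark sigma level using the conventional 1.5 sigma shift; the counts are invented for illustration:

    # Minimal sketch: DPU, Yrt = e^-dpu, DPMO, and a benchmark sigma level.
    import math
    from scipy.stats import norm

    defects, units, opportunities = 24, 1000, 8   # invented counts

    dpu = defects / units
    y_rt = math.exp(-dpu)                         # probability of zero defects
    dpmo = defects / (units * opportunities) * 1e6

    z_lt = norm.isf(dpmo / 1e6)                   # long-term Z from defect rate
    sigma_level = z_lt + 1.5                      # conventional 1.5 sigma shift

    print(f"DPU={dpu:.3f}, Yrt={y_rt:.3f}, DPMO={dpmo:.0f}, "
          f"sigma level={sigma_level:.2f}")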
THE DMAIC MODEL IN DETAIL
THE DEFINE PHASE The individual components of this phase are: a) define problem, b) identify customer, c) identify CTQs, d) map process, e) refine process scope, and f) update project charter.
WHO IS THE CUSTOMER?
• What does the customer want?
• How can the organization benefit from fixing a problem?
• A simple QFD (quality function deployment) tool is used to emphasize the importance of understanding customer requirements, the CTs (critical tos): CTCost, CTDelivery, CTQuality.
• The tool relates the Xs and Ys (customer requirements) using elements documented in the process map and existing process expertise.
• Result: a Pareto of Xs that are used as input into the FMEA and control plans. These are the CTPs, critical to the process. This includes anything that we can control or modify about our process that will help us achieve our objectives.
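Instructors may want to demonstrate the scoring mechanics behind that Pareto of Xs. The following minimal sketch implements a cause-and-effect matrix: each X is scored against each customer Y, the scores are weighted by customer importance, and the weighted totals rank the Xs. All names, weights, and scores are invented for illustration:

    # Minimal sketch: cause-and-effect matrix to Pareto the Xs.
    import numpy as np

    ys = ["CTQuality", "CTDelivery", "CTCost"]
    importance = np.array([9, 7, 5])        # customer priority per Y (invented)

    xs = ["oven temperature", "cycle time", "operator training", "material lot"]
    # Rows = Xs, columns = Ys; 0/1/3/9 correlation scores (invented)
    scores = np.array([
        [9, 1, 3],
        [3, 9, 1],
        [3, 3, 1],
        [9, 1, 9],
    ])

    totals = scores @ importance            # weighted total per X
    for x, total in sorted(zip(xs, totals), key=lambda pair: -pair[1]):
        print(f"{x:18s} {total}")

The ranked totals feed the FMEA and control plans exactly as described above.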
MEASUREMENT PHASE The individual components of this phase are: a) identify measurement and variation, b) determine data type, c) develop data collection plan, d) perform MSA, e) perform
data collection, and f) perform capability analysis. The idea here is to establish the performance baseline.
The measure phase — IMPORTANT! A well-defined project results in a successful project. Therefore, the problem statement, objective, and improvement metric need to be aligned. If the problem statement identifies defects as the issue, then the objective is to reduce defects, and the metric to track the objective is defects. This holds true for any problem statement, objective, and metric (% defects, overtime, RTY, etc.).
• Primary metric — a green belt needs to be focused; if other metrics are identified that impact the results, identify these as secondary metrics, e.g., reducing defects is the primary improvement metric, but we do not want to reduce line speed (line speed is the secondary metric).
• Project benefits — do not confuse projected project benefits with your objective. Make sure you separate these two items. There are times when you may achieve your objective yet not see the projected benefits. This is because we cannot control all issues. We need to tackle them in a methodical order.
Purpose of measurement phase
• Define the project scope, problem statement, objective, and metric.
• Document the existing process (using a process map, C&E matrix, and an FMEA).
• Identify key output variables (Ys) and key input variables (Xs).
• Establish a data-collection system for your Xs and Ys if one does not exist.
• Evaluate the measurement system for each key output variable.
• Establish baseline capability for key output variables (potential and overall).
Establish data-collection system
• Determine if you have a method by which you can effectively and accurately collect data on your Xs and Ys in a timely manner. If this is not in place, you will need to implement a system. Without a system in place, you will not be able to determine whether you are making any improvements in your project.
• Establish this system such that you can historically record the data you are collecting.
• This information should be recorded in a database that can be readily accessed.
• The data should be aligned in the database in such a manner that for each output (Y) recorded, the operating conditions (X) are identified. This becomes important for future reference.
• This data-collection system is absolutely necessary for the control phase of your project. Make sure all those who are collecting data realize its importance.
MEASUREMENT SYSTEMS ANALYSIS
Purpose: to determine whether the measurement system, defined as the gauge and operators, can be used to precisely measure the characteristic in question. We are not evaluating part variability, but gauge and operator capability.
• Guidelines
• Determines the measurement capabilities for Ys
• Needs to be completed before assessing capability of Ys
• These studies are called gauge repeatability and reproducibility (GR&R) studies, measurement systems analysis (MSA), or measurement systems evaluation (MSE)
• Indices:
• Precision-to-tolerance (P/T) ratio = proportion of the specification taken up by measurement error. Ten percent or less is desirable
• Precision-to-total-variation (P/TV) ratio (%R&R) = proportion of the total variability taken up by measurement error. Thirty percent is marginal (both ratios are illustrated in the sketch at the end of this section)
Capability studies: used to establish the proportion of the operating window taken up by the natural variation of the process. Short-term (potential) and long-term (overall) estimates of capability indices are taught. (The reader may want to review Volume I or Volume IV for the discussion on long- and short-term capability.)
• Indices used assuming the process is centered: Cp, Pp, Zst
• Indices used to evaluate a shifted process: Cpk, Ppk, Zlt
Measure: potential project deliverables
• Project definition
• Problem description
• Project metrics
• Process exploration:
• Process flow diagram
• C&E matrix, PFMEA, fishbones
• Data-collection system
• Measurement system(s) analysis (MSA):
• Attribute/variable gauge studies
• Capability assessment (on each Y)
• Capability (Cpk, Ppk, σ level, DPU, RTY)
• Graphical and statistical tools
• Project summary
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
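These indices can be illustrated numerically. The sketch below computes the P/T ratio, %R&R, and capability indices from assumed standard deviations (a real study would estimate them from a crossed gauge R&R); note that P/T is shown with a 6-sigma spread convention, and that the same capability formulas give Cp/Cpk when a short-term sigma is used or Pp/Ppk when a long-term sigma is used:

    # Minimal sketch: MSA ratios and capability indices from assumed sigmas.
    usl, lsl = 10.5, 9.5   # assumed specification limits
    sigma_meas = 0.015     # measurement-system sigma (assumed)
    sigma_proc = 0.12      # observed process sigma (assumed)
    mean = 10.08           # observed process mean (assumed)

    pt_ratio = 6 * sigma_meas / (usl - lsl)   # P/T: 10% or less desirable
    rr_pct = sigma_meas / sigma_proc * 100    # %R&R: 30% is marginal

    cp = (usl - lsl) / (6 * sigma_proc)                   # ignores centering
    cpk = min(usl - mean, mean - lsl) / (3 * sigma_proc)  # penalizes offset

    print(f"P/T={pt_ratio:.1%}, %R&R={rr_pct:.1f}%, Cp={cp:.2f}, Cpk={cpk:.2f}")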
THE ANALYSIS PHASE
The individual components of this phase are: a) review analysis tools, b) apply graphical analysis tools for both attribute and variable data (e.g., Pareto charts, histograms, run charts, box plots, and scatter plots) to determine patterns of variation, and c) identify sources of variation.
Purpose of the analysis phase
• To identify high-risk input variables (Xs) from the failure modes and effects analysis (FMEA).
• To reduce the number of process input variables (Xs) to a manageable number via hypothesis testing and ANOVA techniques.
• To determine the presence of, and potential for elimination of, noise variables via multi-vari studies.
• To plan and document initial improvement activities.
• Failure modes and effects analysis
• Documents effects of failed key inputs (Xs) on key outputs (Ys)
• Documents potential causes of failed key input variables (Xs)
• Documents existing control methods for preventing or detecting causes
• Provides prioritization for actions and documents actions taken
• Can be used as the document to track project progress
• Multi-vari studies: study process inputs and outputs in a passive mode (natural day-to-day variation). Their purpose is:
• To identify and eliminate major noise variables (machine to machine, shift to shift, ambient temperature, humidity, etc.) before moving to the improvement phase.
• To take a first look at major input variables.
• To help select or eliminate variables for study in designed experiments.
• Identify vital few Xs.
• Determine the governing transformation equation.
Analyze: potential project deliverables
• Project definition
• Problem description
• Project metrics
• Passive process analysis
• Graphical analysis
• Multi-vari studies
• Hypothesis testing
• Updated PFMEA
• Project summary
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
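A quick demonstration of the hypothesis-testing step is useful here. The following minimal sketch runs a two-sample t test of the kind used to confirm or eliminate a candidate X; the measurements are invented for illustration:

    # Minimal sketch: two-sample t test on a candidate X (machine).
    import numpy as np
    from scipy.stats import ttest_ind

    machine_a = np.array([10.2, 10.4, 10.1, 10.5, 10.3, 10.2])
    machine_b = np.array([10.6, 10.8, 10.5, 10.9, 10.7, 10.6])

    # equal_var=False gives Welch's t test (no equal-variance assumption)
    t_stat, p_value = ttest_ind(machine_a, machine_b, equal_var=False)
    print(f"t={t_stat:.2f}, p={p_value:.4f}")

A small p-value (for example, below an alpha of 0.05) flags the machine as a statistically significant source of variation, making it a candidate for the vital few Xs carried into the improvement phase.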
THE IMPROVEMENT PHASE
The individual components of this phase are: a) generate improvement alternatives, b) conduct a pilot study, c) validate improvement, d) create a “should be” process map, e) update FMEA, and f) perform a cost-benefit analysis.
• DOE (design of experiments) is the backbone of process improvement.
• From the subset of vital few Xs, experiments are designed to actively manipulate the inputs to determine their effect on the outputs (Ys).
• This phase is characterized by a sequence of experiments, each based on the results of the previous study. The intent is to generate improvement alternatives.
• Critical variables are identified during this process.
• Usually three to six Xs account for most of the variation in the outputs.
• Control and continuous improvement.
• Perform a pilot.
• Validate the improvement.
• Create the “should be” process map.
• Update the FMEA.
• Perform a preliminary cost-benefit analysis.
Improve: potential project deliverables
• Project definition:
• Problem description
• Project metrics
• Design of experiments:
• DOE planning sheet
• DOE factorial experiments
• Y = f(x1, x2, x3, …)
• Updated PFMEA
• Project summary:
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
THE CONTROL PHASE The individual components of this phase are: a) develop control strategy, b) develop control plan, and c) update SOP and training plan. The idea here is to implement long-term control strategy and methods. Develop an execution plan • Optimize, eliminate, automate, and control vital few inputs. • Document and implement the control plan. • Sustain the gains identified. • Reestablish and monitor long-term delivered capability.
• Implement continuous improvement efforts (this is perhaps the key responsibility of all the green belts in the functional area).
• Provide execution strategy support systems.
• Establish safety requirements.
• Define maintenance plans.
• Establish a system to track special causes.
• Draw up a required and critical spare parts list.
• Write troubleshooting guides.
• Develop control plans.
• Make SPC charts.
• Buy process monitors.
• Oversee inspection points.
• Provide metrology control.
• Set workmanship standards.
• Others?
Control: potential project deliverables
• Project definition:
• Problem description
• Project metrics
• Optimization of Ys:
• Monitoring Ys
• Eliminating or controlling Xs
• Sustaining the gains:
• Updated PFMEA
• Process control plan
• Action plan
• Project summary:
• Conclusions
• Issues and barriers
• Final report
• Completed local project review
Additional items of discussion. The following items should be discussed at the appropriate and applicable complexity level of the participants. In some cases, some of the following items may be just mentioned but not discussed.
Rolled-throughput yield
• The classical perspective of yield
• Simple first-time yield = traditional yield
• Measuring first-pass yield
Normalized yield
• Complexity is a measure of how complicated a particular good or service is. Theoretically, complexity will likely never be quantified in an exacting manner. If we assume that all characteristics are independent and mutually exclusive, we may say that complexity can be reasonably estimated by a simple count. This count is referred to as
an opportunity count. In terms of quality, each product or process characteristic represents a unique opportunity to either add or subtract value. Remember, we only need to count opportunities if we want to estimate a sigma level for comparisons of goods and services that are not necessarily similar.
• Formulas to know
• Hidden factory
• Take away — rolled-throughput yield
• Integrates rework loops
• Highlights “high-loss” steps…
• Put project emphasis here!
DPMO, counting opportunities
Nonvalue-add rules: an opportunity count should never be applied to any operation that does not add value. Transportation and storage of materials provide no opportunities. Deburring operations do not count either. Testing, inspection, gauging, etc. do not count; the product in most cases remains unchanged. An exception: an electrical tester where the tester is also used to program an EPROM. The product was altered and value was added.
Supplied components rules: each supplied part provides one opportunity. Supplied materials, such as machine oil, coolants, etc., do not count as supplied components.
Connections rules: each “attachment” or “connection” counts as one. If a device requires four bolts, there would be an opportunity count of four, one for each bolt connected. A sixty-pin integrated circuit, SMD, soldered to a PCB counts as sixty connections.
(Sanity check rule: “Will applying counts in these operations take my business in the direction it is intended to go?” If counting each dimension checked on a CMM inflates the denominator of the equation, adds no value, and increases cycle time when the company objective is to take cost out of the product, then this type of count would be opposed to the company objective. Hence, it would not provide an opportunity. Once you define an opportunity, however, you must institutionalize that definition to maintain consistency. This opportunity, if it is good enough for the original evaluation, must also be good enough to be evaluated at the end of the project. In other words, the opportunity count must have the same base; otherwise it is meaningless.)
Introduction to data
• Description and definitions
• What do you want to know?
• Discrete vs. continuous data
• Categories of scale
• Nominal scale — nominal scales of measure are used to classify elements into categories without considering any specific property. Examples of nominal scales include “causes” on fishbone diagrams, yes/no, pass/fail, etc.
• Ordinal scale — ordinal scales of measure are used to order or rank nominal (pass/fail) data based on a specific property. Examples of ordinal scales include relative height, Pareto charts, customer satisfaction surveys, etc.
• Likert scale (ordinal) — example rating scale ranges: five-point school grading system (A B C D E); seven-point numerical rating (1 2 3 4 5 6 7); verbal scale (excellent, good, average, fair, poor).
• Interval and ratio scale — interval scales of measure are used to express numerical information on a scale with equal distance between categories, but no absolute zero. Examples are: temperature (°F and °C), a dial gauge sitting on top of a gauge block, comparison of differences, etc. Ratio scales of measure are used to express numerical information on a scale with equal distance between categories, but with an absolute zero in the range of measurement.
• A tape measure, ruler, position vs. time at constant speed, and so on.
Selecting Statistical Techniques
At this point of the discussion the instructor may want to introduce a computer software package to facilitate the discussion of statistical tools. Key items of discussion should be:
• Entering data into the program
• Cutting and pasting
• Generating random numbers
• Importing and exporting data from databases, Excel, ASCII, etc.
• Pull-down menus of the software (for general statistics, graphs, etc.)
• Manipulate and change data
• Basic statistics and probability distributions
• Calculate the z scores and probability
• Calculate capability
• Control charts
Discussion and practice of key statistical techniques and specific tools
Basic statistics
• Mean, median, mode, variance, and standard deviation
• Distributions
• Normal, Z-transformation, normal and nonnormal probability plots, nonnormal, Poisson, binomial, hypergeometric, t-distribution
• Central limit theorem — very important concept. Emphasis must be placed on this theorem because it is the fundamental concept (backbone) of inferential statistics and the foundation for tools to be learned later this session. The central limit theorem allows us to assume that the distribution of sample averages will approximate the normal distribution if n is sufficiently high (n > 30 for unknown distributions). The central limit theorem also allows us to assume that the distributions of sample averages of a normal population are themselves normal, regardless of sample size. The SE mean shows that as sample size increases, the standard deviation of the sample means decreases. The standard error will help us calculate confidence intervals.
Confidence intervals (CIs) are derived from the central limit theorem and are used by black belts to quantify a level of certainty or uncertainty regarding a population parameter based on a sample.
• Degrees of freedom
• Standard error
• Confidence
Parametric confidence intervals — the parametric confidence intervals assume a t-distribution of sample means and use this to calculate confidence intervals.
Confidence intervals for proportions — confidence intervals can also be constructed for fraction defective (p), where x = number of defect occurrences, n = sample size, and p = x/n = proportion defective in the sample. For cases in which the number defective (x) is at least 5 and the total number of samples n is at least 30, the normal distribution approximation can be used as a shortcut. For other cases, the binomial tables are needed to construct this confidence interval.
• Accuracy and precision
• Defects per million
• Population vs. sample
• Sampling distribution of the mean
• Concept of variation
• Additive property of variances
• Attribute or variable
Types of data — variable and attribute
• Rational subgroups
• Data-collection plan — your data-collection plan and execution will make or break your entire project!
Data-collection plan — ask yourself the following questions:
• What do you want to know about the process?
• What are the likely causes of variation in the process (Xs)?
• Are there cycles in the process?
• How long do you need to collect data to capture a true picture?
• Who will be collecting the data?
• How will you test your measurement system?
• Are the operational definitions detailed enough?
• How will you display the data?
• Is data available? If not, how will you prepare data-collection sheets?
• Where could data-collection errors occur? What are your correction plans?
Process capability and performance
• Process capability
• Capability
• Process characterization
• Converting DPM to a Z value
• Short-term vs. long-term
• Indicating the spread
• Indicates the spread and center
• Indicates spread and centering
• Process shift — how much should we expect? Is 1.5σ enough? Where does it come from?
• The map to the indicators and what they mean
Stability
• Process control
• Pooled vs. total variation
• Short-term vs. long-term
• Which standard deviation?
• Area of improvement
• What is good?
Measurement system analysis
• Why MSA? How does variation relate to MSA?
• Measurement systems
• Resolution
• Bias
• Accuracy vs. precision
• Linearity
Measurement tools
• A simple gauge
• Calibration
• Consistency
• Gauge R&R
• GR&R with ANOVA
• Indices (Cp, Cpk, Pp, Ppk)
• Cp is the “potential” capability of your process assuming you are able to eliminate all nonrandom causes. In addition, Cp assumes the process is centered. This metric is also called “process entitlement” or the best your process could ever hope to perform in the short term. In order to calculate this metric you need a close approximation for short-term standard deviation (which is not always available).
• Cpk and Ppk use the mean, not only the tolerance band, to estimate capability. The term Cpk = min(Cpk lower, Cpk upper) is stated as the shortest numerical distance between the mean and the nearest spec limit.
How do you know if your gauge is good enough?
Introduce definition of quality (ISO 8402)
Control charts
• Variable and attribute (X-bar and s, X-bar and R, IndX and MR, p, c, etc.)
• Multi-vari charts: the purpose of these charts is to narrow the scope of input variables and, therefore, to identify the inputs and outputs (KPIVs and KPOVs)
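A minimal sketch of the Cp/Cpk calculations just described (Python; the spec limits and sample data are hypothetical, and the sample standard deviation is used here as a stand-in for a proper short-term sigma estimate, which would normally come from R-bar/d2 or pooled subgroups):

    from statistics import mean, stdev

    # Hypothetical spec limits and sample data for one CTQ.
    lsl, usl = 9.0, 11.0
    data = [9.8, 10.1, 10.3, 9.9, 10.2, 10.0, 10.4, 9.7, 10.1, 10.0]

    xbar = mean(data)
    s = stdev(data)  # stand-in for a short-term sigma estimate

    cp = (usl - lsl) / (6 * s)    # potential capability; assumes centering
    cpu = (usl - xbar) / (3 * s)  # upper one-sided index
    cpl = (xbar - lsl) / (3 * s)  # lower one-sided index
    cpk = min(cpu, cpl)           # distance to the nearest spec limit

    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")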
HYPOTHESIS TESTING INTRODUCTION
Why learn hypothesis testing? Hypothesis testing employs data-driven tests that assist in the determination of the vital few Xs. Black belts use this tool to identify sources of variability and establish relationships between Xs and Ys. To help identify the vital few Xs, historical or current data may be sampled. (Passive: you have either directly sampled your process or have obtained historic sample data. Active: you have made a modification to your process and then sampled. Statistical testing provides objective solutions to questions that are traditionally answered subjectively. Hypothesis testing is a stepping stone to ANOVA and DOE.)
• Hypothesis testing terms that you need to remember
• Steps in hypothesis testing
• Hypothesis testing roadmap
• Hypothesis testing description
• The null and alternate hypotheses
• The hypothesis testing form
• Test for significance
• Significance level
• Alpha risk — this alpha level requires two things: a) an assumption of no difference (Ho) and b) a reference distribution of some sort — producer’s risk
• Beta risk — consumer’s risk
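For classroom demonstration, the hypothesis testing steps above can be run end to end with a short sketch such as this (Python with scipy assumed available; the sample data and the hypothesized mean of 10.0 are invented):

    from scipy import stats

    # Ho: process mean = 10.0 (no difference); Ha: mean != 10.0.
    sample = [10.3, 10.1, 9.8, 10.4, 10.2, 10.5, 10.0, 10.3]
    alpha = 0.05  # the producer's risk we are willing to accept

    result = stats.ttest_1samp(sample, popmean=10.0)
    print(f"t = {result.statistic:.3f}, p = {result.pvalue:.3f}")

    if result.pvalue < alpha:
        print("Reject Ho: evidence of a real difference.")
    else:
        print("Fail to reject Ho: no evidence of a difference.")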
PARAMETERS VS. STATISTICS
Parameters deal with populations and are generally denoted with Greek letters. Statistics deal with samples and are generally denoted with English letters. There is no substitute for professional judgment. It is true that in hypothesis testing we answer the practical question: “Is there a real difference between _____ and _____?” However, we use relatively small samples to answer questions about population parameters. There is always a chance that we selected a sample that is not representative of the population. Therefore, there is always a chance that the conclusion obtained is wrong. With some assumptions, inferential statistics allows us to estimate the probability of getting an “odd” sample. This lets us quantify the probability (P value) of a wrong conclusion.
What is the signal-to-noise ratio?
Managing change
Measures and rewards
An introduction to graphical methods
• Pareto
• Histogram
• Run chart
• Scatter plot
• Correlation vs. causality
• Boxplot
• Hypothesis tests for means
• Comparison of means
• t-distribution
Hypothesis testing for attribute data
Useful definitions
Hypothesis tests: proportions
Chi-square test for independence
Chi-square test
Chi-square test for a relationship
ANOVA
Why ANOVA?
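A brief illustration of the chi-square test for independence listed above (Python with scipy assumed; the defect counts by shift are hypothetical):

    from scipy import stats

    # Hypothetical contingency table: defect counts by shift (rows)
    # and defect type (columns).
    observed = [[12, 5, 8],
                [15, 9, 4]]

    chi2, p, dof, expected = stats.chi2_contingency(observed)
    print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
    # A small p value suggests defect type is not independent of shift.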
INTRODUCTION TO DESIGN OF EXPERIMENTS
What is experimental design? Organizing the way in which one changes one or more input variables (Xs) to see if any of them, or any combination of them, affects the output (Y) in a significant way. A well-designed experiment eliminates the effect of all possible Xs except the ones that you changed. Typically, if the output variable changes significantly, it can be tied directly to the input X variable that was changed and not to some other X variable that was not changed. The real power of experimentation is that sometimes we get lucky and find a combination of two or more Xs that makes the Y variable perform even better!
• Benefits of DOE
• Why not one factor at a time?
• Types of experiments
• Classes of DOE
• Terms used in DOE
• Main effects and interactions
• Contrast
• Yates standard order
• Run order for a DOE
• Strategy of experimentation
• Barriers to effective experimentation
Focus on the X-Y relationship
Trial and error
One factor at a time
Full factorial experiment
Things to watch for in experiments
Randomization
• Repetition and replication
• 2k factorials
• Advantages of 2k factorials
• Standard order of 2k designs
• Interactions
• Interaction effects
• Interactions for the three-way design
• Main effects
• Cube plots
• Types of 2k factorials
Center points and blocking
• Adding center points
• In two-level designs, there is a risk of missing a curvilinear relationship. Inclusion of center points is an efficient way to test for curvature without adding a large number of extra runs.
• Confounding and blocking
• Residuals analysis
• Residuals
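To illustrate main effects, interactions, and Yates standard order, here is a minimal sketch of a 2k analysis (Python; the four responses for a 2^2 design are invented):

    # Responses of a 2^2 design in Yates standard order:
    # runs (A, B) = (-,-), (+,-), (-,+), (+,+).
    y = [45.0, 71.0, 48.0, 65.0]

    # Contrast coefficients at coded -1/+1 levels.
    A = [-1, +1, -1, +1]
    B = [-1, -1, +1, +1]
    AB = [a * b for a, b in zip(A, B)]  # interaction contrast

    def effect(contrast, responses):
        # Effect = contrast total divided by half the number of runs.
        total = sum(c * r for c, r in zip(contrast, responses))
        return total / (len(responses) / 2)

    print(f"Main effect A : {effect(A, y):+.2f}")
    print(f"Main effect B : {effect(B, y):+.2f}")
    print(f"Interaction AB: {effect(AB, y):+.2f}")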
SCREENING DESIGNS
These designs are a powerful tool for analyzing multiple factors and interactions. The designs combine the flexibility of reduced run size without compromising information. One word of caution: do not reduce the experiment too far. By doing fewer runs, you may not obtain the desired level of information.
Factorial experiments — the success of fractional factorials is based on the assumption that main effects and lower-order interactions are generally the key factors. Full factorials can usually be derived from a fractional factorial experiment once nonsignificant factors are eliminated.
• Fractional factorials
• Design resolution
• Choosing a design
• Notation
• Alias structure
Planning experiments
• Team involvement
• Maximize prior knowledge
• Identify measurable objectives
• FMEA on all steps of the execution
• Replication and repetition consideration
• Verify and validate data collection and analysis procedures
Steps to experimentation
• Define the problem. What is the objective of the experiment?
• Establish the objective.
• Select the response variables.
• Select the independent variables.
• Choose the variable levels.
• Select the experimental design.
• Sequential experimentation
• Select experimental design
• Screening/fractional factorial
• Full factorial/partial
• Consider the sample plan: how many runs can we afford? (The more runs or samples, the better the understanding and confidence in the result.)
• How are we controlling noise and the controllable variables that we know about?
• What is our plan for randomization?
• Walk through the experiment
• Collect data
• Analyze data
• Draw statistical conclusions
• Replicate results
• Draw practical solutions
Implement solutions
• Understand the current process
• Is output qualitative or quantitative?
• (A vs. B) or (50 vs. 100)?
• What is the baseline capability?
• Is your process under statistical control?
• Is the measurement system adequate?
• Factor selection
• Which factors (KPIVs) do we include?
• Where should they come from?
• Process map
• Cause-and-effect matrix
• FMEA
• Multi-vari study results
• Brainstorming (fishbone)
• Process knowledge
• Operator experience
• Customer/supplier input
• Level selection. After the test factors are identified, we must set the levels of those factors we want to test. What is the right level differentiation to obtain the information needed? If the levels are too wide or narrow, nothing will be gained. Level guideline: 20% above and below the specs. If no specs, +/– 3 sigma from the mean.
• What will the experiment cost?
• Are all of the necessary players involved (informed)?
• How long will it take?
• How are we going to analyze the data?
• Have we planned a pilot run and walked through the process?
• Has the necessary paperwork been completed?
• Make sure the MSA has been validated.
• Budget and timelines (The goal in DOE: to find a design that will produce a specific desired amount of information at a minimum cost to the company.)
Four phases of designed experiments:
• Planning: careful planning involves clearly defining the problem of interest, the object of the experiment, and the environment in which the experiment will be carried out.
• Screening: initial experiments aim to reduce the number of potentially influential variables to a vital few. Screening allows us to focus process improvement efforts on the most important variables. Screening designs include two-level full and fractional factorials, general full factorials, and Plackett-Burman.
• Optimization: after we have identified the vital few variables by screening, we need to determine the best values in order to optimize a process; for example, we may want to maximize a yield or reduce product variability. Optimization designs include full factorial designs (two-level and general) and response surface designs (central composite and Box–Behnken).
• Verification: we can perform a follow-up experiment at the predicted best process conditions to confirm optimization results.
Fractional factorial designs
Purpose: to determine which main effects (factors) are important.
Key features:
1. Know which resolution you are running: always two-level factorials.
2. Useful to estimate mostly main effects (not interactions).
3. They can be built up to a higher-order blocked factorial design.
4. Limited to 15 runs.
5. Don’t expect more than what the design will provide.
Recommendation: use these designs when you need to narrow down the list of important factors. They are easy to interpret and cost effective.
Screening designs (full or fractional)
Purpose: to investigate how seven factors or fewer interact to drive a process.
Key features:
1. Two-level factorials. Resolution IV, V, or higher.
2. General full factorials.
3. These allow estimation of at least two-way interactions.
4. They can model weak curvature through center points and can be built up into a response surface (blocked central composite) design to model more pronounced curvature.
5. They provide direction for further experimentation in search of an optimal solution.
Recommendation: these are the designs most often used in industry. They are good, low-cost, all-purpose designs.
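As a small illustration of how a fractional factorial and its alias structure arise, the sketch below builds a 2^(3-1) design from the defining relation I = ABC (Python; purely for demonstration):

    from itertools import product

    # Build a 2^(3-1) resolution III screening design: take the full
    # 2^2 design in A and B, then generate C from the defining
    # relation I = ABC (i.e., C = AB).
    print(" A  B  C")
    for a, b in product([-1, +1], repeat=2):
        c = a * b  # generator C = AB
        print(f"{a:+d} {b:+d} {c:+d}")

    # Alias structure implied by I = ABC: A = BC, B = AC, C = AB, so
    # every main effect is confounded with a two-way interaction
    # (the hallmark of a resolution III design).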
Response surface designs
Purpose: to model responses that exhibit quadratic (curvilinear) relationships with the factors.
Key features:
1. Recommended for nonsequential experiments. (Only one shot!)
2. Use when extreme combinations cannot be run.
3. Excellent for optimizing since curvature is typically seen around the optimum.
4. Designs are costlier (more runs). Factors of interest should be low in number.
5. These can be used to minimize variation.
6. These can be used to put the process on target, maximize, or minimize a measure of interest.
How do I sustain the improvement?
Tools to assure the process remains in control
Keys to success
• Early involvement of all work cell/department members
• Update all affected parties (including supervisors/managers) regularly
• Get buy-in — no surprises!
• Poka yoke the process
• Establish frequent measurement
• Establish procedures for the new/updated process
• Train everyone — assign responsibilities
• Monitor the results
How do I transition my project?
• Assure your project is complete enough to transition.
• No loose ends — a plan (project action plan) for everything not finalized
• Start early in your project to plan for transitioning.
• Identify team members at the start of the project.
• Remind them they are representatives of a larger group.
• Communicate regularly with people in the impacted area.
• Display your project in the impacted area during all phases. Remember, no surprises.
• Hold regular updates with the impacted area, assuring their concerns are considered by your team.
• When possible, get others involved to help.
• Data collection.
• Idea generation (brainstorming events).
• What is a project action plan? It is a documented communication tool (contract) which allows you to identify:
• What is left to do to complete your project.
• Who is responsible for carrying out each task.
• When they should have it complete.
• How it should be accomplished.
Do I have to have one? Only if there are unfinished tasks in your improvement process that you expect others to carry out after the transition. (The tasks must be negotiated and agreed to.) Who will monitor the plan for implementation/completion? Both you and the responsible supervisor/manager who assumes ownership. Who has ultimate responsibility? The owner of each task and the responsible supervisor/manager.
Product changes
• Revise drawings by submitting EARs
• Work with process, test, and product engineers
Process changes
• Physically change the process flow (5S the project area). To ensure your gains are sustainable you must start with a firm foundation. 5S standards are the foundation that supports all the phases of Six Sigma manufacturing. The foundation of a production system is a CLEAN and SAFE work environment. Its strength is dependent upon employee commitment to maintaining it.
• Develop visual indicators. Create a visual factory.
• Establish/buy new equipment to aid assembly/test.
• Poka yoke wherever possible, including forms.
• Procedures (standardized work instructions).
• Develop new procedures or revise existing ones.
• Notify quality assurance of the new procedure to incorporate in internal audits.
• Provide QA a copy of standardized work instructions.
• Measurements (visual indicators).
• Build into the process the posting of key metric updates.
• Make it part of someone’s regular job to do timely evaluations.
• Make it someone’s job to review the metric and take action when needed.
• Training — train everyone in the new process. (Don’t leave until there is full understanding.)
CONTROL PLANS
The control plan provides a written summary description of the system for controlling parts and processes; it is used to minimize process and product variation and describes the actions that are required at each phase of the process, including receiving, in-process, final assembly, and shipping, to ensure that all process outputs will be in a state of control. A control plan for operational actions such as ordering, order taking, invoicing, billing, etc. can also be utilized for transactional operations. The control plan does not replace the information contained in detailed operator instructions. Since processes are expected to be continually updated and improved, the control plan is a living document, reflecting the current methods of control and measurement systems used.
• Development and implementation
Developing a control plan
• A basic understanding of the process must be obtained. Establish a multifunction team to gather and utilize appropriate available information, such as:
• Process flow diagram
• Failure mode and effects analysis (process and design)
• Special characteristics (critical and significant characteristics)
• Control plans/lessons learned from similar parts or processes
• Team’s knowledge of the process
• Technical documentation (design/process notices, MPIs, PM)
• Validation plan results (DVP, EVP, PVP)
• Optimization methods (QFD/DOE)
• Develop the process flow diagram — map the process.
• Develop the process FMEA.
• Examine each process operation for potential problems and failures.
• Focus on characteristics that are important to the customer and to product safety.
• A PFMEA is required for most organizations for all new product processes. PFMEAs must eventually be developed for all existing product lines. If a PFMEA does not exist, then customer concerns/complaints must be considered when developing the control plan.
• Develop a preliminary manufacturing control process (MCP), utilizing a standardized format. This format satisfies ISO 9000, ISO/TS 16949, and QS-9000 requirements (and is the REQUIRED FORMAT!).
• Conduct a multifunctional team review for revision/consensus of the MCP.
• Install the MCP with change control approval. This will assign and display a document number, version number, issue date, and owner.
• Implement the MCP. Update/revise manufacturing process instructions, control charts, gauge systems, etc. as required from the new control plan.
• Benefits of developing and implementing CPs — improves overall quality by reducing chances of quality excursions. Reduces shrinkage or defects in MFG/transaction processes by keeping processes centered. The data also aids in timely troubleshooting of MFG/transaction processes and serves as a communication vehicle for changes to CTQ characteristics, control methods, etc.
Quality system overview
Control tools
Continuous SPC tools
The foundation of SPC
Statistical process control
Types of control charts — variable and attribute
• Basic components of a control chart
• Control limits
• What are control limits?
• What is meant by “in control” and “out of control”?
• Link between control limits, hypothesis testing, and specifications
Variable control charts
• Individual X vs. EWMA chart
• X-bar and R charts
• X-bar and s charts
• Individual and moving range
• EWMA chart
• Control chart — interpretation
• Control chart — nonnormal distribution
Attribute control charts
• p charts
• np chart
• c chart
• u chart
• Attribute chart interpretation
Alternative methods of control
• Precontrol
• Zone control charting
Process capability estimate
Poka yoke — understand the use of poka yoke strategies in completing a black belt project. Know how to design and implement a poka yoke strategy.
• What is poka yoke/error or mistake-proofing?
• Mistake-proofing manufacturing processes
• Mistake-proofing transactional processes
• Types of mistake-proofing
• Errors vs. defects
• Types of human errors
• “Red flag” conditions
• Control/feedback logic
• Guidelines for mistake-proofing
• Mistake-proofing strategies
• Advantages of mistake-proofing
Maintenance — a reliability function
• Maintenance via Six Sigma is all-encompassing — transactional, information systems, production equipment, etc. The maintenance function should be linked to customer CTQs. It should address all six Ms: machines, manpower, methods, materials, mother nature, and measurements. (Make sure you differentiate these from the classical nonmanufacturing items of policies, procedures, place, environment, measurement, and people.) Maintenance can and should be a reliability function, not just a repair function.
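A minimal sketch of the X-bar and R limit calculations behind variable control charts (Python; the subgroup data are invented, and A2, D3, D4 are the standard Shewhart constants for subgroups of size 5):

    from statistics import mean

    # Hypothetical data: six subgroups of size n = 5.
    subgroups = [
        [10.2, 10.4, 9.9, 10.1, 10.3],
        [10.0, 10.2, 10.1, 9.8, 10.4],
        [10.3, 10.1, 10.0, 10.2, 9.9],
        [9.9, 10.0, 10.3, 10.1, 10.2],
        [10.1, 10.5, 10.0, 9.8, 10.2],
        [10.2, 10.0, 10.1, 10.3, 10.0],
    ]

    # Shewhart constants for subgroup size n = 5.
    A2, D3, D4 = 0.577, 0.0, 2.114

    xbar_bar = mean(mean(g) for g in subgroups)       # grand average
    r_bar = mean(max(g) - min(g) for g in subgroups)  # average range

    print(f"X-bar chart: CL = {xbar_bar:.3f}, "
          f"UCL = {xbar_bar + A2 * r_bar:.3f}, LCL = {xbar_bar - A2 * r_bar:.3f}")
    print(f"R chart:     CL = {r_bar:.3f}, "
          f"UCL = {D4 * r_bar:.3f}, LCL = {D3 * r_bar:.3f}")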
• Maintenance maximizes output, minimizes cost, and assures continued operation — customer satisfaction.
Maintenance — integrated strategy
• World-class key performance indicators
• Predictive maintenance
• Benefits of developing and implementing PMs
• Major elements of preventive maintenance
Realistic tolerancing — a simple graphical method for establishing optimum levels and appropriate tolerances for inputs. Once it is determined that a continuous output depends linearly on a continuous input, the output specification is used to create an input specification. Scatter plots and fitted line plots demonstrate association of inputs and outputs, not necessarily cause and effect. A realistic tolerancing method:
Step 1: Identify the KPOV of interest and note its specification.
Step 2: Select the KPIV of interest. Define a range of values for the KPIV that will likely optimize the KPOV.
Step 3: Run 30 samples over the range of the KPIV and record the output values.
Step 4: Plot the results with the KPIV on the x-axis and the output on the y-axis. If the plot has a tilt with little vertical scatter, a relation exists. Proceed to Step 5. If there is no tilt, the KPIV has no relation to the response variable.
Step 5: Determine the target value and tolerance of the KPIV.
• Draw a best-fit line through the data.
• Eliminate the data point furthest from the best-fit line.
• Draw a parallel line through the next furthest point from the best-fit line. Draw a second parallel line equidistant from the best-fit line on the opposite side. The vertical distance between these two parallel lines represents 95% of the total effect on the output of all factors other than the KPIV studied here. If specifications exist for the response variable, draw lines from those values on the y-axis to intersect the upper and lower confidence lines.
• Drop two lines from these intersection points to the x-axis. The distance between where these intersect the x-axis represents the maximum tolerance permitted for the input variable.
Step 6: Compare these values against the existing operating levels and implement necessary changes to the SOP. Document changes via the FMEA and control plan.
Gauge and measurement systems
• Management plan
• Long-term gauge control
• Long-term gauge control is the management of the basis of our understanding of our process. Remember, the quality of our process cannot be understood and controlled without understanding the quality of our measurements.
• Why do we need a long-term gauge plan? Long-term project control is dependent on measurement and analysis. The measurement system needs to be under control.
• Who is responsible for the long-term gauge plan? Those responsible for the process variables of interest. Gauge management: incorporate it into the local quality system and ensure that future owners are trained to implement it.
• What is in a long-term gauge plan?
1. Initial baseline analysis
2. Ownership details
3. Calibration control (chart?) with instructions
4. Handling and storage requirements
5. Maintenance requirements — procedures and log
6. Spare parts requirements
7. ID/tracking system
8. Ongoing MSA requirements (product/product changes, gauge changes, operator changes, etc.)
9. Thorough documentation
• What do you need to do to develop your long-term gauge plan?
• Your gauge:
• What was your initial baseline (GR&R) data? Is this gauge still appropriate?
• What is the amount of bias in your gauge? Linearity? How will you control this bias?
• Who “owns” and maintains the gauge?
• Who calibrates your gauge? How frequently?
• Which gauge would you use?
• What are the handling and storage requirements for the gauge?
• Who needs to maintain the gauge? What does this mean?
• How do you maintain the gauge? What are the spare parts requirements?
• How frequently and when should MSA be performed? By whom?
• Which one should you use and when?
• What documentation is required for the long-term gauge plan?
• How will we manage this documentation?
• What issues/roadblocks do I see in developing the long-term gauge plan?
• Implementing gauge plans
SIX SIGMA GREEN BELT TRAINING — TECHNICAL
Introduction
Agenda
Ground rules
Exploring our values
Objectives
Definition (it must be emphasized that we will be applying the Six Sigma methodology rather than following the “pack”)
Six Sigma goal
Defect reduction
Yield improvement
Improved customer satisfaction and higher return on investment
Comparison between three sigma and Six Sigma quality
SHORT HISTORICAL BACKGROUND
The business case for implementing Six Sigma (After the definition, this item is very important. It must be understood by all before moving on to a new topic. It is the reason why Six Sigma is going to be implemented in your organization. Therefore, not only must it be understood, but in addition it must make sense and be believable. Sharing the executive committee members list with everyone is one of the ways to make individuals understand the importance of the implementation process. Another way is to provide some background about the black belts as individuals and their commitment to Six Sigma and to identify specific projects that plague the organization, either genuine financial problems or issues perceived as problems by customers.)
Overview of the big picture
Deployment structure
Executive leadership (part-time basis): Executives are supposed to be the drivers of the Six Sigma process in directions that meet key business goals and address key customer satisfaction concerns.
Master black belt (full-time basis): Master black belts are the experts of Six Sigma tools and methodologies. They are responsible for training and coaching black belts. Master black belts, or shoguns as we call them, may also be responsible for leading large projects on their own.
Project champions (part-time basis): Project champions are accountable for the performance of black belts and the results of Six Sigma projects in their area. They are the conduit between the executive leadership and the black belt. They are responsible for eliminating bottlenecks and conflicts as they pertain to projects, especially in projects with cross-functional responsibilities.
Black belts (full-time basis): Black belts are responsible for leading and teaching Six Sigma processes within the company. They are also responsible for applying Six Sigma tools to complete a predetermined number of projects worth at least $250,000 each. (Projects are commonly worth $400,000–$600,000.) It is expected that the result will be a breakthrough improvement with a magnitude of 100X.
Green belts (part-time basis): Green belts are expected to help black belts with expediting and completing Six Sigma projects. They may take the lead in small projects of their own. They should also look for ways to apply Six Sigma problem-solving methods within their work area.
Rollout strategy (emphasize the importance of projects and measurement)
Training requirements
Black belts
Green belts
Project selection: define the project charter. This will provide the appropriate documentation for communicating progress and direction not only to the rest of the team but also to the champion.
Identify the customer.
The Y = f(X). The Y is the output and the Xs are the inputs. Identify the Y and determine the Xs. It is imperative to understand that most often a single Y may be influenced by more than one X. Therefore, we may have: Y = f(X1, X2, …, Xn). However, that is not the end. A single X may itself cascade into a further level, such that for X1 we may have X1 = f(x1, x2, …, xn). This is called cascading. (A short sketch of this cascading idea appears at the end of this subsection.)
Apply the project selection checklist. To ensure the selected issue will make a good Six Sigma project, a checklist can be applied to verify the project’s potential. Simple criteria for selection are the following six questions:
• Does the project have recurring events?
• Is the scope of the project narrow enough?
• Do metrics exist? Can measurements be established in an appropriate amount of time?
• Do you have control of the process?
• Does the project improve customer satisfaction?
• Does the project improve the financial position of the company?
If the answer to all of these questions is yes, then the project is an excellent candidate.
Develop a high-level problem statement. This is a high-level description of the issue to be addressed by the green belt or black belt. The problem statement will be the starting point for the application of Six Sigma methodology.
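A short sketch of the cascading Y = f(X) idea referred to above (Python; the functions and coefficients are hypothetical placeholders for real process relationships):

    # Top level: Y = f(X1, X2). A single X may itself cascade into
    # lower-level inputs, e.g., X1 = g(x1, x2). All coefficients are
    # hypothetical placeholders for real process relationships.
    def g(x1, x2):            # lower-level model for the input X1
        return 0.8 * x1 + 0.2 * x2

    def f(X1, X2):            # top-level model for the output Y
        return 2.0 * X1 - 0.5 * X2

    x1, x2, X2 = 5.0, 3.0, 4.0
    X1 = g(x1, x2)            # the cascaded input
    print(f"X1 = {X1:.2f}, Y = {f(X1, X2):.2f}")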
THE DMAIC PROCESS
The model: a structured methodology for executing Six Sigma project activities. Point out here that the model is not linear in nature. Quite often, teams may find themselves in multiple phases so that thoroughness is established.
Define: the purpose is to refine the project team’s understanding of the problem to be addressed. It is the foundation for the success of both the project and Six Sigma.
Measure: the purpose is to establish techniques for collecting data about current performance that highlights project opportunities and provides a structure for monitoring subsequent improvements.
Analyze: the purpose is to allow the team to further target improvement opportunities by taking a closer look at the data.
Improve: the purpose is to generate ideas about ways to improve the process; design, pilot, and implement improvements; and validate improvements.
Control: the purpose is to institutionalize process/product improvements and monitor ongoing performance.
THE DMAIC MODEL IN DETAIL
Define
It begins with a definition of the problem and ends with a completed project charter.
Define the problem: green belts may be required to formulate a high-level problem statement. Key points are:
• Identify the problem. The definition must be such that the improvement is identifiable.
• Understand the operational definitions.
• Understand the potential metrics of the situation.
• Identify through a rough trade-off analysis the positives and negatives of current performance and their relationship to the customer.
• Recognize that the identification process may be an iterative process itself, which means that there may be revision in the future.
Identify the customer. Determine who the “real” customer is. It may be an internal, external, or ultimate customer. The focal point here is to make sure that the customer — however defined — will benefit from solving this particular problem.
Identify the critical-to-quality characteristics (CTQs). The focus here is to ensure that there is a link between the characteristics identified and customer satisfaction. They help in focusing the team on issues important to the customer. Here, the operational definitions will make a difference if they do not reflect accurate descriptions of what the customer really needs or wants. Also in this stage, a preliminary measurement thought process begins to identify whether or not a consistent interpretation and measurement is ensured. Specificity is the key point.
• Make sure that the team identifies what matters to the customer.
• Ask whether or not the voice of the customer (VOC) has been accounted for, through Kano modeling, quality function deployment (QFD), benchmarking, market analysis, internal intelligence, etc.
• Prioritize the CTQs. Not all CTQs are equally important. They must be prioritized based on the following:
• Critical priority — items that will cause customer dissatisfaction unless these characteristics are functional.
• Performance — items that improve performance.
• Delighter — items that may not be crucial to the process improvement but will delight the customer.
Define (map) the process. A preliminary flow chart of the process is highly recommended here.
• Map the supplier, input, process, output, customer (SIPOC) model.
• Understand the meaning and contribution of each SIPOC element.
• Differentiate between the “is” and the “could be” processes.
• Identify the essential elements and the merely desirable elements.
• Identify the “hidden factory” — the hidden factory is the work that has been done but not counted.
Refine the project scope. Further specify project concerns; develop a micro problem statement based on the new SIPOC model of the process. Think of suspected sources of variation by using:
• Brainstorming
• 5 whys
• Cause-and-effect diagram and matrix
Update the project charter. This is the deliverable of the define stage. Additions and modifications are appropriate at this stage based on ALL information gained from the define stage. A project charter should include:
• The statement of the problem — concise, clear, and measurable
• The project scope
• The business case
• The project plan and milestones
• The goal and the expected results
• Roles and responsibilities
Measure
This establishes techniques for collecting data about the current performance of the process identified in the define stage. It begins with identifying the measurement and variation in the process and ends with a capability analysis. Measure is very important and has been recognized as such for a very long time. It is interesting to review the comments of Fourier (Adler, 1982, p. 537). In developing the theory of heat, he enumerates five quantities that in order to be numerically expressed require five different kinds of units, “namely, the unit of length, the unit of time, that of temperature, that of weight, and finally the unit which serves to measure quantities of heat.”
To which he adds the remark that “every undetermined magnitude or constant has one dimension proper to itself, and that the terms of one and the same equation could not be compared, if they had not the same exponent of dimension.”
Identify measurement and variation. A measure describes the quantity, capacity, or performance of a product, process, or service based on observable data. However, that measure may be different depending on where we measure, who is doing the measurement, and what kind of measuring instrument is used.
• Identify the sources of variation and the impact on performance.
• Identify different measures and the criteria for establishing good process measures.
• Identify the different kinds of data that are available and their important contribution.
• Explain why one should measure.
• Concept of variation
• Common causes
• Special causes
• Shift and drift
• Sources of variation
• Machines
• Material
• Method
• Measurement
• Mother nature
• People
• Measurement usage
• Measurement of inputs
• Measurement of process
• Measurement of outputs
Complete the appropriate items of the FMEA.
Determine the data type. The type is determined by what is measured, that is, attribute data or variable data.
Develop the data-collection plan. Consider the following:
• Purpose of data
• Collection plan
• Stratification plan
• Checksheets
• Sampling
• Sample size
Perform measurement system analysis. Measurement system analysis (MSA) is a quantitative evaluation of the tools and process used in making data observations.
• Types of MSA
• Operational definitions — used generally for nonmanufacturing applications
• Walking the process — used for nonmanufacturing applications
• Gauge R&R — used for variable or attribute data
• Repeatability
• Reproducibility
• Using the control chart method
• Using the ANOVA method
• Understand the concept of component variation in relationship to the gauge R&R
MSA fails. If the MSA fails at this stage, DO NOT collect more data. YOU MUST fix the problem before proceeding.
Perform data collection. This depends on the collection plan that you have defined. The better the collection plan, the better the data collected. Data collection is a process by which we accumulate enough information to identify the potential cause of the problem.
Perform capability analysis. Capability analysis is the study of how well a process is performing in meeting the expectations of customers (CTQs).
• Understand the difference between short- and long-term capability.
• Calculate capability for attribute and variable data.
• Calculate capability for normally and nonnormally distributed data.
• Calculate yield in a process.
• Calculate capability using a software program.
Analyze
In this stage, the focus is to target improvement opportunities by taking a closer look at the data.
Review analysis tools and apply the knowledge gained. The idea here is to help you identify “gaps” in the data-collection plan that require additional information. Also, at this point we may find that a solution requires further analysis before implementation. Tools usually reviewed for this purpose are: Pareto chart, run chart, box plot, histogram, and scatter plot.
Identify sources of variation. The purpose here is to target root causes that project teams validate and verify with observation, data, or experiment.
Improve
The idea here is to allow the project team to develop, implement, and validate improvement alternatives that will achieve desired performance levels as defined by CTQs.
Generate improvement alternatives. The idea here is to come up with alternatives to test as improvements to the problem’s root cause. Two techniques are usually used here: a) brainstorming and b) DOE.
• Criteria for improvement — quality, time, cost
• Refining improvement criteria
• Evaluating improvements
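Tying back to the capability bullets above, converting a defect rate to a Z value can be demonstrated with a few lines (Python; 66,807 DPM is the textbook long-term rate for a three sigma process):

    from statistics import NormalDist

    # Convert an observed defect rate (DPM) to a long-term Z value,
    # then add the conventional 1.5 sigma shift for the short-term
    # sigma level. 66,807 DPM is the textbook long-term rate for a
    # three sigma process, so Z(st) comes out to about 3.0.
    dpm = 66_807
    z_long_term = NormalDist().inv_cdf(1 - dpm / 1_000_000)
    z_short_term = z_long_term + 1.5

    print(f"Z(lt) = {z_long_term:.2f}, Z(st) = {z_short_term:.2f}")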
Pilot. A pilot is a trial implementation of a proposed improvement conducted on a small scale under close observation.
Validate improvement. To validate improvements, use data collected from the pilot and calculate the sigma value. Compare this value to the value that you had calculated in the analysis stage for capability.
Create the “should be” process map. This process map will provide a tool to explain the improvement to others and to guide the implementation efforts. Since this is the new and revised process, it should be different from the original process map of the define stage.
Update the FMEA. Using information from the “should be” process, the resulting portion of the FMEA should be completed.
Perform cost–benefit analysis. A cost–benefit analysis is a structured process for determining the trade-off between implementation costs and anticipated benefits of potential improvements.
Control
The final link of the DMAIC model is the control stage. Its purpose is to institutionalize process/product improvements and monitor ongoing performance in order to sustain the gains achieved in the improve stage.
• Develop control strategy
• Prevention vs. detection
• Mistake-proofing
• Control charts
• Understand the difference between limits and specifications.
• Understand the concept of long-term MSA.
Develop the control plan. A control plan provides a written summary description of the system for controlling parts and processes. In essence, the control plan is a reflection of the decisions made in the development of the control strategy. Furthermore, the control plan is used by the process owner as a reaction plan in case something goes wrong with the process.
Update standard operating procedures (SOPs) and the training plan. The final step of the control stage is to update all the relevant documentation, including the standard operating procedures. This update should include all revised process steps and control measures.
SIX SIGMA GREEN BELT TRAINING — MANUFACTURING
Introductions:
Name
Title or position
Organization
Background in quality improvement programs, statistics, etc.
Hobbies/personal information
Agenda
Ground rules
Exploring our values
Six Sigma overview
• Six Sigma focus
• Delighting the customer through flawless execution
• Rapid breakthrough improvement
• Advanced breakthrough tools that work
• Positive and deep culture change
• Real financial results that impact the bottom line
What is Six Sigma?
• Performance target
• Practical meaning
• Value
• A problem-solving methodology
Vision
Philosophy
Aggressive goal — metric (standard of measurement)
• Benchmark
• Method — how are we going to get there?
• Customer focus
• Breakthrough improvement
• Continual improvement
• People involvement
Bottom line: Six Sigma defines the goals of the business
• Defines performance metrics that tie to the business goals by identifying projects and using performance metrics that will yield clear business results. Applies advanced quality and statistical tools to achieve breakthrough financial performance.
The Six Sigma strategy
• Which business function needs it?
• Is your leadership on board?
• Fundamentals of leadership
• Challenge the process
• Inspire a shared vision
• Enable others to act
• Model the way
• Encourage the heart
• Six Sigma is a catalyst for leaders
The breakthrough phases
The DMAIC model — high-level overview
This drives breakthrough improvement
The foundation of the Six Sigma tools
• Cost of poor quality
• What is cost of poor quality? In addition to the direct costs associated with finding and fixing defects, cost of poor quality also includes:
• The hidden cost of failing to meet customer expectations the first time
• The hidden opportunity for increased efficiency
• The hidden potential for higher profits
• The hidden loss in market share
• The hidden increase in production cycle time
• The hidden labor associated with ordering replacement material
• The hidden costs associated with disposing of defects
Getting there through inspection
• Defects and the hidden factory
• Rolled-throughput yield vs. first-time yield
What causes defects? Excess variation due to a) manufacturing processes, b) supplier (incoming) material variation, and c) unreasonably tight specifications (tighter than the customer requires).
Dissecting process capability — premise of Six Sigma: sources of variation can be a) identified and b) quantified. Therefore, they can be controlled or eliminated.
How do we improve capability?
Six Sigma: metrics and continual improvement
• Six Sigma is characterized by a) defining critical business metrics, b) tracking them, and c) improving them using proactive process improvement. Six Sigma’s primary metric is defects per unit, which is directly related to rolled-throughput yield (Yrt):
• Yrt = e^–dpu
• Cost of poor quality and cycle time (throughput) are two other metrics
• Continual improvement
• Calculating the product sigma level
Metrics
• Defects per unit (DPU) drives plant-wide improvement
• Defects per million opportunities (DPMOs) allow for comparison of dissimilar products
• Sigma level allows for benchmarking within and across companies
• Tracking trends in metrics
• Harvesting the fruit of Six Sigma
• PPM conversion chart
Translating needs into requirements
Deployment success: if and ONLY if Six Sigma
• Directly affects quality, cost, cycle time, and financial results
• Focuses on the customer and critical metrics
• Directly attacks variation, defects, and the hidden factory
• Ensures a predictable factory
• Establishes black belt execution strategy with the support of management
Roles and responsibilities
Executive management:
• Will set meaningful goals and objectives for the corporation
• Will drive the implementation of Six Sigma publicly
Champion:
• Will select black belt projects consistent with corporate goals
• Will drive the implementation of Six Sigma through public support and removal of barriers
Master black belt:
• The expert in Six Sigma tools and methodologies.
• Responsible for training and coaching black belts. Master black belts, or shoguns as we call them, may also be responsible for leading large projects on their own.
Black belt:
• Ensures sources of variation in manufacturing and transactional processes are objectively identified, quantified, and controlled or eliminated. How? By using the breakthrough strategy, process performance is sustained through well-developed, documented, and executed process control plans, such as defining the goal and identifying the model to use.
• Goal: to achieve improvements in rolled-throughput yield, cost of poor quality, and capacity-productivity.
• To deliver successful projects using the breakthrough strategy
• To train and mentor the local organization on Six Sigma
• The model
• Kano model
• QFD — house of quality
• D-M-A-I-C
Green belt:
• Will deliver successful localized projects using the breakthrough strategy.
Six Sigma instructor:
• Will make sure every black belt candidate is certified in the understanding, usage, and application of Six Sigma tools.
Describe BB execution strategy
• To overview the steps
• To overview the tools
• To overview the deliverables
• To discuss the role of the black belt
PHASES OF PROCESS IMPROVEMENT
The Define Phase
Who is the customer?
• What does the customer want?
• How can the organization benefit from fixing a problem?
A simple QFD (quality function deployment) tool is used to emphasize the importance of understanding customer requirements, the CTs (critical tos): CTCost, CTDelivery, CTQuality. The tool relates the Xs and Ys (customer requirements) using elements documented in the process map and existing process expertise. Result: a Pareto of Xs that are used as input into the FMEA and control plans. These are the CTPs, critical to the process — anything that we can control or modify about our process that will help us achieve our objectives.
The Measurement Phase
Establish the performance baseline. The measure phase — IMPORTANT!!! A well-defined project results in a successful project. Therefore, the problem statement, objective, and improvement metric need to be aligned. If the problem statement identifies defects as the issue, then the objective is to reduce defects, and the metric to track the objective is “defects.” This holds true for any problem statement, objective, and metric (% defects, overtime, RTY, etc.).
• Primary metric — a green belt needs to be focused; if other metrics are identified that impact the results, identify these as secondary metrics, i.e., reducing defects is the primary improvement metric, but we do not want to reduce line speed (line speed is the secondary metric).
• Project benefits — do not confuse projected project benefits with your objective. Make sure you separate these two items. There are times when you may achieve your objective yet not see the projected benefits. This is because we cannot control all issues. We need to tackle them in a methodical order.
Purpose of the measurement phase
• Define the project scope, problem statement, objective, and metric.
• Document the existing process (using a process map, C&E matrix, and a FMEA).
• Identify key output variables (Ys) and key input variables (Xs).
• Establish a data-collection system for your Xs and Ys if one does not exist.
• Evaluate the measurement system for each key output variable.
• Establish baseline capability for key output variables (potential and overall).
• Document the existing process.
Establish a data-collection system
• Determine if you have a method by which you can effectively and accurately collect data on your Xs and Ys in a timely manner. If this is not in place, you will need to implement a system. Without a system in place, you will not be able to determine whether you are making any improvements in your project.
• Establish this system such that you can historically record the data you are collecting.
• This information should be recorded in a database that can be readily accessed.
• The data should be aligned in the database in such a manner that for each output (Y) recorded, the operating conditions (X) are identified. This becomes important for future reference.
• This data-collection system is absolutely necessary for the control phase of your project. Make sure all those who are collecting data realize its importance.
Measurement Systems Analysis
Purpose: to determine whether the measurement system, defined as the gauge and operators, can be used to precisely measure the characteristic in question. We are not evaluating part variability, but gauge and operator capability.
• Guidelines
• Determines the measurement capabilities for Ys
• Needs to be completed before assessing capability of Ys
• These studies are called gauge repeatability and reproducibility (GR&R) studies, measurement systems analysis (MSA), or measurement systems evaluation (MSE)
• Indices:
• Precision to tolerance (P/T) ratio = proportion of the specification taken up by measurement error. Ten percent is desirable.
• Precision to total variation (P/TV) ratio (%R&R) = proportion of the total variability taken up by measurement error. Thirty percent is marginal.
Capability studies: used to establish the proportion of the operating window taken up by the natural variation of the process. Short-term (potential) and long-term (overall) estimates of capability indices are taught. (The reader may want to review Volume 1 or Volume 4 for the discussion on long and short capability.)
• Indices used assuming the process is centered: Cp, Pp, Zst
• Indices used to evaluate a shifted process: Cpk, Ppk, Zlt
Measure: potential project deliverables
• Project definition
• Problem description
• Project metrics
• Process exploration:
• Process flow diagram
• C&E matrix, PFMEA, fishbones
• Data-collection system
• Measurement System(s) Analysis (MSA):
• Attribute/variable gauge studies
• Capability assessment (on each Y)
• Capability (Cpk, Ppk, σ level, DPU, RTY)
• Graphical and statistical tools
• Project summary
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
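A minimal sketch of the P/T and %R&R indices defined earlier (Python; the sigma estimates and spec limits are hypothetical values that would normally come from a GR&R study, and the 6-sigma convention is used for P/T, though some texts use 5.15):

    # Measurement system indices. sigma_meas, sigma_total, and the
    # spec limits are hypothetical numbers that would come from an
    # actual GR&R study.
    sigma_meas = 0.05    # measurement system standard deviation
    sigma_total = 0.40   # total observed standard deviation
    usl, lsl = 11.0, 9.0

    pt_ratio = 6 * sigma_meas / (usl - lsl)  # precision to tolerance
    p_tv = sigma_meas / sigma_total          # precision to total variation

    print(f"P/T  = {pt_ratio:.1%} (10% or less is desirable)")
    print(f"%R&R = {p_tv:.1%} (30% is marginal)")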
THE ANALYSIS PHASE
• Purpose of the analysis phase
• To identify high-risk input variables (Xs) from the failure modes and effects analysis (FMEA).
• To reduce the number of process input variables (Xs) to a manageable number via hypothesis testing and ANOVA techniques.
• To determine the presence of, and potential elimination of, noise variables via multi-vari studies.
• To plan and document initial improvement activities.
• Failure modes and effects analysis:
• Documents effects of failed key inputs (Xs) on key outputs (Ys).
• Documents potential causes of failed key input variables (Xs).
• Documents existing control methods for preventing or detecting causes.
• Provides prioritization for actions and documents actions taken.
• Can be used as the document to track project progress.
Multi-vari studies: study process inputs and outputs in a passive mode (natural day-to-day variation). Their purpose is:
• To identify and eliminate major noise variables (machine to machine, shift to shift, ambient temperature, humidity, etc.) before moving to the improvement phase.
• To take a first look at major input variables.
• To help select or eliminate variables for study in designed experiments.
• To identify the vital few Xs.
• To determine the governing transformation equation.
Analyze: potential project deliverables
• Project definition
• Problem description
• Project metrics
• Passive process analysis:
• Graphical analysis
• Multi-vari studies
• Hypothesis testing
• Updated PFMEA
• Project summary:
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
The Improvement Phase
• DOE (design of experiments) is the backbone of process improvement.
• From the subset of vital few Xs, experiments are designed to actively manipulate the inputs to determine their effect on the outputs (Ys).
• This phase is characterized by a sequence of experiments, each based on the results of the previous study. The intent is to generate improvement alternatives.
• Identify “critical” variables during this process.
• Usually three to six Xs account for most of the variation in the outputs.
• Control and continuous improvement.
• Perform a pilot.
• Validate the improvement.
• Create the “should be” process map.
• Update the FMEA.
• Perform preliminary cost–benefit analysis.
Improve: potential project deliverables
• Project definition:
• Problem description
• Project metrics
• Design of experiments:
• DOE planning sheet
• DOE factorial experiments
• Y = f(x1, x2, x3, …)
• Updated PFMEA
• Project summary:
• Conclusions
• Issues and barriers
• Next steps
• Completed local project review
The Control Phase
• Implement long-term control strategy and methods.
• Develop an execution plan.
• Optimize, eliminate, automate, and control vital few inputs.
• Document and implement the control plan.
• Sustain the gains identified. • Reestablish and monitor long-term delivered capability. • Implement continuous improvement efforts. (This is perhaps the key responsibility of all the green belts in the functional area.) • Establish execution strategy support systems. • Learn safety requirements. • Define maintenance plans. • Establish system to track special causes. • Draw up required and critical spare parts list. • Create troubleshooting guides. • Draw up control plans. • Create SPC charts. • Buy process monitors. • Set up inspection points. • Set up metrology control. • Set workmanship standards. • Others? Control: potential project deliverables • Project definition: • Problem description • Project metrics • Optimization of Ys: • Monitoring Ys • Eliminating or controlling Xs • Sustaining the gains: • Updated PFMEA • Process control plan • Action plan • Project summary: • Conclusions • Issues and barriers • Final report • Completed local project review Additional items of discussion. The following items should be discussed at the appropriate and applicable complexity level of the participants. In some cases, some of the following items may be just mentioned but not discussed. Rolled-throughput yield • The classical perspective of yield • Simple first-time yield = traditional yield • Measuring first-pass yield Normalized yield • Complexity is a measure of how complicated a particular good or service is. Theoretically, complexity will likely never be quantified in an exacting manner. If we assume that all characteristics are independent
and mutually exclusive, we may say that complexity can be reasonably estimated by a simple count. This count is referred to as an opportunity count. In terms of quality, each product or process characteristic represents a unique opportunity to either add or subtract value. Remember, we only need to count opportunities if we want to estimate a sigma level for comparisons of goods and services that are not necessarily similar.
• Formulas to know
• Hidden factory
• Take away — rolled-throughput yield
• Integrates rework loops
• Highlights “high-loss” steps… Put project emphasis here!
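A short numeric sketch may help here. The step yields below are hypothetical; the point is that rolled-throughput yield is the product of first-pass yields across steps (exposing the hidden factory), while normalized yield is the per-step geometric mean:

```python
# Rolled-throughput and normalized yield sketch (step yields are
# hypothetical). RTY multiplies first-pass yields across all steps.
step_yields = [0.98, 0.95, 0.99, 0.90, 0.97]  # first-pass yield per step

rty = 1.0
for y in step_yields:
    rty *= y

normalized_yield = rty ** (1 / len(step_yields))  # per-step geometric mean

print(f"Rolled-throughput yield: {rty:.3f}")      # ~0.80 for these numbers
print(f"Normalized yield/step:   {normalized_yield:.3f}")
print(f"Hidden factory loss:     {1 - rty:.1%} of units reworked/scrapped")
```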
DPMO, counting opportunities: Nonvalue-add rules: an opportunity count should never be applied to any operation that does not add value. Transportation and storage of materials provide no opportunities. Deburring operations do not count either. Testing, inspection, gauging, etc. do not count. The product in most cases remains unchanged. An exception: an electrical tester where the tester is also used to program an EPROM. The product was altered and value was added. Supplied components rules: each supplied part provides one opportunity. Supplied materials, such as machine oil, coolants, etc., do not count as supplied components. Connections rules: each “attachment” or “connection” counts as one. If a device requires four bolts, there would be an opportunity of four, one for each bolt connected. A sixty-pin integrated circuit, SMD, soldered to a PCB counts as sixty connections. (Sanity check rule: “Will applying counts in these operations take my business in the direction it is intended to go?” If counting each dimension checked on a CMM inflates the denominator of the equation, adds no value, and increases cycle time when the company objective is to take cost out of the product, then this type of count would be opposed to the company objective. Hence, it would not provide an opportunity. Once you define an opportunity, however, you must institutionalize that definition to maintain consistency. This opportunity, if it is good enough for the original evaluation, must also be good enough to be evaluated at the end of the project. In other words, the opportunity count must have the same base; otherwise it is meaningless.) Introduction to data • Description and definitions • What do you want to know? • Discrete vs. continuous data • Categories of scale • Nominal scale — nominal scales of measure are used to classify elements into categories without considering any specific property. Examples of nominal scales include “causes” on fishbone diagrams, yes/no, pass/fail, etc. • Ordinal Scale — ordinal scales of measure are used to order or rank nominal (pass/fail) data based on a specific property. Examples of
ordinal scales include relative height, Pareto charts, customer satisfaction surveys, etc.
• Likert scale (ordinal) — example rating scale ranges: five-point school grading system (A B C D E); seven-point numerical rating (1 2 3 4 5 6 7); verbal scale (excellent, good, average, fair, poor).
• Interval and ratio scale — interval scales of measure are used to express numerical information on a scale with equal distance between categories, but no absolute zero. Examples are: temperature (°F and °C), a dial gauge sitting on top of a gauge block, comparison of differences, etc. Ratio scales of measure are used to express numerical information on a scale with equal distance between categories, but with an absolute zero in the range of measurement.
• A tape measure, ruler, position vs. time at constant speed, and so on.
Selecting Statistical Techniques
At this point of the discussion the instructor may want to introduce a computer software package to facilitate the discussion of statistical tools. Key items of discussion should be:
• Entering data into the program
• Cutting and pasting
• Generating random numbers
• Importing and exporting data from databases, Excel, ASCII, etc.
• Pull-down menus of the software (for general statistics, graphs, etc.)
• Manipulate and change data
• Basic statistics and probability distributions
• Calculate the z scores and probability
• Calculate capability
• Control charts
Discussion and practice of key statistical techniques and specific tools
Basic statistics
• Mean, median, mode, variance, and standard deviation
• Distributions
• Normal, Z-transformation, normal and nonnormal probability plots, nonnormal, Poisson, binomial, hypergeometric, t-distribution
• Central limit theorem — very important concept. Emphasis must be placed on this theorem because it is the fundamental concept (backbone) of inferential statistics and the foundation for tools to be learned later this session. The central limit theorem allows us to assume that the distribution of sample averages will approximate the normal distribution if n is sufficiently high (n > 30 for unknown distributions). The central limit theorem also allows us to assume that the distributions of sample averages of a normal population are themselves normal, regardless of sample size. The standard error of the mean (SE mean) shows that as sample size increases, the standard deviation of the sample means decreases. The standard error will help us calculate confidence intervals.
Confidence intervals (CIs) are derived from the central limit theorem and are used by black belts to quantify a level of certainty or uncertainty regarding a population parameter based on a sample.
• Degrees of freedom
• Standard error
• Confidence
Parametric confidence intervals — parametric confidence intervals assume a t-distribution of sample means and use it to calculate confidence intervals.
Confidence intervals for proportions — confidence intervals can also be constructed for fraction defective (p), where x = number of defect occurrences, n = sample size, and p = x/n = proportion defective in the sample. For cases in which the number defective (x) is at least 5 and the total number of samples n is at least 30, the normal distribution approximation can be used as a shortcut. For other cases, the binomial tables are needed to construct this confidence interval.
• Accuracy and precision
• Defects per million
• Population vs. sample
• Sampling distribution of the mean
• Concept of variation
• Additive property of variances
• Attribute or variable
Types of data — variable and attribute
• Rational subgroups
• Data-collection plan — your data-collection plan and execution will make or break your entire project!
Data-collection plan — ask yourself the following questions:
• What do you want to know about the process?
• What are the likely causes of variation in the process (Xs)?
• Are there cycles in the process?
• How long do you need to collect data to capture a true picture?
• Who will be collecting the data?
• How will you test your measurement system?
• Are the operational definitions detailed enough?
• How will you display the data?
• Is data available? If not, how will you prepare data-collection sheets?
• Where could data-collection errors occur? What are your correction plans?
Process capability and performance
• Process capability
• Capability
• Process characterization
• Converting DPM to a Z value
• Short-term vs. long-term
• Indicating the spread
• Indicates the spread and center
• Indicates spread and centering
• Process shift — how much should we expect? Is 1.5σ enough? Where does it come from?
• The map to the indicators and what do they mean
Stability
• Process control
• Pooled vs. total variation
• Short-term vs. long-term
• Which standard deviation?
• Area of improvement
• What is good?
Measurement system analysis
• Why MSA? How does variation relate to MSA?
• Measurement systems
• Resolution
• Bias
• Accuracy vs. precision
• Linearity
Measurement tools
• A simple gauge
• Calibration
• Consistency
• Gauge R&R
• GR&R with ANOVA
• Indices (Cp, Cpk, Pp, Ppk)
• Cp is the “potential” capability of your process assuming you are able to eliminate all nonrandom causes. In addition, Cp assumes the process is centered. This metric is also called “process entitlement” or the best your process could ever hope to perform in the short term. In order to calculate this metric you need a close approximation for short-term standard deviation (which is not always available).
• Cpk and Ppk use the mean, not only the tolerance band, to estimate capability. The term Cpk = min(Cpk lower, Cpk upper) is stated as the shortest numerical distance between the mean and the nearest spec limit.
How do you know if your gauge is good enough? Introduce definition of quality (ISO 8402)
Control charts
• Variable and attribute (X-bar and s, X-bar and R, IndX and MR, p, c, etc.)
• Multi-vari charts: the purpose of these charts is to narrow the scope of input variables and, therefore, to identify the inputs and outputs (KPIVs and KPOVs)
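To tie the capability indices and the DPM-to-Z conversion together, consider the following minimal sketch. All numbers are illustrative, and the 1.5σ shift shown at the end is the conventional assumption rather than a law:

```python
# Capability index sketch (illustrative values). Cp/Cpk use the
# short-term sigma; Pp/Ppk use the long-term (overall) sigma.
from scipy.stats import norm

usl, lsl = 10.5, 9.5
mean = 10.1
sigma_st, sigma_lt = 0.12, 0.16   # assumed short/long-term estimates

cp  = (usl - lsl) / (6 * sigma_st)
cpk = min(usl - mean, mean - lsl) / (3 * sigma_st)
pp  = (usl - lsl) / (6 * sigma_lt)
ppk = min(usl - mean, mean - lsl) / (3 * sigma_lt)

# Convert the long-term defect rate to a Z (sigma) value, and illustrate
# the conventional 1.5-sigma shift between Zlt and Zst.
p_def_lt = norm.sf((usl - mean) / sigma_lt) + norm.sf((mean - lsl) / sigma_lt)
z_lt = norm.isf(p_def_lt)

print(f"Cp={cp:.2f} Cpk={cpk:.2f} Pp={pp:.2f} Ppk={ppk:.2f}")
print(f"DPM(long term)={p_def_lt * 1e6:.0f}  Zlt={z_lt:.2f}  Zst~{z_lt + 1.5:.2f}")
```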
HYPOTHESIS TESTING INTRODUCTION Why learn hypothesis testing? Hypothesis testing employs data-driven tests that assist in the determination of the vital few Xs. Black belts use this tool to identify sources of variability and establish relationships between Xs and Ys. To help identify the vital few Xs, historical or current data may be sampled. (Passive: you have either directly sampled your process or have obtained historic sample data. Active: you have made a modification to your process and then sampled. Statistical testing provides objective solutions to questions that are traditionally answered subjectively. Hypothesis testing is a stepping stone to ANOVA and DOE.) • Hypothesis testing terms that you need to remember • Steps in hypothesis testing: • Hypothesis testing roadmap • Hypothesis testing description • The null and alternate hypotheses • The hypothesis testing form • Test for significance • Significance level • Alpha risk — this alpha level requires two things: a) an assumption of no difference (Ho) and b) a reference distribution of some sort — producer’s risk • Beta risk — consumer’s risk
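A minimal worked test may help make the alpha-risk discussion concrete. The samples below are simulated, and the 0.05 alpha level is simply the conventional choice:

```python
# Two-sample t-test sketch: is there a real difference between the
# means of two process settings? (Simulated data; alpha = 0.05.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
before = rng.normal(loc=10.00, scale=0.20, size=30)  # baseline sample
after  = rng.normal(loc=10.15, scale=0.20, size=30)  # changed process

# Ho: means are equal; Ha: means differ.
t_stat, p_value = stats.ttest_ind(before, after)

alpha = 0.05  # producer's (alpha) risk we are willing to accept
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject Ho: the difference is statistically significant.")
else:
    print("Fail to reject Ho: no detectable difference at this alpha.")
```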
PARAMETERS VS. STATISTICS
Parameters deal with populations and are generally denoted with Greek letters. Statistics deal with samples and are generally denoted with English letters. There is no substitute for professional judgment. It is true that in hypothesis testing we answer the practical question: “Is there a real difference between _____ and _____ ?” However, we use relatively small samples to answer questions about population parameters. There is always a chance that we selected a sample that is not representative of the population. Therefore, there is always a chance that the conclusion obtained is wrong. With some assumptions, inferential statistics allows us to estimate the probability of getting an “odd” sample. This lets us quantify the probability (P value) of a wrong conclusion. What is signal-to-noise ratio? Managing change Measures and rewards An introduction to graphical methods • Pareto • Histogram • Run chart • Scatter plot • Correlation vs. causality
• Boxplot • Hypothesis tests for means • Comparison of means t Distribution Hypothesis testing for attribute data Useful definitions Hypothesis tests: proportions Chi-square test for independence Chi-square test Chi-square test for a relationship ANOVA Why ANOVA?
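As a bridge into ANOVA, the sketch below runs a one-way analysis on three simulated machines; a small P value would flag machine-to-machine differences as a candidate vital-few X for the improvement phase:

```python
# One-way ANOVA sketch: do three machines produce different means?
# (Simulated data for illustration.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
machine_a = rng.normal(10.00, 0.15, size=20)
machine_b = rng.normal(10.00, 0.15, size=20)
machine_c = rng.normal(10.20, 0.15, size=20)  # shifted on purpose

f_stat, p_value = stats.f_oneway(machine_a, machine_b, machine_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value says at least one machine mean differs from the others.
```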
INTRODUCTION TO DESIGN OF EXPERIMENTS
What is experimental design? Organizing the way in which one changes one or more input variables (Xs) to see if any of them, or any combination of them, affects the output (Y) in a significant way. A well-designed experiment eliminates the effect of all possible Xs except the ones that you changed. Typically, if the output variable changes significantly, it can be tied directly to the input X variable that was changed and not to some other X variable that was not changed. The real power of experimentation is that sometimes we get lucky and find a combination of two or more Xs that makes the Y variable perform even better!
• Benefits of DOE
• Why not one factor at a time?
• Types of experiments
• Classes of DOE
• Terms used in DOE
• Main effects and interactions
• Contrast
• Yates standard order
• Run order for a DOE
• Strategy of experimentation
• Barriers to effective experimentation
Focus on the X-Y relationship
Trial and error
One factor at a time
Full factorial experiment
Things to watch for in experiments
Randomization
• Repetition and replication
• 2k factorials
• Advantages of 2k factorials
• Standard order of 2k designs
• Interactions
• Interaction effects • Interactions for the three-way design • Main effects • Cube plots • Types of 2k factorials Center points and blocking • Adding center points • In two-level designs, there is a risk of missing a curvilinear relationship. Inclusion of center points is an efficient way to test for curvature without adding a large number of extra runs. • Confounding and blocking • Residuals analysis: • Residuals
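The 2k vocabulary above (Yates standard order, main effects, interactions) can be demonstrated on a toy 2^3 full factorial. The responses are invented; each effect is the average response at the high level minus the average at the low level:

```python
# 2^3 full factorial sketch in Yates standard order (-1/+1 coding).
# Response values are hypothetical.
import itertools
import numpy as np

# Yates standard order: A varies fastest, then B, then C.
runs = np.array([(a, b, c) for c, b, a in
                 itertools.product([-1, 1], repeat=3)])
y = np.array([60, 72, 54, 68, 52, 83, 45, 80.0])  # assumed responses

def effect(contrast):
    """Average y at the high level minus average y at the low level."""
    return y[contrast == 1].mean() - y[contrast == -1].mean()

A, B, C = runs[:, 0], runs[:, 1], runs[:, 2]
print(f"Main effects: A={effect(A):.1f}  B={effect(B):.1f}  C={effect(C):.1f}")
print(f"Interactions: AB={effect(A * B):.1f}  AC={effect(A * C):.1f}")
```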
SCREENING DESIGNS
These designs are a powerful tool for analyzing multiple factors and interactions. The designs offer the flexibility of a reduced run size without compromising information. One word of caution: do not reduce the experiment too far. By doing fewer runs, you may not obtain the desired level of information.
Factorial experiments — the success of fractional factorials is based on the assumption that main effects and lower-order interactions are generally the key factors. Full factorials can usually be derived from a fractional factorial experiment once nonsignificant factors are eliminated.
• Fractional factorials
• Design resolution
• Choosing a design
• Notation
• Alias structure
Planning experiments
• Team involvement
• Maximize prior knowledge
• Identify measurable objectives
• FMEA on all steps of the execution
• Replication and repetition consideration
• Verify and validate data collection and analysis procedures
Steps to experimentation
• Define the problem. What is the objective of the experiment?
• Establish the objective.
• Select the response variables.
• Select the independent variables.
• Choose the variable levels.
• Select the experimental design.
• Sequential experimentation
• Select experimental design
• Screening/fractional factorial
• Full factorial/partial
• Consider the sample plan: how many runs can we afford? (The more runs or samples, the better the understanding and confidence in the result.) How are we controlling noise and the controllable variables that we know about?
• What is our plan for randomization?
• Walk through the experiment
• Collect data
• Analyze data
• Draw statistical conclusions
• Replicate results
• Draw practical solutions
Implement solutions
• Understand the current process.
• Is output qualitative or quantitative?
• (A vs. B) or (50 vs. 100)?
• What is the baseline capability?
• Is your process under statistical control?
• Is the measurement system adequate?
• Factor selection
• Which factors (KPIVs) do we include?
• Where should they come from?
• Process map
• Cause-and-effect matrix
• FMEA
• Multi-vari study results
• Brainstorming (fishbone)
• Process knowledge
• Operator experience
• Customer/supplier input
• Level selection. After the test factors are identified, we must set the levels of those factors we want to test. What is the right level differentiation to obtain the information needed? If the levels are too wide or narrow, nothing will be gained. Level guideline: 20% above and below the specs. If no specs, +/– 3 sigma from the mean.
• What will the experiment cost?
• Are all of the necessary players involved (informed)?
• How long will it take?
• How are we going to analyze the data?
• Have we planned a pilot run and walked through the process?
• Has the necessary paperwork been completed?
• Make sure the MSA has been validated.
• Budget and timelines (The goal in DOE: to find a design that will produce a specific desired amount of information at a minimum cost to the company.)
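Since the outline moves from full factorials to fractional and screening designs, a construction sketch may help: a 2^(4-1) half fraction built with the common generator D = ABC, which by construction aliases the main effect of D with the ABC interaction:

```python
# 2^(4-1) half-fraction sketch: start from a 2^3 full factorial and
# generate the fourth factor with D = A*B*C (a resolution IV design).
import itertools
import numpy as np

base = np.array(list(itertools.product([-1, 1], repeat=3)))  # A, B, C
D = base[:, 0] * base[:, 1] * base[:, 2]                     # generator
design = np.column_stack([base, D])

print("Run  A  B  C  D=ABC")
for i, row in enumerate(design, start=1):
    print(f"{i:>3} " + " ".join(f"{v:>2}" for v in row))

# Alias check: the D column is identical to the ABC contrast, so the
# main effect of D is confounded with the ABC interaction (I = ABCD).
abc = base[:, 0] * base[:, 1] * base[:, 2]
print("D aliased with ABC:", bool(np.all(D == abc)))
```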
Four phases of designed experiments:
• Planning: careful planning involves clearly defining the problem of interest, the object of the experiment, and the environment in which the experiment will be carried out.
• Screening: initial experiments aim to reduce the number of potentially influential variables to a vital few. Screening allows us to focus process improvement efforts on the most important variables. Screening designs include two-level full and fractional factorials, general full factorials, and Plackett-Burman.
• Optimization: after we have identified the vital few variables by screening, we need to determine the best values in order to optimize a process; for example, we may want to maximize a yield or reduce product variability. Optimization designs include full factorial designs (two-level and general) and response surface designs (central composite and Box-Behnken).
• Verification: we can perform a follow-up experiment at the predicted best process conditions to confirm optimization results.
Fractional factorial designs
Purpose: to determine which main effects (factors) are important. Key features:
1. Know which resolution you are running: always two-level factorials.
2. Useful to estimate mostly main effects (not interactions).
3. They can be built up to a higher-order blocked factorial design.
4. Limited to 15 runs.
5. Don’t expect more than what the design will provide.
Recommendation: use these designs when you need to narrow down the list of important factors. They are easy to interpret and cost effective.
Screening designs (full or fractional)
Purpose: to investigate how seven factors or fewer interact to drive a process. Key features:
1. Two-level factorials. Resolution IV, V, or higher.
2. General full factorials.
3. These allow estimation of at least two-way interactions.
4. They can model weak curvature through center points and can be built up into a response surface (blocked central composite) design to model more pronounced curvature.
5. They provide direction for further experimentation in search of an optimal solution.
Recommendation: these are the designs most often used in industry. They are good, low-cost, all-purpose designs.
Response surface designs
Purpose: to model responses that exhibit quadratic (curvilinear) relationships with the factors. Key features:
1. Recommended for nonsequential experiments. (Only one shot!)
2. Use when extreme combinations cannot be run.
3. Excellent for optimizing since curvature is typically seen around the optimum.
4. Designs are costlier (more runs). Factors of interest should be low in number.
5. These can be used to minimize variation.
6. These can be used to put the process on target, maximize, or minimize a measure of interest.
How do I sustain the improvement? Tools to ensure the process remains in control.
Keys to success
• Early involvement of all work cell/department members.
• Update all affected parties (including supervisors/managers) regularly.
• Get buy-in — no surprises!
• Poka yoke the process.
• Establish frequent measurement.
• Establish procedures for the new/updated process.
• Train everyone — assign responsibilities.
• Monitor the results.
How do I transition my project?
• Assure your project is complete enough to transition.
• No loose ends — a plan (project action plan) for everything not finalized.
• Start early in your project to plan for transitioning.
• Identify team members at start of project.
• Remind them they are representatives of a larger group.
• Communicate regularly with people in the impacted area.
• Display your project in the impacted area during all phases. Remember, no surprises.
• Hold regular updates with the impacted area, assuring their concerns are considered by your team.
• When possible, get others involved to help.
• Data collection.
• Idea generation (brainstorming events).
• What is a project action plan? It is a documented communication tool (contract) which allows you to identify:
• What is left to do to complete your project?
• Who is responsible for carrying out each task?
• When should each task be complete?
• How should it be accomplished?
Do I have to have one? Only if there are unfinished tasks to your improvement process that you expect others to carry out after the transition. (The tasks must be negotiated and agreed to.)
Who will monitor the plan for implementation/completion? Both you and the responsible supervisor/manager who assumes ownership.
Who has ultimate responsibility? The owner of each task and the responsible supervisor/manager.
Product changes • Revise drawings by submitting EARs. • Work with process, test, and product engineers. Process changes • Physically change the process flow (5S the project area). To ensure your gains are sustainable you must start with a firm foundation. 5S standards are the foundation that supports all the phases of Six Sigma manufacturing. The foundation of a production system is a CLEAN and SAFE work environment. Its strength is dependent upon employee commitment to maintaining it. • Develop visual indicators. Create a visual factory. • Establish/buy new equipment to aid assembly/test. • Poka yoke wherever possible including forms. • Procedures (standardized work instructions). • Develop new procedures or revise existing ones. • Notify quality assurance of new procedure to incorporate in internal audits. • Provide QA a copy of standardized work instructions. • Measurements (visual indicators). • Build into process the posting of key metric updates. • Make it part of someone’s regular job to do timely evaluations. • Make it someone’s job to review the metric and take action when needed. • Training — train everyone in the new process. (Don’t leave until there is full understanding.)
CONTROL PLANS The control plan provides a written summary description of the system for controlling parts and processes; it is used to minimize process and product variation and describes the actions that are required at each phase of the process including receiving, in-process, final assembly, and shipping, to ensure that all process outputs will be in a state of control. A control plan for operational actions such as ordering, order taking, invoicing, billing, etc. can also be utilized for transactional operations. The control plan does not replace the information contained in detailed operator instructions. Since processes are expected to be continually updated and improved, the control plan is a living document, reflecting the current methods of control and measurement systems used. • Development and implementation Developing a control plan • A basic understanding of the process must be obtained. Establish a multifunction team to gather and utilize appropriate available information, such as: • Process flow diagram • Failure mode and effects analysis (process and design)
• Special characteristics (critical and significant characteristics)
• Control plans/lessons learned from similar parts or processes
• Team’s knowledge of the process
• Technical documentation (design/process notices, MPIs, PM)
• Validation plan results (DVP, EVP, PVP)
• Optimization methods (QFD/DOE)
• Develop the process flow diagram — map the process.
• Develop the process FMEA.
• Examine each process operation for potential problems and failures.
• Focus on characteristics that are important to the customer and to product safety.
• A PFMEA is required for most organizations for all new product processes. PFMEAs must eventually be developed for all existing product lines. If a PFMEA does not exist, then customer concerns/complaints must be considered when developing the control plan.
• Develop a preliminary manufacturing control process (MCP), utilizing a standardized format. This format satisfies ISO 9000, ISO/TS 16949, and QS-9000 requirements (and is the REQUIRED FORMAT!)
• Conduct multifunctional team review for revision/consensus of the MCP.
• Install MCP with change control approval. This will assign and display a document number, version number, issue date, and owner.
• Implement the MCP. Update/revise manufacturing process instructions, control charts, gauge systems, etc. as required from the new control plan.
• Benefits to developing and implementing CPs — improves overall quality by reducing chances of quality excursions. Reduces shrinkage or defects in MFG/transaction processes by keeping processes centered. Also, the data aid in timely troubleshooting of MFG/transaction processes, and the control plan serves as a communication vehicle for changes to CTQ characteristics, control methods, etc.
Quality system overview
Control tools
Continuous SPC tools
The foundation of SPC
Statistical process control
Types of control charts — variable and attribute
• Basic components of a control chart
• Control limits
• What are control limits?
• What is meant by “in control” and “out of control?”
• Link between control limits, hypothesis testing, and specifications
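For the variable charts listed next, control limits are computed from subgroup statistics and tabled constants. The sketch below uses simulated subgroups of size 5 with the standard A2, D3, D4 values for that subgroup size:

```python
# X-bar and R chart limits sketch (simulated data, subgroup size 5).
# A2, D3, D4 are the standard Shewhart constants for n = 5.
import numpy as np

rng = np.random.default_rng(3)
subgroups = rng.normal(10.0, 0.2, size=(25, 5))  # 25 subgroups of 5

xbar = subgroups.mean(axis=1)
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
xbar_bar, r_bar = xbar.mean(), ranges.mean()

A2, D3, D4 = 0.577, 0.0, 2.114  # tabled constants for n = 5

print(f"X-bar chart: CL={xbar_bar:.3f} "
      f"UCL={xbar_bar + A2 * r_bar:.3f} LCL={xbar_bar - A2 * r_bar:.3f}")
print(f"R chart:     CL={r_bar:.3f} "
      f"UCL={D4 * r_bar:.3f} LCL={D3 * r_bar:.3f}")
```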
Variable control charts
• Individual X vs. EWMA chart
• X-bar and R charts
• X-bar-s charts
• Individual and moving range
• EWMA chart
• Control chart — interpretation
• Control chart — nonnormal distribution
Attribute control charts
• P charts
• np chart
• C chart
• U chart
• Attribute chart interpretation
Alternative methods of control
• Precontrol
• Zone control charting
Process capability estimate
Poka yoke — understand the use of poka yoke strategies in completing a black belt project. Know how to design and implement a poka yoke strategy.
• What is poka yoke/error or mistake-proofing?
• Mistake-proofing manufacturing processes
• Mistake-proofing transactional processes
• Types of mistake-proofing
• Errors vs. defects
• Types of human errors
• “Red flag” conditions
• Control/feedback logic
• Guidelines for mistake-proofing
• Mistake-proofing strategies
• Advantages of mistake-proofing
Maintenance — a reliability function
• Maintenance via Six Sigma is all-encompassing — transactional, information systems, production equipment, etc. The maintenance function should be linked to customer CTQs. It should address all six Ms: machines, manpower, methods, materials, mother nature, and measurements. (Make sure you differentiate these from the classical nonmanufacturing items of policies, procedures, place, environment, measurement, and people.) Maintenance can and should be a reliability function, not just a repair function.
• Maintenance maximizes output, minimizes cost, and assures continued operation, and thus customer satisfaction.
Maintenance — integrated strategy
• World-class key performance indicators
• Predictive maintenance
• Benefits to developing and implementing PMs
• Major elements of preventive maintenance
Realistic tolerancing — a simple graphical method for establishing optimum levels and appropriate tolerances for inputs. Once it is determined that a continuous output depends linearly on a continuous input, the output specification is used to create an input specification. Scatter plots and fitted line plots demonstrate association of inputs and outputs, not necessarily cause and effect. A realistic tolerancing method:
Step 1: Identify the KPOV of interest and note its specification.
Step 2: Select the KPIV of interest. Define a range of values for the KPIV that will likely optimize the KPOV.
Step 3: Run 30 samples over the range of the KPIV and record the output values.
Step 4: Plot the results with the KPIV on the x-axis and the output on the y-axis. If the plot has a tilt with little vertical scatter, a relation exists. Proceed to Step 5. If there is no tilt, the KPIV has no relation to the response variable.
Step 5: Determine the target value and tolerance of the KPIV.
• Draw a best-fit line through the data.
• Eliminate the data point furthest from the best-fit line.
• Draw a parallel line through the next furthest point from the best-fit line. Draw a second parallel line equidistant from the best-fit line on the opposite side. The vertical distance between these two parallel lines represents 95% of the total effect of all other factors on the output other than the KPIV studied here. If specifications exist for the response variable, draw lines from those values on the y-axis to intersect the upper and lower confidence lines.
• Drop two lines from these intersection points to the x-axis. The distance between where these intersect the x-axis represents the maximum tolerance permitted for the input variable.
Step 6: Compare these values against the existing operating levels and implement necessary changes to the SOP. Document changes via the FMEA and control plan. (A worked sketch of Steps 4 and 5 appears at the end of this chapter.)
Gauge and measurement systems
• Management plan
• Long-term gauge control
• Long-term gauge control is the management of the basis of our understanding of our process. Remember, the quality of our process cannot be understood and controlled without understanding the quality of our measurements.
• Why do we need a long-term gauge plan? Long-term project control is dependent on measurement and analysis. The measurement system needs to be under control.
• Who is responsible for the long-term gauge plan? Those responsible for the process variables of interest. Gauge management: incorporate it into the local quality system and ensure that future owners are trained to implement it.
• What is in a long-term gauge plan?
1. Initial baseline analysis
2. Ownership details
3. Calibration control (chart?) with instructions
4. Handling and storage requirements
5. Maintenance requirements — procedures and log
6. Spare parts requirements
7. ID/tracking system
8. Ongoing MSA requirements (product/product changes, gauge changes, operator changes, etc.)
9. Thorough documentation
• What do you need to do to develop your long-term gauge plan?
• Your gauge
• What was your initial baseline (GR&R) data? Is this gauge still appropriate?
• What is the amount of bias in your gauge? Linearity? How will you control this bias?
• Who “owns” and maintains the gauge?
• Who calibrates your gauge? How frequently?
• Which gauge would you use?
• What are the handling and storage requirements for the gauge?
• Who needs to maintain the gauge? What does this mean?
• How do you maintain the gauge? What are the spare parts requirements?
• How frequently and when should MSA be performed? By whom?
• Which one should you use and when?
• What documentation is required for the long-term gauge plan?
• How will we manage this documentation?
• What issues/roadblocks do I see in developing the long-term gauge plan?
• Implementing gauge plans
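As promised in the realistic tolerancing discussion above, here is a minimal sketch of Steps 4 and 5. The data are simulated, the slope is assumed positive, and the parallel-band construction mirrors the graphical method rather than a formal prediction interval:

```python
# Realistic tolerancing sketch (Steps 4-5): fit KPIV vs. KPOV, band the
# scatter, and back-calculate the KPIV tolerance from the Y specs.
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(1.0, 2.0, 30)                 # KPIV run over its range
y = 4.0 + 3.0 * x + rng.normal(0, 0.15, 30)   # KPOV response (assumed)

slope, intercept = np.polyfit(x, y, 1)        # best-fit line
resid = y - (slope * x + intercept)

# Drop the single furthest point; band with the next furthest residual.
keep = np.abs(resid) < np.abs(resid).max()
half_band = np.abs(resid[keep]).max()

y_lsl, y_usl = 7.6, 9.4                       # KPOV specification, assumed
# Intersect the specs with the lower/upper band lines and project to
# the x-axis (valid for a positive slope).
x_min = (y_lsl - (intercept - half_band)) / slope
x_max = (y_usl - (intercept + half_band)) / slope

print(f"Fit: y = {intercept:.2f} + {slope:.2f}x, band = +/-{half_band:.2f}")
print(f"Maximum KPIV tolerance: [{x_min:.2f}, {x_max:.2f}]")
```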
12
Six Sigma for General Orientation
The intent of the orientation overview is to take away the “mystical” aura of the Six Sigma methodology. It is geared toward individuals who are about to take further training in the Six Sigma methodology and, as such, serves as an overview as to what to expect. No prerequisites are needed; however, a willingness to learn and an open mind to “new” approaches and methodologies are expected.
It is often suggested that simple exercises may be sprinkled throughout the course to make the key points more emphatic. Traditional exercises may be to define a process and improve on that process, to define five to ten operational definitions in that process, to work with some variable and attribute data, to calculate the DPO, and several others. It must be stressed that this 2-day training is an introduction; it attempts to explain the Six Sigma process at a very high level of understanding. As a consequence, the exercises given during the training are intended to motivate the participants and convince them that there is room in their organization for improvement and application of the Six Sigma methodology.
Because organizations and their goals are quite different, we will provide the reader with a suggested outline of the training material for this orientation session. It should last 2 days, and the level of difficulty will depend on the participants. Detailed information may be drawn from the first six volumes of this series.
A typical orientation program may want to focus on the following instructional objectives. The reader will notice that in some categories, there are no objectives. This is because for this stage of training the material may be overwhelming. Furthermore, transactional, technical, or manufacturing categories are absent. The reason for this is that as an overview the scope of the training is to give a sense of what Six Sigma is all about and to introduce the methodology through limited simulation. The simulations are designed to convince the participants that appropriate and applicable operational definitions and data will spur improvement in the decision-making process.
INSTRUCTIONAL OBJECTIVES — GENERAL RECOGNIZE Customer Focus • Provide a definition of the term customer satisfaction. • Understand the need–do interaction and how it relates to customer satisfaction and business success.
• Interpret the expression y = f(x).
• Provide examples of the y and x terms in the expression y = f(x).
Business Metrics
• State at least three problems (or severe limitations) inherent in the current cost-of-quality (COQ) theory.
• Define the nature of a performance metric.
• Identify the driving need for performance metrics.
• Explain the benefit of plotting performance metrics on a log scale.
• Identify and define the principal categories associated with quality costs.
• Compute the COQ given the necessary background data.
Six Sigma Fundamentals
• Recognize the need for change and the role of values in a business.
• Recognize the need for measurement and its role in business success.
• Identify the parts-per-million defect goal of Six Sigma.
• Recognize that defects arise from variation.
• Define the phases of breakthrough in quality improvement.
• Identify the values of a Six Sigma organization as compared to a four sigma business.
• Understand why inspection and test are nonvalue-added to a business and serve as a roadblock for achieving Six Sigma.
• Understand the difference between the terms process precision and process accuracy.
• Describe how every occurrence of a defect requires time to verify, analyze, repair, and reverify.
• Understand that work-in-process (WIP) is highly correlated to the rate of defects.
• Rationalize the statement: the highest-quality producer is the lowest-cost producer.
• Understand that global benchmarking has consistently revealed four sigma as average, while best-in-class is near the Six Sigma region.
• State the general findings that tend to characterize or profile a four sigma organization.
• Recognize the cycle-time, reliability, and cost implications when interpreting a sigma benchmarking chart.
• Provide a qualitative definition and graphical interpretation of the standard deviation.
• Draw first-order conclusions when given a global benchmarking chart.
• Understand the basic nature of statistical process control charts and the role they play during the control phase of breakthrough.
• Provide a brief history of Six Sigma and its evolution.
• Understand the need for measuring those things that are critical to the customer, business, and process. • Define the various facets of Six Sigma and why Six Sigma is important to a business. • Provide a very general description of how a process capability study is conducted and interpreted. • Understand the difference between the idea of benchmark, baseline, and entitlement cycle time. • Understand the fundamental nature of quantitative benchmarking on a sigma scale of measure. • Recognize that the sigma scale of measure is at the opportunity level, not at the system level. • Interpret an array of sigma benchmarking charts. • Understand the driving need for breakthrough improvement vs. continual improvement. • Define the two primary components of process breakthrough. • Provide a brief description of the four phases of process breakthrough (i.e., measure, analyze, improve, control). • Explain how statistically designed experiments can be used to achieve the major aims of Six Sigma from quality, cost, and cycle-time points of view.
DEFINE Nature of Variables • Explain the nature of a leverage variable and its implications for customer satisfaction and business success. Opportunities for Defects • Provide a rational definition of a defect. CTX Tree • Define the term critical to satisfaction characteristic (CTS) and its importance to business success. • Define the term critical to quality characteristic (CTQ) and its importance to customer satisfaction. • Define the term critical to process characteristic (CTP) and its importance to product quality. Process Mapping • Construct a process map using standard mapping tools and symbols.
• Explain how process maps can be linked to the CT tree to identify problem areas.
• Explain how process maps can be used to identify constraints and determine resource needs.
Process Baselines
Nothing specific.
Six Sigma Projects
• Interpret each of the action steps associated with the four phases of process breakthrough.
Six Sigma Deployment
• Provide a brief description of the nature of a Six Sigma black belt (SSBB).
• Describe the role and responsibilities of a SSBB.
• Provide a brief description of the nature of a Six Sigma champion (SSC).
• Describe the roles and responsibilities of a SSC.
• Provide a brief description of the nature of a Six Sigma master black belt (SSMBB).
• Describe the roles and responsibilities of a SSMBB.
MEASURE Scales of Measure Nothing specific. Data Collection Nothing specific. Measurement Error • Describe the role of measurement error studies during the measurement phase of breakthrough. Statistical Distributions • Construct and interpret a histogram for a given set of data. • Understand what a normal distribution and a typical normal histogram are and how they are used to estimate defect probability. • Construct a histogram for a set of normally distributed data and locate the data on a normal probability plot.
Static Statistics • Provide a qualitative definition and graphical interpretation of the variance. • Compute the sample standard deviation given a set of data. • Provide a qualitative definition and graphical interpretation of the standard Z transform. • Compute the mean, standard deviation, and variance for a set of normally distributed data. Dynamic Statistics • Explain what phenomenon could account for a differential between the short-term and long-term standard deviations.
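For instructors who want a concrete anchor for the static-statistics objectives above, a few lines suffice. The data are simulated, and the Z transform converts an assumed spec limit into an estimated defect probability:

```python
# Static statistics sketch: mean, sample standard deviation, variance,
# and the standard Z transform for a spec limit. (Simulated data.)
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
data = rng.normal(100.0, 4.0, size=50)

mean = data.mean()
std = data.std(ddof=1)        # sample standard deviation (n - 1)
var = data.var(ddof=1)

usl = 110.0                   # assumed upper spec limit
z = (usl - mean) / std        # standard Z transform
p_beyond = norm.sf(z)         # estimated probability of a defect

print(f"mean={mean:.2f}  s={std:.2f}  s^2={var:.2f}")
print(f"Z(USL)={z:.2f}, P(defect) ~ {p_beyond:.4f}")
```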
ANALYZE Six Sigma Statistics • Identify the key limitations of the performance metric final yield (i.e., output/input). • Identify the key limitations of the performance metric first-time yield (Y.ft). Process Metrics • Explain why a Z can be used to measure process capability and explain its relationship to indices such as Cp, Cpk, Pp, and Ppk. • Explain the difference between static mean offset and dynamic mean variation and how they impact process capability. Diagnostic Tools Nothing specific. Simulation Tools Nothing specific. Statistical Hypotheses Nothing specific. Continuous Decision Tools Nothing specific.
Discrete Decision Tools • List and describe the principal sections of a customer satisfaction survey and how they can be used to link the process to the customer.
IMPROVE Experiment Design Tools • Provide a general description of a statistically designed experiment and what such an experiment can be used for. • Recognize the principal barriers to effective experimentation and outline several tactics that can be employed to overcome such barriers. • Describe the two primary components of an experimental system and their related subelements. • Outline a general strategy for conducting a statistically designed experiment and the resources needed to support its execution and analysis. • State the major limitations associated with the one-factor-at-a-time approach to experimentation and offer a viable alternative. Robust Design Tools Nothing specific. Empirical Modeling Tools Nothing specific. Tolerance Tools Nothing specific. Risk Analysis Tools Nothing specific. DFSS Principles Nothing specific.
CONTROL Precontrol Tools • Develop a precontrol plan for a given CTQ and explain how such a plan can be implemented.
Continuous SPC Tools • Explain what is meant by the term statistical process control and discuss how it differs from statistical process monitoring. • List the basic components of a control chart and provide a general description of the role of each component. Discrete SPC Tools Nothing specific.
OUTLINE OF CONTENT
Based on the above general objectives, it is recommended that the training follow the content format below. By no means is this the only format. However, we believe that the content follows a hierarchical sequence, and because of this, we have attempted to accommodate the learning process. For detailed information, the reader is encouraged to see volumes one through six of this series.
Introduction
Agenda
Ground rules
Exploring our values
Objectives
Reason for adopting the Six Sigma methodology
Background of Six Sigma
How other companies have used it to their benefit
The business case for your organization. (This is a very important section. It is where you make your case of whether or not the program of Six Sigma is worth your time and money. Make sure there is a convincing argument that this is not a fad but rather a way of doing business.)
PROCESS IMPROVEMENT
Process design or redesign
Process management
Comparison of traditional quality with statistical quality and DFSS
Three sigma vs. four sigma vs. Six Sigma
Overview of the DMAIC model
What makes a good Six Sigma project?
Project selection
Understanding the goal of Six Sigma
How should Six Sigma be approached from the corporate level?
Structure of Six Sigma
Executive
Champion
Orientation (make sure you tell the participants that they are participating in this phase, which is the big picture of the Six Sigma methodology).
Master black belts
Black belts
Green belts
The model in some detail
DEFINE
Team charter
Customer focus
Understanding needs, wants, and expectations
Kano model
Translating needs, wants into requirements
Need for prioritizing critical to quality (CTQs)
Process mapping
Understanding the elements of the process
Supplier
Input
Process itself (this is where the boundary helps in defining the focus of the project)
Output
Customer
Understanding the difference between “what is” and “what should be” as opposed to “what could be” in a process. The focus for the improvement is always on the “what is.”
Define the problem
Selection criteria for project
Deliverables
MEASURE Measurement Input measures Process measures
Output measures
Understand measuring the business process as Y = f(x)
Understand the difference between effectiveness and efficiency
Understand the internal quality indicators and their quantification
VARIATION
Process variation
Common
Special
Data collection
Clarify data-collection goals
Develop operational definitions and procedures
Plan for data consistency and stability
Begin data collection
Continue improving measurement consistency
Types of data
Qualitative
Quantitative (variable; attribute)
Sampling
Why sample?
Sample determination
Check sheets
Frequency plot check sheets
Process capability
What is six sigma capability?
Variation and process capability
Sigma and the normal curve (sigma = standard deviation = the point of inflection on the curve)
Simple Calculations and Conversions
Calculate the DPU
Calculate the DPMO
First-pass performance
Guidelines for determining how many opportunities per unit
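The simple calculations and conversions above can be demonstrated in a few lines. The counts are hypothetical, the first-pass yield uses the Poisson approximation, and the sigma level applies the conventional 1.5σ shift:

```python
# DPU / DPMO orientation sketch (hypothetical counts).
import math
from scipy.stats import norm

defects = 38
units = 500
opportunities_per_unit = 12

dpu = defects / units
dpo = defects / (units * opportunities_per_unit)
dpmo = dpo * 1e6

# First-pass performance: Poisson approximation of zero-defect units.
first_pass_yield = math.exp(-dpu)

# Sigma level with the conventional 1.5-sigma shift.
sigma_level = norm.isf(dpo) + 1.5

print(f"DPU={dpu:.3f}  DPMO={dpmo:.0f}  FPY~{first_pass_yield:.1%}  "
      f"sigma~{sigma_level:.2f}")
```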
ANALYZE
Data Analysis
Visual displays of data
Histogram
Box plots
Run charts
Stratification
Pareto
Process analysis
Moments of truth
Value-added analysis
Cycle-time analysis
Root cause analysis
Cause-and-Effect Analysis
Cause-and-effect diagram [problem (effect) is the Y; causes are the Xs]
Root cause verification
Scatter diagram
Quantify opportunity
Determine the Opportunity
Understand the equation profit = revenue – cost
Understand the difference between hard and soft money
IMPROVE
Generate solutions
Solutions criteria
List possible solutions
Select solutions
Evaluate solutions and make best choice
Validate solutions
Cost–benefit analysis
Implementation planning
“Should be” process map (this is the result of your dissatisfied “what is”)
Piloting
Project planning
Change management strategy
CONTROL
Document and institutionalize Develop procedures Institutionalize systems and structures Continual improvement Monitor process Standards Control charts Measurement plan
Part III Training for the DCOV Model
13
DFSS Training
As we said in Volume 1 of this series, Six Sigma methodology is primarily a problem-solving methodology. It solves problems by the Six Sigma breakthrough strategy, which is the DMAIC model. If we accept this, we must also accept the fact that, just like any other methodology for solving a problem, Six Sigma is by definition an appraisal approach, for it tries to eliminate the nonconformance after it has occurred. Consequently, Six Sigma in the form of the DMAIC model is indeed an appraisal method utilizing a systematic approach to what has been a piecemeal intervention of many tools and methodologies over the last 75 years or so (probability concepts, DOE, FMEA, and so on). It is effective, to be sure. Certainly it is powerful. However, we are not convinced that it is the most effective way to resolve problems in any organization. Other tools can be just as effective if management is committed, resources are allocated, and a vision of true customer satisfaction that costs much less is forged.
What we are convinced of regarding Six Sigma is its strength in applying the methodology to the design of a service, process, or product before it reaches the customer. This is where the money is. This is where it will pay off. This is where there is a true opportunity for improvement. Of course, at this point the Six Sigma methodology becomes a planning tool, for it tries to prevent the nonconformance from happening. The DFSS is the tool, the methodology, the vision, the metric, and the approach for truly improving the process, service, product, organizational productivity, and, most of all, customer satisfaction. Nothing else will do. Period!
The general objectives of DFSS have been given in Chapters 7 through 10, under the categories of training (executives, champions, master black belts, and black belts). The reason for this is that we believe that by including them at that location we emphasize the knowledge base as it relates to the rest of the curriculum. The reader, of course, may want to extract them and combine each of the particular objectives into one. That is an acceptable approach.
The actual training for DFSS follows the generic model of define, characterize, optimize, and verify (DCOV). In the define phase, we focus on understanding the customer. In the characterize phase, we focus on understanding the system. In the optimize phase, we focus on designing robust performance in a product, service, or process. And in the verify phase, we focus on testing and verification of the product, service, or process.
THE ACTUAL TRAINING FOR DFSS In DFSS, there are generally three levels of training: 407
1. Executive Training
One-day training session that provides the project leader and managers a high-level overview of DFSS objectives and methods. Included is a discussion of the link between DMAIC and DCOV and the benefits of instituting Six Sigma in the design phase.
2. Champion Training
A 2- or 4-day training course with greater depth of understanding of the DFSS process and tools.
3. Complete DFSS Training (Project Members and Black Belt)
• Week 1: DFSS process, scorecard generation (organization dependent), define (voice of the customer) and characterize (system design and functional mapping).
• Week 2: review of week 1; questions and answers specific to DFSS content; project-related discussion. Discussion of optimize (design for robustness, design for producibility) and verify.
• Week 3 (optional or as needed): advanced parameter and tolerance design; structured inventive thinking.
• Week 4 (optional or as needed): statistical tolerancing, FMEA, and multivariate analysis.
EXECUTIVE DFSS TRAINING Introductions Agenda Training ground rules • If you have any questions — please ask! • Share your experiences. • When we take frequent short breaks, please be prompt in returning so we can stay on schedule. • There will be a number of team activities — please take an active role. • Please use name tents. • Listen as an ally. • The goal is to complete your projects! Exploring our values Review the DMAIC model • Background and history of Six Sigma • Six Sigma scale • Philosophy • Significance of z scores • The move from three to four to five to Six Sigma DFSS business case — a very important issue to be discussed at length. Obviously each organization has its own situation; however, the following items are a good starting point for discussion • Current customer perception of performance • Future customer perception of performance
• Current warranty cost
• Future warranty cost
• Current customer satisfaction performance
• Future customer satisfaction performance
• Current competitive advantage
• Future competitive advantage
Link between DMAIC and DCOV
• DMAIC improves customer satisfaction by eliminating nonconformances after they have occurred. It uses three primary ways to do it:
1. Statistical problem solving
2. Process variability reduction
3. Process capability assessment
• DCOV improves customer satisfaction by preventing or avoiding nonconformances by improving the actual design relative to cost and sensitivity to noise over time.
DFSS strategy
• Target current and future products
• Beta projects
• Breakthrough systems
• New product or service lines
• High leverage of current customer issues
• Depend on executive leadership
• Educate management first
• Engineering executive leads
• Specific process participation
• Be compatible with Six Sigma
• Y = f(x); Y = f(x,n)
• Six Sigma infrastructure (DMAIC foundation or something similar, e.g., TQM, QOS, etc.)
• Build on processes that work
• Organizational timing requirements
• APQP
• Robustness
DFSS deployment strategy
• Train executives (0 to 3 months)
• Conduct beta projects to establish training logistics and communicate the DFSS methods to interested teams (2 to 4 months)
• Expand DFSS to other teams (4 to 6 months)
• Apply DFSS to all new products (6 to 12 months)
• Apply DFSS to the entire organization (12 to 18 months)
• DFSS starts with educating the management team with a 4- to 8-hour review covering the following:
• Understand the process
• Select projects for DFSS use
• Identify the participants to receive the comprehensive training
• Identify an executive to champion each project
Roles
• The project team receives • An overview of the process • Training in scorecard preparation • Tool training appropriate for the team’s current phase in the product development process and responsibilities The champion role • Manages the process through the team. • Ensures that the team has all the resources necessary in time to follow the process. • Ensures the review process of the scorecards is incorporated within the overall executive review process for the team. • Requests further training as the team approaches a new phase within the process that requires it. (Typical additional training is in the areas of statistical tolerancing, FEA, FMEA, parameter and tolerance design, etc.) Executive role • Determine business goals, in terms of cost of poor quality, within particular areas of concern. • Work with the deployment director to select the few significant issues on which to use DFSS for both current issues and future programs. • Declare that you are the champion for these issues and will use DFSS to resolve them. • Schedule regular status reviews with the affected teams. • Assure sufficient resources are available to the team to create success. • Meetings should become integrated with routine business reviews. • Request additional training as needed. Deployment director • Train executive teams in the one-day DFSS overview. • Coordinate selection of DFSS projects for assigned project teams. • Work with DFSS teams within assigned projects to deliver training to affected executives, managers, black belts, and DFSS engineers. • Attend regular status reviews with the affected teams. • Coordinate the delivery of additional training as required. • Be a resource for resolving Six Sigma and DFSS issues. Technical manager • Oversee and coordinate the work of the DFSS process management teams (PMTs) to deliver the outcomes required at each milestone of the project. • Work with the DFSS black belt to define the work required by each PMT member to generate appropriate scorecard and milestone deliverables.
• Provide the appropriate resources and facilities needed to meet the DFSS and milestone deliverables. • Act as the project champion to the black belt to assure that the black belt’s assignments are appropriate to his skills and the use of DMAIC. Black belt • Work with the technical manager to define the work required by each PMT member to generate the appropriate scorecard and milestone deliverables. • Resolve issues associated with the DFSS project that are best solved using DMAIC. • Act as a DMAIC resource to the team, including teaching concepts to the team as needed. Project member • Generate the outcomes normally associated with a project member. • Use DFSS methods to understand the underlying transfer function associated with the targeted system. • Generate the scorecard to predict the quality level of the targeted system at the appropriate milestone. This will entail gathering data regarding the product design geometry as well as manufacturing and assembly process capability. DFSS is a methodology that identifies explicitly the relationship between design and service, product, or process. Its intent is to satisfy the customer by either enhancing current designs or completely redesigning the current design. What is new with DFSS • Scorecard — perhaps the most important item in the entire DFSS methodology. • Key QOS deliverable quantification — prioritization and use of appropriate and applicable data. • Transfer function — a function that characterizes critical-to-satisfaction metrics in terms of design parameters. The focus is on robustness, i.e., y = f(x,n), where y is the customer’s functionality, x is the requirement for that y, and n is the noise that x has to operate under so that y is delivered. Key characteristics of DFSS • Data-driven • Provides leverage to existing tools within an organization, e.g., QOS, warranty, APQP, organizational verification system, organizational reliability program(s), and so on. • Provides a template for applying statistical engineering, including simulation studies. • Delivers quality to the product by focusing on the subsystem and moving into systems.
• Provides a vehicle of understanding of the y = f(x,n) for components to subsystems to systems. • Forces the use of systems engineering in the design process in conjunction with APQP timing requirements. The scorecard — there are many ways to track the progress of the project. The team should develop its own so that the following information may be captured: • CTCs • Transfer function that delivers the CTS attribute • Transfer function quantified in such a way that it predicts the quality of delivery of the attribute • Appropriate and applicable information about the project so that related business decisions may be made A transfer function is the mathematical equation that allows you to design a quantitative relationship between dependent and independent variables. Transfer functions can be derived from two sources: 1. First principles • Known equations that describe functions (they may be identified from physics, including function structure flows) • Analytic models, simulation models (finite element analysis, Monte Carlo, etc.) • Drawings of systems and subsystems (evaluation of tolerancing, geometry of design, and mass considerations) • Design of experiments (classical design, Taguchi, response surface methodology, and multivariate analysis) 2. Empirical data • Correlation • Regression • Mathematical modeling Uses of transfer function — recognizing that not all variables should be included in a transfer function, we identify and focus on only the critical few xs for achieving y. We then use the transfer function primarily to: • Estimate the mean and variance of Ys and ys • Cascade the customer requirement • Optimize and make trade-offs • Forecast customer experience • Set tolerances for significant xs • When a transfer function is not exactly known, there are two options: 1. Use surrogate data from similar designs. 2. Build a bundle of transfer functions. The rationale for such an idea is the fact that we know that we will never have 100% of all customer
metrics in terms of transfer function, just like we will never be able to design something with 100% reliability (remember: R(t) = 1 – F(t)). To be sure, we already may know some of the transfer functions; however, if we are in doubt, we may combine subsystems so that the outcome of a system may be represented with the best option of a transfer function. Overview of DFSS • Define CTSs • In this phase we • Identify CTS drivers and Y • Establish operating window for each chosen Y for new and aged conditions • Inputs • Consumer insights • Quality and customer satisfaction history • Mining data analysis • Functional, serviceability, corporate, and regulatory requirements • Integration targets • Brand profiler • Quality function deployment (QFD) • Conjoint analysis • TRIZ results • Design specifications • Business strategy • Competitive environment • Technology assessment • Market segmentation • Benchmarking • Required technical activity • Select Ys • Define customer and product needs/requirements • Relate needs/requirements to customer satisfaction; benchmark • Prioritize needs/requirements to determine CTS Ys • Peer review • Outputs • Kano analysis • CTS scorecards • Y relationship to customer satisfaction • Benchmarked CTSs • Targets and ranges for CTS Ys • Characterize the system • In this phase we • Flow CTS Ys down to lower level ys, e.g., Y to y to y1, y2… yn • Relate ys to CTQ parameters (xs and ns), y = f(x1, … xk, n1… nj) • Characterize robustness opportunities
• Inputs • Kano diagram • CTS Ys, with targets and ranges • Customer satisfaction scorecard • Functional boundaries • Interfaces from VDS/SDS • Existing hardware FMEA data and so on • Required technical activity • Identify functions associated with CTSs • Decompose Y into contributing elements and identify xs and ns • Create function structure or other model for identified functions • Select ys that measure the intended function • Identify control and noise factors • Create general or explicit transfer function • Peer review • Outputs • Function diagrams • Mapping of Y to functions to critical functions to ys • P diagram, including critical • Technical metrics, ys • Control factors, xs • Noise factors, ns • Transfer function • Scorecard with target and range for ys and xs • Plan for • Optimization • Verification (robustness and reliability checklist) • Optimize product/process • In this step we: • Understand capability and stability of present processes • Understand the high time-in-service robustness of the present product • Minimize product sensitivity to noise, as required • Minimize process sensitivity to product and manufacturing variations, as required • Inputs • Present process capability (µ target and σ) • P diagram, with critical ys, xs, ns • Transfer function (as developed and understood to date) • Manufacturing and assembly process flow diagrams, maps • Gage R&R capability studies • PFMEA and DFMEA data • Optimization plans, including noise management strategy • Verification plans: robustness and reliability checklist • Required technical activity • Optimize product and process
• Minimize variability in y by selecting optimal nominals for xs • Optimize process to achieve appropriate σx • Ensure ease of assembly and manufacturability (in both steps above) • Eliminate specific failure modes • Update control plan • Peer review • Outputs • Transfer function • Scorecard with estimate of σy • Target nominal values identified for xs • Variability metric for CTS Y or related function, e.g., range, standard deviation, S/N ratio improvement • Tolerances specified for important characteristics • Short-term capability, “z” score • Long-term capability • Updated verification plans (robustness and reliability checklist) • Updated control plan • Verify results • In this step we • Assess actual performance, reliability, and manufacturing capability • Demonstrate customer-correlated performance over product life • Inputs • Updated verification plans (robustness and reliability checklist) • Scorecard with predicted values of y, σy based upon µx and σx • Historical design verification and reliability results • Control plan • Required technical activity • Enhance tests with key noise factors • Conduct physical and analytical performance and key life tests • Improve ability of tests to discriminate good/bad commodities • Apply test strategy to maximize resource efficiency • Peer review • Outputs • Test results (product performance over time, i.e., Weibull, hazard plot, etc.) • Long-term process capability estimates • Scorecard with values of y, σy based on test data • Completed robustness and reliability checklist with demonstration matrix • Lessons learned captured in system or component design specification and so on
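Since the overview above leans heavily on the transfer function y = f(x,n) and on scorecard estimates of the mean and variance of y, a small Monte Carlo sketch may help make the idea tangible. The transfer function and the distributions of x1, x2, and n below are illustrative assumptions, not values from the text.

    # A minimal sketch: estimate the mean and sigma of y by propagating
    # assumed distributions of the xs and the noise n through an assumed
    # transfer function y = f(x, n).
    import random

    def f(x1, x2, n):
        # hypothetical transfer function, for illustration only
        return 2.0 * x1 + 0.5 * x2 ** 2 + n

    random.seed(1)
    samples = [f(random.gauss(10.0, 0.2),   # x1 ~ N(10, 0.2), assumed
                 random.gauss(5.0, 0.1),    # x2 ~ N(5, 0.1), assumed
                 random.gauss(0.0, 0.3))    # noise n ~ N(0, 0.3), assumed
               for _ in range(100_000)]
    mu = sum(samples) / len(samples)
    sigma = (sum((s - mu) ** 2 for s in samples) / (len(samples) - 1)) ** 0.5
    print(f"predicted y: mean = {mu:.3f}, sigma = {sigma:.3f}")

The same propagation idea underlies the scorecard entries above that report predicted y and σy from µx and σx.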
DFSS CHAMPION TRAINING Every champion should be trained in the DFSS model of the Six Sigma methodology. Again, the intent is not to make them experts, but to familiarize them with the process enough to ask questions and make sound decisions. The following outlines are designed for 2- and 4-day training programs. The reader will notice that we make no distinction between categories here. The reason is that this outline is generic enough to accommodate all three. Obviously, it can and should be modified for specific situations.
DFSS – 2-DAY PROGRAM Generally, this is used for the transactional champion. Introduction Review the DMAIC model Have a familiarity with the DFSS methodology Understand how Six Sigma integrates with current design practices Understand how to select CTQs Understand the importance of using data and the Six Sigma methodology vs. alternative approaches Understand tolerancing and its importance to Six Sigma Gain exposure to the tools and resources available to assist Six Sigma design efforts; remember to emphasize that project champions are responsible for familiarity and understanding, not expertise Design for Six Sigma — a systematic methodology with tools, training, and measurements that enable us to design products/processes that meet customer expectations and can be produced at the Six Sigma level. Developing a robust design Design for Six Sigma process Six Sigma design process Identify (measure) phase CTQ identification The QFD process CTQ identification — QFD 2-step process FMEA process — relationship to QFD FMEA is used in all design phases Using FMEA: assessing current situation Characterize • Review ideal function • Identify and define the CTQs Optimize (improve) Understanding process data Tolerance analysis • Six Sigma mechanical tolerancing
Verify design (control) Test and controls Benefits
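Because the 2-day outline above closes with Six Sigma mechanical tolerancing, a short sketch of the worst-case versus root-sum-of-squares (RSS) stack-up arithmetic may be useful here. The four component tolerances are made-up illustration values, not data from the text.

    # Worst-case vs. root-sum-of-squares stack-up for a linear tolerance
    # chain; the component tolerances below are illustrative assumptions.
    import math

    tolerances = [0.10, 0.05, 0.08, 0.12]  # +/- tolerance of each component

    worst_case = sum(tolerances)
    rss = math.sqrt(sum(t ** 2 for t in tolerances))
    print(f"worst case: +/-{worst_case:.3f}")
    print(f"RSS:        +/-{rss:.3f}")

The RSS result is always tighter than the worst case, which is why statistical tolerancing can relax individual component tolerances without giving up assembly-level capability.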
DFSS CHAMPION TRAINING OUTLINE — 4 DAYS Generally, this is used for the technical or manufacturing champion. The champion needs to be familiar with, not an expert in, DFSS. Overview of customer focus and business strategy Understand the need for accurate CTQs The source of most quality problems is design The five sigma wall and the jump to the Six Sigma opportunity DFSS definition: a systematic methodology, with tools, training, and measurements, that enables the organization to design products and processes that meet or exceed customer expectations, contribute to the profitability of the organization, and can be produced at the Six Sigma level. Differentiate between traditional and robust design — explain the difference between y = f(x) and y = f(x,n). CTQ identification Process capability Tolerance analysis Design for Six Sigma process — we want to: Understand the process standard deviation. Control the variation of the process — make it stable. Determine the Six Sigma tolerances. Correlate and confirm (for agreement) that customer expectations are met under real-world usage. (Quite often, in a DFSS study we find that the present capability is not meeting our goals and does not always conform to customer needs. As a consequence, DFSS becomes an iterative process between designers, manufacturing, and customer needs.) DFSS model Identify Customer requirements Technical requirements (both variables and limits) Characterize (design) Formulate concept design Identify potential risks For each CTQ, identify design parameters and noise Find critical design parameters and noise factors and their influence on CTQ Develop a preliminary transfer function
Optimize Do a trade-off analysis between your parameters and noise factors and customer requirements to make sure that all CTQs are met. Assess parameter capability to achieve critical design parameters and noise factors and meet CTQ limits. Optimize design to minimize sensitivity of CTQs to process parameters. Determine tolerances. Estimate capability of new design (via simulation) and costs. Verify (validate) Test and validation. Assess performance, failure modes, reliability, and risks. If the design verification is okay, then proceed to the process. DFSS tools Identify Kano model QFD FMEA Benchmarking Competitive analysis Target costing Organizational requirements Customer input (marketing surveys, focus groups, warranty, etc.) Characterize (design) Risk assessment Gauge R&R Simulation (finite element analysis, Monte Carlo, solver, etc.) Process mapping Ideal function DOE — parameter design Reliability tools as appropriate Optimize Process capability models Robust design Simulation Tolerancing Traditional DMAIC tools Verify (validate) Accelerated testing Reliability analysis as appropriate FMEA Cost–benefit analysis Overview of selected tools: the idea here is to emphasize maximization of customer satisfaction and organizational profitability by minimizing variation. The tools used in DFSS are intended to do that. That is, in the identify phase, the focus
is to tie the design as much as possible to the voice of the customer. Therefore, the tools used serve to identify what is important to the customer and also to prioritize those requirements. Identify (measure) phase CTQ identification: the select few, measurable key characteristics of a specific part, process, or specification that must be in statistical control to guarantee customer satisfaction. Therefore, they must be measured and analyzed on an ongoing basis. The factors to consider are: a) relationship to design and customer need and b) technical risk related to meeting the specifications. Kano model Identifies the basic, performance, and excitement characteristics Forces the issue of understanding what requirements are essential QFD process • Identifies CTQs that are the source of customer satisfaction • Focuses on satisfaction • Translates customer requirements to part CTQ characteristics by: Step 1. Collect and organize customer requirements. Work with marketing and other organizations (internal and external to the organization). Simulate the needs, wants, and expectations of the customer as though you were the customer. Storyboard customer requirements and organize hierarchically. Rate importance of each specific customer requirement on a Likert scale (1–5). Step 2. Collect and organize technical requirements. Research existing specifications, engineering procedures, and validation plans. Develop a list of specific “measurable” design requirements. Step 3. Map relationships between customer and technical requirements. Separate the vital few from the trivial many. Use a scale of strong, moderate, and weak (9, 3, and 1, respectively, with 0 for no relationship). A strong relationship is in direct correlation with customer satisfaction: a definite service call or nonpurchase issue. A moderate relationship may result in a service call or nonpurchase issue. A weak relationship has a very small chance of a service call or nonpurchase issue. Multiply relationship weighting by customer importance and sum columns. At this point the translation of technical requirements into CTQs takes place. (Warning: CTQs are great! However, we must be very careful in their identification because quite often not all of them are driven by customer satisfaction.) FMEA is used in all product steps. It assesses the current situation and may identify potential problems in both design and process. For a detailed explanation, see Volume 6 of this series. In a cursory review the FMEA focuses on: Product planning — design FMEA (product FMEA) Product goal setting Performance
Reliability Cost Life Product design — design FMEA (product FMEA) Optimization Analyze preliminary transfer function Process design — process FMEA Process sequencing Function flow Production quality planning — process FMEA Quality plans Manufacturing Suppliers Services — service FMEA Field service goal setting Maintainability Serviceability Spare part availability Design (analyze) phase: the focus here is a) to ensure that the appropriate and applicable CTQs are identified and emphasized, b) to ensure appropriate transfer functions of CTQs have been translated into the technical requirements, c) to verify use of design simplification tools to reduce complexity, and d) to organize testing using design of experiments. Formulate concept design — use simple design methodology. Review ideal function — P diagram. Introduce preliminary transfer function — y = f(x,n). Identify potential risks — use risk assessment and FMEA. For each technical requirement identify design parameters and noise (CTQs) — use engineering analysis and simulation. Determine the CTQs and their influence on the technical requirements (transfer functions) — use systems engineering, DOE, and appropriate and applicable analysis tools. Demonstrate how the flow of transfer function relates CTQs to the technical requirements. (Special note: it is very important for champions to understand this. Otherwise, the project will likely fail. If the wrong CTQs are identified to create the transfer functions, then the transfer function is not accurately identified and there is no true understanding of the customer need, want, etc.) Optimize (improve) phase: the focus here is to understand and use process capability and statistical tolerancing and to recognize processes that do not meet the Six Sigma requirements. Assess process capability to achieve critical design parameters and noise to meet CTQ limits — use data analysis.
Optimize design to minimize sensitivity of CTQs to process parameters given noise — use appropriate and applicable databases, process capability models, and process flow charts. Conduct mistake proofing — use warning labels, devices, feedback controls. Determine tolerances — use statistical tolerancing, robust designs, simulation. Perform trade-off analysis to ensure all CTQs are met. Estimate DFSS success and cost — use appropriate and applicable Six Sigma tools. Understanding process data — three categories of understanding: Doing it well implies that capability is done and is acceptable. Doing it better implies understanding of current data. Doing it right implies Six Sigma methodology. Understanding variation, short-term and long-term. Understanding rational subgrouping. Understanding that the short-term design goal is a Z value of 6.0. Understand the long- vs. short-term capability of 4.5 vs. Six Sigma. Provide examples of long and short capability. (Remember, for Six Sigma the capability is 6 × short-term sigma.) Tolerance analysis. Make it work — parameter design Engineering calculations Finite element analysis DOE Test and evaluation Make it fit — tolerance analysis Process capability studies Minimum–maximum stack-up Root sum of squares Producibility studies Control (validate) phase: the focus here is to make sure the design meets the customer’s requirements by increased testing using formal tests and feedback of requirements to manufacturing and sourcing.
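The short- versus long-term capability arithmetic in the outline above can be sketched in a few lines. The spec limits and process values below are assumptions, chosen so that the short-term Z works out to the design goal of 6.0 and the shifted long-term Z to 4.5.

    # Sketch of short- vs. long-term Z; spec limits and process values
    # are illustrative assumptions, not data from the text.
    USL, LSL = 10.6, 9.4           # assumed spec limits
    mu, sigma_st = 10.0, 0.1       # assumed process mean, short-term sigma

    z_st = min(USL - mu, mu - LSL) / sigma_st  # short-term Z (here 6.0)
    z_lt = z_st - 1.5                          # conventional 1.5-sigma shift
    print(f"Z short-term = {z_st:.1f}, Z long-term = {z_lt:.1f}")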
PROJECT MEMBER AND BB DFSS TRAINING The most intensive DFSS training is reserved for the project member and black belt. Within this training, however, there are typically two alternatives for the actual delivery of the DFSS methodology. The first is to do it on an as-needed basis, which implies that the training may last as long as the project does, and the second is to offer the entire training as a block. In the first case, the material is presented and a facilitation of the project follows. The actual process is divided into the components of the DCOV model and generally lasts about 1 week per element. The second option is to offer the training in two 1-week blocks with about a 1-month break in between to facilitate implementation of the concepts learned. In week 1, the define and characterize phases are presented, and after the 1-month lapse the second week is spent
discussing the optimize and verification phases. The training material for both options is exactly the same. The only difference is that with the first option the specificity of the design project dictates the pace and application of the tools.
WEEK 1 Introductions Agenda Training ground rules: • If you have any questions, please ask! • Share your experiences. • When we take frequent short breaks, please be prompt in returning so we can stay on schedule. • There will be a number of team activities; please take an active role. • Please use name tents. • Listen as an ally. • The goal is to complete your projects! Exploring our values Review the DMAIC model • Background and history of Six Sigma • Six Sigma scale • Philosophy • Significance of z scores • The move from three to four to five to Six Sigma DFSS business case is a very important issue and should be discussed at length. Obviously, each organization has its own situation; however, the following items are a good starting point for discussion: • Current customer perception of performance • Future customer perception of performance • Current warranty cost • Future warranty cost • Current customer satisfaction performance • Future customer satisfaction performance • Current competitive advantage • Future competitive advantage Link between DMAIC and DCOV • DMAIC improves customer satisfaction by eliminating nonconformances after they have occurred. It does this in three primary ways: 1. Statistical problem solving 2. Process variability reduction 3. Process capability assessment • DCOV improves customer satisfaction by preventing or avoiding nonconformances by improving the actual design relative to cost and sensitivity to noise over time.
DFSS strategy • Target current and future products • Beta projects • Breakthrough systems • New product or service lines • High leverage of current customer issues • Depend on executive leadership • Educate management first • Engineering executive leads • Specific process participation • Be compatible with Six Sigma • Y = f(x); Y = f(x,n) • Six Sigma infrastructure (DMAIC foundation or something similar i.e., TQM, QOS, etc.) • Build on processes that work • Organizational timing requirements • APQP • Robustness DFSS deployment strategy • Train executives (0 to 3 months) • Conduct beta projects to test training logistics and communicate the DFSS methods to interested teams (2 to 4 months) • Expand DFSS to other teams (4 to 6 months) • Apply DFSS to all new products (6 to 12 months) • Apply DFSS to the entire organization (12 to 18 months) • DFSS starts with educating the management team with a 4- to 8-hour review covering the following: • Understand the process • Select projects for DFSS use • Identify the participants to receive comprehensive training • Identify an executive to champion each project • The project team receives: • An overview of the process • Training in scorecard preparation • Tool training appropriate for the team’s current phase in the product-development process Roles and responsibilities • The champion role • Manages the process through the team. • Ensures that the team has all the resources necessary in time to follow the process. • Ensures the review process of the scorecards is incorporated within the overall executive review process for the team. • As the team approaches a new phase within the process requiring further training, the executive requests that training. (Typical additional
training is in the areas of statistical tolerancing, FEA, FMEA, parameter and tolerance design, etc.) Executive role • Determines business goals, in terms of cost of poor quality, within particular areas of concern. • Works with the deployment director to select the few significant issues on which to use DFSS for both current issues and future programs. • Declares himself the champion for these issues and commits to using DFSS to resolve them. • Schedules regular status reviews with the affected teams. • Ensures sufficient resources are available to the team to create success. • Integrates meetings with routine business reviews. • Requests additional training as needed. Deployment director • Trains executive teams in the 1-day DFSS overview. • Coordinates selection of DFSS projects assigned to project teams. • Works with DFSS teams within assigned projects to deliver training to affected executives, managers, black belts, and DFSS engineers. • Attends regular status reviews with the affected teams. • Coordinates the delivery of additional training as required. • Is a resource for resolving Six Sigma and DFSS issues. Technical manager • Oversees and coordinates the work of the DFSS process management teams (PMTs) to deliver the outcomes required at each milestone of the project. • Works with the DFSS black belt to define the work required by each PMT member to generate appropriate scorecard and milestone deliverables. • Provides the appropriate resources and facilities needed to meet the DFSS and milestone deliverables. • Acts as the project champion to the black belt to assure that the black belt’s assignments are appropriate to his skills and the use of DMAIC. Black belt • Works with the technical manager to define the work required by each PMT member to generate the appropriate scorecard and milestone deliverables. • Resolves issues associated with the DFSS project that are best solved using DMAIC. • Acts as a DMAIC resource to the team, including teaching concepts to the team as needed. Project member • Generates the outcomes normally associated with a project member. • Uses the DFSS methods to understand the underlying transfer function associated with the targeted system.
• Generates the scorecard to predict the quality level of the targeted system at the appropriate milestone. This will entail gathering data regarding the product design geometry as well as manufacturing and assembly process capability. DFSS is a methodology that identifies explicitly the relationship between design and service, product, or process. Its intent is to satisfy the customer by either enhancing current designs or completely redesigning the current design. What is new with DFSS • Scorecard — perhaps the most important item in the entire DFSS methodology. • Key QOS deliverable quantification — prioritization and usage of appropriate and applicable data. • Transfer function — a function that characterizes critical-to-satisfaction metrics in terms of design parameters. The focus is on robustness, i.e., y = f(x,n), where y is the customer’s functionality, x is the requirement for that y, and n is the noise that x has to operate under so that y is delivered. Key characteristics of DFSS • Data-driven. • Provides leverage to existing tools within an organization, e.g., QOS, warranty, APQP, organizational verification system, organizational reliability programs, etc. • Provides a template for applying statistical engineering, including simulation studies. • Delivers quality to the product by focusing on the subsystem and moving into systems. • Provides a vehicle of understanding of the y = f(x,n) for components to subsystems to systems. • Forces the use of systems engineering in the design process in conjunction with APQP timing requirements. The scorecard: there are many ways to track the progress of the project. The team should develop its own, so that the following information may be captured: CTCs Transfer function that delivers the CTS attribute Transfer function quantified in such a way that it predicts the quality of delivery of the attribute Appropriate and applicable information about the project on the basis of which related business decisions may be made A transfer function is the mathematical equation that allows you to design a quantitative relationship between dependent and independent variables. Transfer functions can be derived from two sources:
1. First principles • Known equations that describe functions (they may be identified from physics, including function structure flows) • Analytic models, simulation models (finite element analysis, Monte Carlo, etc.) • Drawings of systems and subsystems (evaluation of tolerancing, geometry of design, and mass considerations) • Design of experiments (classical design, Taguchi, response surface methodology, and multivariate analysis) 2. Empirical data • Correlation • Regression • Mathematical modeling • Uses of transfer function — recognizing that not all variables should be included in a transfer function, we identify and focus on only the critical few xs for achieving y. We then use the transfer function primarily to: • Estimate the mean and variance of Ys and ys • Cascade the customer requirement • Optimize and make trade-offs • Forecast customer experience • Set tolerances for significant xs • When a transfer function is not exactly known, there are two options: 1. Use surrogate data from similar designs. 2. Build a bundle of transfer functions; the rationale for such an idea is the fact that we know that we will never have 100% of all customer metrics in terms of transfer function, just as we will never be able to design something with 100% reliability [recall: R(t) = 1 – F(t)]. To be sure, we already may know some of the transfer functions; however, if we are in doubt, we may combine subsystems so that the outcome of a system may be represented with the best option of a transfer function. Overview of DFSS Define CTSs In this phase we: Identify CTS drivers and Y Establish operating window for each chosen Y for new and aged conditions • Inputs • Consumer insights • Quality and customer satisfaction history • Mining data analysis • Functionality, serviceability, and corporate and regulatory requirements • Integration targets • Brand profiler
• Quality function deployment (QFD) • Conjoint analysis • TRIZ results • Design specifications • Business strategy • Competitive environment • Technology assessment • Market segmentation • Benchmarking • Required technical activity • Select Ys • Define customer and product needs/requirements • Relate needs/requirements to customer satisfaction; benchmark • Prioritize needs/requirements to determine CTS Ys • Peer review • Outputs • Kano analysis • CTS scorecards • Y relationship to customer satisfaction • Benchmarked CTSs • Targets and ranges for CTS Ys • Characterize the system • In this phase we: • Flow CTS Ys down to lower level ys, e.g., Y to y to y1, y2… yn • Relate ys to CTQ parameters (xs and ns), y = f(x1, … xk, n1… nj) • Characterize robustness opportunities • Inputs • Kano Diagram • CTS Ys, with targets and ranges • Customer satisfaction scorecard • Functional boundaries • Interfaces from VDS/SDS • Existing hardware FMEA data, etc. • Required technical activity • Identify functions associated with CTSs • Deconstruct Y into contributing elements and identify xs and ns • Create function structure or other model for identified functions • Select ys that measure the intended function • Identify control and noise factors • Create general or explicit transfer function • Peer review • Outputs • Function diagram • Mapping of Y to critical functions, ys • P diagram, including critical • Technical metrics, ys
• Control factors, xs • Noise factors, ns • Transfer function • Scorecard with target and range for ys and xs • Plan for • Optimization • Verification (robustness and reliability checklist) • Optimize product/process • In this step we • Understand capability and stability of present processes • Understand the high time-in-service robustness of the present product • Minimize product sensitivity to noise, as required • Minimize process sensitivity to product and manufacturing variations, as required • Inputs • Present process capability (µ target and σ) • P diagram, with critical ys, xs, ns • Transfer function (as developed and understood to date) • Manufacturing and assembly process flow diagrams, maps • Gauge R&R capability studies • PFMEA and DFMEA data • Optimization plans, including noise management strategy • Verification plans: robustness and reliability checklist • Required technical activity • Optimize product and process • Minimize variability in y by selecting optimal nominals for xs • Optimize process to achieve appropriate σx • Ensure ease of assembly and manufacturability (in both steps above) • Eliminate specific failure modes • Update control plan • Peer review • Outputs • Transfer function • Scorecard with estimate of σy • Target nominal values identified for xs • Variability metric for CTS Y or related function, e.g., range, standard deviation, S/N ratio improvement • Tolerances specified for important characteristics • Short-term capability, “z” score • Long-term capability • Updated verification plans (robustness and reliability checklist) • Updated control plan • Verify results • In this step we
• Assess actual performance, reliability, and manufacturing capability • Demonstrate customer-correlated performance over product life • Inputs • Updated verification plans (robustness and reliability checklist) • Scorecard with predicted values of y, σy based upon µx and σx • Historical design verification and reliability results • Control plan • Required technical activity • Enhance tests with key noise factors • Conduct physical and analytical performance and key life tests • Improve ability of tests to discriminate good/bad commodities • Apply test strategy to maximize resource efficiency • Peer review • Outputs • Test results (product performance over time, i.e., Weibull, hazard plot, and so on) • Long-term process capability estimates • Scorecard with values of y, σy based on test data • Completed robustness and reliability checklist with demonstration matrix • Lessons learned captured in system or component design specification, etc.
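The verify inputs above call for a scorecard with predicted values of y and σy based upon µx and σx. One common way to produce such predictions, when a transfer function is available, is first-order (delta-method) propagation; the transfer function and input statistics below are illustrative assumptions, not values from the text.

    # First-order (delta-method) propagation of mu_x, sigma_x through an
    # assumed transfer function to predict y and sigma_y for a scorecard.
    import math

    def f(x1, x2):
        # hypothetical transfer function, for illustration only
        return 2.0 * x1 + 0.5 * x2 ** 2

    mu = (10.0, 5.0)      # assumed mu_x for each design parameter
    sigma = (0.2, 0.1)    # assumed sigma_x for each design parameter
    eps = 1e-6

    y_pred = f(*mu)
    var_y = 0.0
    for i in range(len(mu)):
        hi = list(mu); hi[i] += eps     # numerical partial derivative
        lo = list(mu); lo[i] -= eps     # at the nominal point
        d = (f(*hi) - f(*lo)) / (2 * eps)
        var_y += (d * sigma[i]) ** 2
    print(f"predicted y = {y_pred:.3f}, sigma_y = {math.sqrt(var_y):.4f}")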
DCOV MODEL IN DETAIL
The Define Phase To begin the process of DFSS, the engineer or designer must understand the customer. In fact, the engineer must also understand the customer’s “drive” or “insight,” as some call it. This is because a customer focuses on functionality of the product or service, and her judgment is based on emotional or rational responses. It turns out that the judgment is communicated as an intent, and the actual purchase is the result. The engineer or designer, on the other hand, focuses on requirements, which are the translation of functionality into engineering specifications. This is the function of the define phase. In fact, the translation turns out to be an iterative process using several tools and methodologies to come up with the now famous coding of the requirements as Y→y→x→x1 and so on. Cascading is an important yet time-consuming process. But however systematic and thorough it may be, the fact remains that no steps are done with 100% accuracy; the cascading is not always a one-to-one relationship, and indeed the customer’s information may not be available. By the same token, given limited and less-than-perfect information, we as engineers must optimize the use of that information. We do that by demanding correctness on critical factors, focusing only on critical factor transference, and designing our products, services, or processes to Six Sigma requirements based only on those factors. How do we do that? By collecting the appropriate and
applicable customer information, i.e., demographics, lifestyles, usage habits, and product or service preference and by understanding the transfer function. Tools to consider • Customer understanding • Interviews — asking provocative questions • Observation — watching and recording behaviors of what the customer is doing in daily life • Immersion — stepping into another person’s life • Introspection — imagining yourself in the role of the consumer • Market research • Conjoint analysis • Discriminant analysis • Multivariate analysis • Warranty data • Library, web, professional sources • Focus groups • Mind map — a way of capturing customer environment or creating an image of the product’s or service’s use • Start with several ideas connected to one central function • Each of these ideas can be connected to other ideas of their own • Activity diagrams are based on the user’s environment. They are usually constructed as a process flow diagram, except the arrows represent order of activities, and the boxes represent user activity. (Activities may be parallel or sequential.) Activity diagrams help in understanding the Ys. • Show life cycle • Represent activities that occur as the customer interacts with the product or service • Kano Model • Basic quality • Performance quality • Excitement quality • Quality over time • Organizational knowledge • Warranty data • Surrogate data • Data mining • QFD is a planning tool that incorporates the voice of the customer into features that satisfy the customer. QFD is an excellent framework to organize Ys, ys, and xs. However, it does not generate them. The idea of QFD is to • Capture relationships between customer wants and design variables • Deploy the design in such a way that the customer is satisfied • Make sure that the key relationships are more rigorous through the establishment of transfer functions
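The QFD framework described above organizes the Ys, ys, and xs; its core arithmetic (relationship strength times customer importance, summed per technical requirement) is small enough to sketch. The needs, requirements, and ratings below are invented for illustration and follow the 9/3/1 convention discussed elsewhere in this chapter.

    # Sketch of the QFD column-weight arithmetic; all names and ratings
    # are illustrative assumptions.
    importance = {"easy to open": 5, "stays sealed": 4, "low cost": 3}
    # rows: customer needs; columns: technical requirements (9/3/1, 0 = none)
    relationships = {
        "easy to open": {"open force": 9, "seal strength": 3, "material cost": 0},
        "stays sealed": {"open force": 3, "seal strength": 9, "material cost": 1},
        "low cost":     {"open force": 0, "seal strength": 1, "material cost": 9},
    }

    weights = {}
    for need, row in relationships.items():
        for req, strength in row.items():
            weights[req] = weights.get(req, 0) + strength * importance[need]

    # the highest-weighted technical requirements become CTQ candidates
    for req, w in sorted(weights.items(), key=lambda kv: -kv[1]):
        print(f"{req}: {w}")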
Formulating CTSs: Use measurable CTSs by focusing on the customer’s environment, emotions, and activities to produce the first draft of items that are critical to satisfaction. Make sure that you as an engineer or designer understand that criticality is a relative term. Criticality is an issue of measurement. To be effective, one must understand the theory of mathematical comparisons (measurement theory). Effective critical items are those items that are measured on a ratio scale. • Ordinal scales • Interval scales • Nominal scales — binary scale (0 and 1) • Ratio scales • Deliverables/checklist of define phase — have you considered the following? • Form a cross-functional team • Determine project scope • Understand customer needs • Identify corporate and regulatory requirements • Consider product strategy and priorities • Analyze quality history • Develop a Kano model • Identify CTS Ys or surrogates as appropriate • Document relationships between customer satisfaction and CTS identified items • Complete a CTS scorecard • Conduct peer review • Obtain project champion approval • Identify the transfer function The Characterize Phase This phase requires the output of the define phase — particularly the Kano diagram since we are trying to establish four particular items: 1) modeling function (functions vs. constraints), 2) function structures (activity diagrams, flow chains, Y-function matrix, function-function matrix), 3) ideal function, and 4) metrics for ys (function measurement, Y-y matrix). How do we do this? Tools to consider: • Concept selection • Pugh selection • Value analysis • System diagram • Structure matrix • Functional flow • Interface • QFD • TRIZ • Conjoint analysis
• Robustness • Reliability checklist • Signal process flow diagrams • Axiomatic designs • P diagram • Validation • Verification • Specifications
Function modeling — a form-independent statement of what a product or service does, usually expressed as a verb-noun pair or an active verb-noun phrase pair. (For more information on this, see Volume 6.) There are two options for using function modeling: 1) function structures — an input/output model of functional nodes interconnected by interaction flows and 2) function trees — a hierarchical breakdown of the overall function. All functions may have subfunctions, and several options of tools exist in defining them. Some are: FAST diagrams, bottom-up trees, top-down trees, function structures, design structure matrices, finite-state machines, Hartley-Pirbhai diagrams, entity relation diagrams, and others. Why do function modeling? Because it: Provides the structure to enable mapping of Ys to ys to xs Enhances variability analysis through decomposition Ensures complete and accurate identification of all factors by concentrating on what and not how Provides a direct relationship to customer needs Provides a physical model of the system Links the functions with the left portion of the design FMEA Functions vs. constraints • Whereas a function is what the product or service does, a constraint is a property of the system, not something the system does. For example, cost, reliability, weight, appearance, etc. are all constraints since none of them provides a function to the item. Typically, all elements in the system contribute to a constraint, not just one element. We cannot, therefore, add on a subsystem to improve the constraint. However, what we can do is model the constraints with metrics. • Function structures may be developed for both existing and new designs. In the case of existing systems, the majority of the work may be transferred from the FMEA. In the case of a new design, the following seven steps are recommended: 1. Create an overall function model for the product. (Remember, the flows are: energy, material, information.) • Define the overall function model — top-level definition • Define the inputs and outputs from the Ys 2. Develop an activity diagram • Define the beginning and termination points of the life cycle • Establish user activities • Clearly distinguish parallel and linear activities
• Define the system boundary of the product or service • For each user activity, compare the Ys and ask what device functions are needed. (Very important: if the activity is not important, or you do not know how to measure it, do not include it in the activity diagram.) 3. Map Ys (customer needs) to input flows. For each Y, relate the system’s input flows to the Y. These input flows must be acted upon by the product to achieve the Y. If there are subfunctions, list those too. List the importance level for each Y. 4. For each flow, create a function chain from input to output. (Hint: think of yourself as the flow going through the system.) Start with flows for the most important customer needs. Play act the flow. List the answers as subfunctions chained together by the flows. Carry the chain forward until the flow leaves the system or until the flow needs to interact with another flow. 5. Combine the chains into an overall function structure. Combine the chains by connecting flows between each sequence, adding subfunctions that interact or provide control states, or removing subfunctions that are redundant. Combine and refine; end based on: Are the subfunctions atomic, i.e., can they be fulfilled by a single, basic solution principle that satisfies the function? Is the level of detail (granularity) sufficient to address the customer needs? 6. Validate the functional decomposition. This is a preliminary validation. However, it is important, and for each validated item there must be a check, guideline, or action associated with it. Are the user activities covered by the functional model? Are physical laws maintained? Are all functions form-independent? Are all subfunctions atomic? 7. Verify the model against the Ys (customer needs). A substep of validation is verifying that the critical Ys are represented in the functional model. Identify the subfunctions or chain of subfunctions that satisfy each Y. (Does a main function exist in this chain that addresses the Y?)
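As a minimal data-structure sketch of the mapping exercise above (steps 3 through 7), the fragment below traces customer Ys to chains of subfunctions and checks that every critical Y is covered by the function structure. All names are hypothetical and are only meant to show the bookkeeping involved.

    # Hypothetical function chains and Y-to-function mapping; the check
    # mirrors step 7 (verify the model against the Ys).
    function_chains = {
        "import liquid": ["receive cup", "position cup", "transfer liquid"],
        "heat liquid":   ["convert energy", "transfer heat", "regulate temperature"],
    }
    y_to_functions = {
        "brews quickly": ["transfer heat", "regulate temperature"],
        "easy to fill":  ["receive cup", "position cup"],
    }

    all_subfunctions = {f for chain in function_chains.values() for f in chain}
    for y, funcs in y_to_functions.items():
        missing = [f for f in funcs if f not in all_subfunctions]
        status = "covered" if not missing else f"MISSING {missing}"
        print(f"{y}: {status}")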
IDEAL FUNCTION AND P-DIAGRAM
When formulating the transfer function, the challenge is to select metrics for ys that represent intended function whenever possible. Then, optimizing for y will maximize the intended function and automatically minimize energy flowing to unintended functions; the metrics may come from energy, material, or information flows. To complete the transfer function y = f(x,n) we need to identify control factors (xs) and
noise factors (ns). The energy, material, and information flows in function structures will help identify potential xs and ns. The final list of signal, control, and noise factors is typically captured in the P diagram.
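Since robustness at this stage is judged by how well y holds up across the noise conditions captured in the P diagram, one common summary metric is Taguchi's nominal-the-best signal-to-noise ratio, S/N = 10 log10(mean^2/variance). The sketch below uses invented measurements, and this is only one of several S/N formulations.

    # One common nominal-the-best S/N ratio; the y measurements under
    # noise conditions are illustrative assumptions.
    import math

    def sn_nominal_the_best(ys):
        mu = sum(ys) / len(ys)
        var = sum((y - mu) ** 2 for y in ys) / (len(ys) - 1)
        return 10 * math.log10(mu ** 2 / var)

    baseline  = [10.2, 9.8, 10.5, 9.6, 10.1]   # y under noise, current design
    candidate = [10.1, 10.0, 10.2, 9.9, 10.0]  # y under noise, candidate design
    print(f"baseline  S/N = {sn_nominal_the_best(baseline):.1f} dB")
    print(f"candidate S/N = {sn_nominal_the_best(candidate):.1f} dB")

A higher S/N indicates a design whose function is less disturbed by the noise factors, which is exactly the improvement the optimize phase pursues.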
IDENTIFYING TECHNICAL METRICS The major objective of the characterize phase is to convert customer Ys into the engineering world as critical metrics — ys. The process for this conversion is to • Relate customer needs (Ys) to functions, then determine the criticality of each function. (Current information: Ys → functions from function structures. To help us in this, we may use the “map function of needs,” which shows the flow from Ys to the flow to function chain to primary function.) • Create the matrix representation — a Y-function matrix to determine the criticality of each function. (Rows are Ys; columns are functions. A cell entry shows that there is a relationship between Y and function. The degree of this relationship is identified with a rating of importance.) • Create a Y-function matrix and QFD. Very important to note: the Y-function matrix is not the QFD matrix! These are two distinct matrices. The first one is the Y-function matrix; the second is the function-y matrix. Creating both of these will form the QFD matrix. The flow is represented as: a) Ys to functions, b) functions to ys, and c) Ys to ys, resulting in the QFD Y to y matrix. • Create customer need importance matrix — rows are Ys; columns are functions. • Distribute importance — use importance scale criteria. A typical one is 1–5. • Calculate function weights — a standard QFD practice. However, this is not yet an allocation to ensure criticality. Make sure you check the extreme values. (We are looking for a one-to-one relationship — one function, one Y.) To check, ask the following questions: 1. Does each important Y have at least one important function to cover it? 2. Is there only one Y associated with each important function? • The relationship meaning is to tell us how much proportional increase in Y is gained with an increase in functional performance, as opposed to how much the Y is related to the function. • Identify critical functions and interfaces. The steps are: a) Measure functions — for each critical function examine the input/output subfunctions; examine the input and transformed (output) flows. The result could be a y. b) Measure interfaces — for each high interface in the matrix ask 1) what is the flow? 2) why is there a problem or benefit? 3) what can be measured to quantify this flow? These are possible ys. c) Organize all possible ys into an effective set of ys — take the possible ys from all functions and interactions and create a new
relationship matrix with rows y (customer needs) and columns y (x,n) and rank them accordingly. d) Check for basic quality — for each function and interface ask: if this function or interface fails, how does the customer become dissatisfied? The answer is a new possible Y, the failure mode. In fact, it measures a latent, basic quality. For each new Y determine its importance. Critical to quality — we identify the CTQs for xs and ns by: a) identifying y importance, b) flow tracing, and c) understanding the system boundary. Typical tools used in this phase are: P diagrams, DOE, correlation, regression, flow analysis, known equations, simulation tools, sensitivity analysis. • Sensitivity analysis works with numerical models, not hardware. It determines how a solution varies locally about a point in the input space (noise or control). The mathematical question is whether the derivative is large or small. Mathematically, is the derivative times the variation range large or small? The reader will notice that this sensitivity analysis is quite different from the DOE sensitivity analysis, which runs the inputs throughout the entire ranges, not through small and local variations. Three approaches are typical in conducting sensitivity analysis: 1) take mathematical derivatives and analyze the equation, 2) do limit analysis by hand, and 3) use simulation software — Monte Carlo or something else. Generating concepts — when existing designs are inherently nonrobust, new concepts should be required. After all, the goal of Six Sigma is to design systems and products that provide precision performance without precision components. This is true because precision components are expensive. So, rather than jumping on the bandwagon of changing components — which, by the way, we can change any time — we should be looking at changes in a) parameter optimization (Taguchi is very useful here), b) improvement in the manufacturing process, c) increase in component precision, and d) tolerance tightening. The basic process for generating concepts is: 1) understand the primary customer need and engineering specifications, 2) decompose the product functions, 3) search for solutions for product functions and architecture, and 4) combine solutions into concept variants. Typical intuitive concept generation methods are: • Brainstorming • Mind mapping • Method 6–3–5 — the process is conducted in the following seven steps 1. Arrange team members around a table. 2. Each member writes three ideas for the primary function — usually five or less. The ideas are expressed clearly and positioned on a large piece of paper in thirds or fourths depending on the number of ideas. 3. After x minutes of work on the concepts, members pass their ideas to the person on their right.
4. For the next x minutes, team members modify, without erasing, the ideas on the sheet, with the option of adding an entirely new concept. 5. Passing of the idea sheets continues until original sheets return, and the round ends. With sufficient time intervals, the process is repeated five times. 6. After generating ideas for each of the primary product functions, the entire process is repeated to develop alternative layouts and combined concept variants that utilize a summary of the solution principles generated for each function. 7. The ideas are accumulated and processed accordingly. • Morphological analysis — the process for this analysis is very simple; it involves a) listing important functions, b) listing each important subfunction, c) identifying the current solution, d) generating new ideas for each subfunction, and e) configuring and laying out permutations. • Directed and logical methods • Design catalogs • Functional tolerancing • TRIZ • Evaluating concepts • Pugh analysis — a process of evaluating design concepts against identified criteria, using the analysis to identify additional alternatives and selecting one or more concepts for further refinement or development. It is used as the basis for trade analysis. • Trade study — a trade study is a technical analysis comparing the technical, cost, risk, and performance attributes of two or more competing alternative solutions against a predefined set of evaluation criteria, in order to define the optimum solution. The process is: • Define decision • At correct level. • Consistent with prior decisions based on user needs. • Define evaluation criteria • Measurable and understandable (results oriented). • As mutually independent as possible (avoid redundancies). • Consistent with policies and regulations and organizational (internal and external) constraints. • Categorize evaluation into shalls and targets • Shall — mandatory for success. • Target — desirable, but not mandatory. • Determine weighting factors and obtain team buy-in • Determine relative importance for each shall and target. • Obtain multiple opinions from team on each weighting factor. • Assign weighting values. • Assess factors and weights. • Force rank.
• List set of alternative solutions • Use the Pugh analysis. • Review prior similar designs or products. • Define raw score for each function • Base scoring on data when possible. • Use expert engineering judgment. • Obtain several opinions and combine to obtain a value. • Screen alternatives through “shalls” • Be realistic in your requirements. • May not have any “shalls,” which is OK. • Eliminate any alternative that does not meet all “shall” criteria. • Keep “close calls” available for further study. • Compute weighted score for each alternative • Multiply raw value by weighting factor. • Record the weighted value in a table. • Sum factors for each alternative. • Assess risk of implementing highest value solutions based on • Customer preference • Investment • Product/service performance • Project • Introduction • Dollar overrun • Understand the sensitivity of the score and the raw values and weighting factors • Make recommendation or decision Verification and optimization planning • Create program-specific reliability and robustness checklist (a very good practice to follow) • Develop a P diagram — identify all parameters (factors and noises) • Generate experimental (Latin hypercube) samples (DOE) • Run a CAE model to calculate response • Create response surface model (RSM) • Perform reliability assessment (probability analysis) • Perform robust optimization (minimize variability (σ) while achieving the target (µ)) • List CTS Ys as functional requirements • Identify potential error states (failure modes) from P diagram or FMEA • List xs and ys as design parameters that deliver the CTS Ys • Identify potential noises from the five generic categories • Generate failure mode vs. noise interaction matrix • Initiate noise factor management strategy for key characteristics controlled by manufacturing (piece-to-piece) • Initiate potential test strategy, capturing information from design verification
• Indicate relationship between failure modes and test strategy (test strategy must address failures) • Indicate relationship between noises and test strategy (test strategy must address important noises) • Run tests and show results Deliverables/checklist of the characterize phase • Add necessary suppliers to cross-functional team • Model system function • Identify critical functions and interfaces (interfaces are: energy transfer, physical proximity, information transfer, material) • Select metrics related to intended function • Establish transfer function relationships both in terms of Y = f(x) and Y = f(x,n) • Formulate noise management strategy • Initiate verification planning using reliability and robustness checklist (remember: this is an organization-dependent scorecard) • Enter data in design and manufacturing scorecard (this is also organization dependent) • Conduct peer review • Obtain project champion approval • Document information related to transfer functions
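To make the trade-study mechanics referenced above concrete, the following is a minimal sketch of the shall-screening, weighted-score, and force-rank steps. The criteria, weights, raw scores, and the MIN_PERFORMANCE threshold are hypothetical illustrations, not values from any real study; any Python environment will run it as-is.

```python
# Minimal trade-study scoring sketch. All criteria, weights, and raw
# scores below are hypothetical illustrations, not values from a real study.

criteria = {            # weighting factors agreed on by the team (sum to 1)
    "cost": 0.30,
    "performance": 0.45,
    "risk": 0.25,
}

# Raw scores (1-10) for each alternative against each criterion, based on
# data where possible or on expert engineering judgment.
alternatives = {
    "concept_A": {"cost": 7, "performance": 6, "risk": 8},
    "concept_B": {"cost": 5, "performance": 9, "risk": 6},
    "concept_C": {"cost": 8, "performance": 4, "risk": 7},
}

# "Shall" screening: drop any alternative that misses a mandatory criterion.
MIN_PERFORMANCE = 5   # hypothetical shall: performance score must be >= 5
survivors = {name: s for name, s in alternatives.items()
             if s["performance"] >= MIN_PERFORMANCE}

# Weighted score: multiply each raw value by its weighting factor and sum.
totals = {name: sum(w * s[c] for c, w in criteria.items())
          for name, s in survivors.items()}

# Force rank the surviving alternatives, highest weighted score first.
for name, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: weighted score = {total:.2f}")
```

In this toy example concept_C fails the shall screen and is eliminated before scoring, which mirrors the rule of removing any alternative that does not meet all “shall” criteria before weighted comparison.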
WEEK 2 The Optimize Phase
Review week 1
General content questions
Specific project questions
Definition of optimize — the classic dictionary definition is: to make as good or effective as possible or to make the most effective use of. In the DFSS methodology, this means that we have to answer three questions: 1) How can we satisfy multiple constraints? 2) What operating region is least sensitive to noise? and 3) What settings of the factors give us the target response? To answer these questions, the typical tools and methodologies we use are: • Parameter and tolerance design • Simulation • Taguchi • Excel’s solver • Statistical tolerancing • QLF • Design and process FMEA • Robustness • Reliability checklist
• Process capability • Gauge R & R • Control plan The purpose, then, of the optimize phase is to • Satisfy and delight the customer • Optimize to achieve desired target and variability levels in metrics critical to satisfaction, i.e., Ys. The greatest opportunity for optimization is early in the design. In this stage, the range over which nominal values of xs may vary is at its widest and, therefore, so is the opportunity for optimization. We optimize by searching for control factor (x) settings that satisfy the constraints, make responses (Y or y) insensitive to noise, and ultimately achieve the target response. The requirements for optimization then are: • For hardware experimentation • Range • Shifts or patterns over time • Physical understanding of failure modes induced by noise • For analytic experimentation • Mean • Standard deviation or range • Shift or patterns over time Optimization approaches • Mathematical programming. What is mathematical programming? In a mathematical programming or optimization problem, we seek to minimize or maximize a real function of real or integer variables, subject to constraints on the variables. The term mathematical programming refers to the study of these problems: their mathematical properties, the development and implementation of algorithms to solve these problems, and the application of these algorithms to real-world problems. Note in particular that the word “programming” does not specifically refer to computer programming. In fact, the term mathematical programming was coined before the word programming became closely associated with computer software. This confusion is sometimes avoided by using the term optimization as a synonym for mathematical programming. In DFSS we use this approach with an explicit transfer function or model that can be incorporated into an automated optimization algorithm. This is where Excel’s solver will work very well. For example: given y = f(x1, x2, …, xk, n1, n2, …, nm), minimize σy such that y = T (target) or T – b < y < T + b; lower range limitj < xj < upper range limitj; lower range limitj < nj < upper range limitj — by changing x1, x2, …, xk. Excel’s solver will do the rest. (A minimal numerical sketch of this robust-optimization formulation appears after the variability discussion below.) • Experimentation (statistical methods) • Orthogonal arrays • Response surface methods • Sequential experimentation • Design and analysis of computer experiments
• Heuristics • Genetic algorithms (for a plethora of information see: www.aic.nrl.navy.mil/galist) Variability: transmission from x to y — In Volume 6 we talked about the mathematics of DFSS sigma. Let us recall that

$$\sigma_y = \left[\left(\frac{\partial y}{\partial x_1}\right)^2 \sigma_{x_1}^2 + \left(\frac{\partial y}{\partial x_2}\right)^2 \sigma_{x_2}^2 + \cdots\right]^{1/2}$$
While the focus of the DMAIC model is to reduce σ²x1 and σ²x2 (variability), the focus of the DCOV is to reduce the (∂y/∂x) terms (sensitivity). This is very important, and that is why we use the partial derivatives of the xs to define the Ys. Of course, if the transfer function is a linear one, then the only thing we can do is control variability. Needless to say, in most cases we deal with polynomials, and that is why DOE and especially parameter design are very important in any DFSS endeavor. We want to exploit nonlinearities in the transfer function. • Variability, noise, reliability, and robustness • Variability — the performance of products varies around the intended target due to variability (noise) in manufacturing, operating conditions, etc. • Noise — manufacturing, deterioration, neighboring systems, customer usage, and environment. • Reliability — probability of a product performing its intended function for a specified life under the operating conditions encountered. • Robustness — capability of a product to perform its intended function consistently in the presence of noise during its intended life. Robust designs produce tight distributions around a target, which minimizes quality loss. (Note: prototype analysis does not verify robustness. It verifies functionality on a single, usually hand-selected, sample.) • Taguchi’s optimization rules • Step 1. Reduce variability • Step 2. Adjust to target (mean or slope) • Traditional DOE • Full factorial experimental designs • Fractional experimental designs • Robust DOE – Taguchi • Ideal function • Noise strategy • Signal-to-noise ratio • Two-step optimization • Confirmation
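The following is a minimal sketch of the robust-optimization formulation referenced above: it estimates σy by first-order transmission of variation and searches for control-factor settings that minimize it while holding the mean response on target, much as one would set up Excel’s solver. The transfer function, noise standard deviations, target, and bounds are all hypothetical illustrations; numpy and scipy are assumed available.

```python
# Sketch: first-order transmission of variation and robust optimization.
# The transfer function, noise levels, target, and bounds are hypothetical.
# Assumes numpy and scipy are installed.
import numpy as np
from scipy.optimize import minimize

TARGET = 10.0                       # required mean response, y = T
SIGMA_X = np.array([0.1, 0.2])      # assumed std devs of x1 and x2

def f(x):
    """Hypothetical nonlinear transfer function y = f(x1, x2)."""
    x1, x2 = x
    return x1 * x2 + 0.5 * x2 ** 2

def sigma_y(x, h=1e-5):
    """First-order transmitted variation:
    sigma_y^2 = sum_i (dy/dxi)^2 * sigma_xi^2 (central differences)."""
    grad = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                     for e in np.eye(len(x))])
    return float(np.sqrt(np.sum((grad * SIGMA_X) ** 2)))

# Minimize sigma_y subject to y = T and bounds on the xs -- the same
# formulation described for Excel's solver above.
result = minimize(
    sigma_y,
    x0=np.array([2.0, 3.0]),
    bounds=[(0.5, 10.0), (0.5, 10.0)],
    constraints=[{"type": "eq", "fun": lambda x: f(x) - TARGET}],
)
print("robust settings:", result.x, " sigma_y:", result.fun)
```

Because the transfer function is nonlinear, different x settings that all hit the target transmit different amounts of variation; the optimizer exploits that nonlinearity, which is exactly the point of parameter design.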
Differences between analytical and physical experimentation • There is no uncertainty (random error) in analytical experimentation. Factorial experimentation and its statistical analysis are designed to cope with random error, so replication is useless here. • Usage of the same variable level more than once generates “pseudo-replicates” and may be a waste of time in analytical experiments. • Experiment logistics are not an issue (parameter values can be adjusted more easily than in hardware), but a minimum number of computer runs is still very important. • Nonlinearities can be fully exploited, as opposed to linear models (two- or three-level factorial experiments). • In addition to the response, sensitivities (derivatives) are easily attainable in some cases. • Iterative optimization is practical and often preferred. • Statistical information on noise factors is assumed known (mean, standard deviation, etc.). • When noise-factor representations are limited in the model, surrogate noises must be used. • Parameter and tolerance designs can be easily performed simultaneously. Deterministic analysis • Inputs • Nominal or worst-case values of dimensions, materials, loads, etc. • Process • Finite element analysis • Numerical model • Multivariate techniques • Regression equation • Outputs • Point estimates (performance, life) • Safety factor or design margin • Limitations • Limited incorporation of real-world variability • Lack of up-front robustness • Poor correlation to hardware test performance • No opportunity to do robust design Analytical robustness • Inputs • Nominal or worst-case values of dimensions, materials, loads, etc. • Process • Finite element analysis • Numerical model • Multivariate techniques • Regression equation
• Outputs • Point estimates (performance, life) • Safety factor or design margin • Challenges in analytical robust design • Many CAE models have limited capability to represent real-world noise; therefore, surrogate data must be used. • Stochastic information on noise factors is assumed known. • Many CAE models are computationally expensive. • A large number of design parameters and a large design space are often considered; therefore, a nonlinear relationship between input and output is common. • Many CAE models focus on error states; therefore, a large-scale multiobjective optimization is often needed. • In early product development, where analytical robustness is applied, design objectives and constraints are still fluid and are likely to change. Understanding process data • Rational subgroup • Center of means • Short- vs. long-term capability • Process stability — a process is considered stable when it consists of only common cause variation. (Note: one cannot design for Six Sigma if the process is unstable.) • In control vs. out of control • General discussion • Discussion of quality loss function (QLF) • Experimental strategies • Parameter design • Tolerance design • ANOVA • Discriminant analysis • Statistical tolerancing — a bundle of tools and methodologies used in DFSS to optimize the design. Typical tools used are • DOE • Latin hypercube sampling — a sampling method for uniform spread of design points in the region without replicates; it can be considered an extension of Latin squares to multiple dimensions • Multivariate adaptive regression spline (MARS) is one of several modern regression tools that can help analysts quickly develop superior predictive models. Suited for linear and logistic regression, MARS automates the model specification search, including variable selection, variable transformation, interaction detection, missing-value handling, and model validation. MARS is a nonparametric modeling tool that is equally adept at developing
simple or highly nonlinear models. MARS rapidly separates effects that are applicable to an entire data set from those that apply only to specific subsets, automatically tracking nonlinear effects with spline basis functions. Models enhanced with MARS-created variables are typically far more accurate than handcrafted models. In essence, MARS is a flexible nonlinear regression method that automates the building of predictive models. It automatically builds a model-free (nonparametric) nonlinear model based on a set of data. This automatic and model-free feature is preferable when knowledge of a possible parametric model is lacking and the construction of parametric nonlinear regressions or differential equations would be difficult or time-consuming. Its general structure is:

$$f(x) = b_0 + \sum_i b_i B_i(x_i) + \sum_m b_m B_m(x_i, x_j) + \sum_n b_n B_n(x_i, x_j, x_k) + \cdots$$
where b0 = constant term; bi, bm, bn = coefficients; Bi(xi) = a single-variable basis function; Bm(xi, xj) = a two-variable interaction function; and Bn(xi, xj, xk) = a three-variable interaction function. • Gaussian stochastic Kriging (GSK) is the modeling approach that treats bias (systematic departure of the response surface from a linear model) as the realization of a stationary random function. In DFSS, the GSK model is used to improve lack of fit [ε(x)] and can be represented as ε(x) = β + z(x), where β is an average error and z(x) is a realization of a random Gaussian process, Z(x), with zero mean and covariance between two sets of inputs xi and xj given by

$$\mathrm{Cov}(z(x_i), z(x_j)) = \sigma^2 R(x_i, x_j), \qquad R(x_i, x_j) = \exp\left\{-\sum_{k=1}^{d} \theta_k (x_{ik} - x_{jk})^p\right\}$$
• Adaptive sequential experiments — used to build a surrogate model sequentially with a desired accuracy. This helps avoid oversampling and is very beneficial in DFSS studies involving expensive computer experiments. However, when validating a surrogate MARS model with a new sample, there are cases in which models generated sequentially were worse than models generated with completely new DOE
matrices. If this happens, it may be necessary to perform a trade-off study between the cost of additional CAE runs and the accuracy of a surrogate model. • Model validation — used to ensure the required accuracy of a surrogate model compared with the corresponding CAE model. Reliability and robustness assessment • Concepts • Limit state — a demarcation in design variable space separating acceptable from unacceptable designs. Mathematically, the limit state of a system function, f(x), may be expressed as L = f(x) = f(x1, x2, …, xn), where x1, x2, …, xn are design variables. • Most probable point (MPP) — defined in standard normal variable space. It is the point on the limit state with the maximum joint probability density; it also has the minimal distance from the limit state to the origin. Probability assessment using the MPP: let u be a vector of n random variables in standard normal space. The MPP is the solution to the following optimization problem: minimize u · u subject to g(u) = L. The distance β from the origin to the MPP may be used to assess the probability of g(u) > L:

$$\Pr[g(u) > L] = \Phi(-\beta), \qquad \beta^2 = u_1^{*2} + u_2^{*2} + \cdots + u_n^{*2}$$

where Φ(·) is the cumulative standard normal distribution. • When a variable does not follow the standard normal distribution, it is transformed into a standard normal variable by quantile–quantile relationships. In mathematical terms, F(xi) = Φ(ui), where F(xi) is the cumulative probability distribution function and Φ(ui) is the cumulative standard normal distribution function. Once the work is done in u space, xi can be found for the corresponding ui by inverse transformation using the quantile–quantile relationship, xi = Fi⁻¹[Φ(ui)]. • % contribution — the overall impact of the variation of a variable on the variation of the functional performance. Mathematically, it is defined as

$$\%\,\mathrm{contribution}(x_i) = \frac{\left(\frac{\partial y}{\partial x_i}\right)^2 \sigma_{x_i}^2}{\left(\frac{\partial y}{\partial x_1}\right)^2 \sigma_{x_1}^2 + \left(\frac{\partial y}{\partial x_2}\right)^2 \sigma_{x_2}^2 + \cdots + \left(\frac{\partial y}{\partial x_n}\right)^2 \sigma_{x_n}^2} \times 100\%$$

and it consists of two key elements: 1. (Deterministic) sensitivity of the performance function to the variable, ∂y/∂xi. It is the slope of the function at the point of interest with respect to variable xi. 2. Variability of the variable, σ²xi.
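As a concrete illustration of the % contribution definition above, the following is a minimal sketch that estimates the sensitivities by finite differences and prints each variable’s share of the transmitted variation. The performance function, nominal point, and standard deviations are hypothetical; numpy is assumed available.

```python
# Sketch: % contribution of each design variable to the variation of y.
# The function, nominal point, and standard deviations are hypothetical.
# Assumes numpy is installed.
import numpy as np

def f(x):
    """Hypothetical performance function y = f(x1, x2, x3)."""
    return x[0] ** 2 + 3.0 * x[1] + 0.5 * x[0] * x[2]

x0 = np.array([1.0, 2.0, 4.0])        # nominal design (point of interest)
sigma = np.array([0.05, 0.10, 0.20])  # assumed std devs of x1, x2, x3

# Deterministic sensitivities dy/dxi by central finite differences.
h = 1e-6
grad = np.array([(f(x0 + h * e) - f(x0 - h * e)) / (2 * h)
                 for e in np.eye(3)])

terms = (grad * sigma) ** 2           # (dy/dxi)^2 * sigma_xi^2
pct = 100.0 * terms / terms.sum()     # % contribution of each variable
for i, p in enumerate(pct, start=1):
    print(f"x{i}: {p:.1f}% contribution")
```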
Methods to quantify probability of failure • Root sum square (RSS) — the first-order approximation method, also called the first-order reliability method (FORM). The statistical distribution of the response is approximated by a normal distribution. The mean value of the response is the function of the mean values of the variables, and the standard deviation of the response is the RSS of the products of the partial derivatives and the standard deviations of the variables:

$$\mu_y \cong f(\mu_{x_1}, \mu_{x_2}, \ldots, \mu_{x_n}); \qquad \sigma_y = \sqrt{\left(\frac{\partial f}{\partial x_1}\right)^2 \sigma_{x_1}^2 + \left(\frac{\partial f}{\partial x_2}\right)^2 \sigma_{x_2}^2 + \cdots + \left(\frac{\partial f}{\partial x_n}\right)^2 \sigma_{x_n}^2}$$

$$P(y \le L) = \Phi\!\left(\frac{L - \mu_y}{\sigma_y}\right)$$

It is important to note that, from the properties of the normal distribution, RSS provides an exact solution when all the design variables are independent and normally distributed and the performance function is linear. For all other cases, RSS provides an approximation. • Successive linear approximation method (SLAM) — a general-purpose algorithm for finding the MPP. Once the MPP is found, probability of failure can be assessed. • Monte Carlo and quasi Monte Carlo • Monte Carlo is a statistical method to quantify the statistical characteristics of a functional performance through a sampling of the statistical characteristics of its variables. • Quasi Monte Carlo is a statistical method that uses quasi-random sequences, which have better uniformity and converge faster than random sequences. (It is much faster than the traditional Monte Carlo approach in integrating high-dimensional problems.) (A minimal sketch comparing the RSS estimate with a Monte Carlo estimate appears at the end of this subsection.) Functions of the robustness and reliability checklist. We have already mentioned this checklist several times and have noted that it is an organization-dependent item. Because of its importance, let us identify here some of the applications of this checklist: • Identify functional requirements and the associated error states that are a condensed quality history of prioritized issues. • Identify important noise factors associated with error states and assess weak-to-strong linkage to error states. • Identify mapping of noises to the test plan to ensure the system is tested against critical noises and that unresolved error states are indicated by the tests. (Important noise factors include the manufacturing tolerance around some critical xs.)
• Selecting a noise-factor strategy to ensure design is robust to critical noise factors. (This is an engineering step, which could include design change and development work.)
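As referenced above, the following is a minimal sketch comparing the RSS/FORM first-order estimate of P(y ≤ L) with a crude Monte Carlo estimate for a hypothetical performance function with independent normal inputs. All means, standard deviations, the limit L, and the function itself are hypothetical illustrations; numpy and scipy are assumed available.

```python
# Sketch: RSS/FORM first-order estimate of P(y <= L) vs. a Monte Carlo
# estimate, for a hypothetical performance function with independent
# normal inputs. Assumes numpy and scipy are installed.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

mu = np.array([10.0, 5.0])     # assumed means of x1, x2
sigma = np.array([0.5, 0.4])   # assumed standard deviations
L = 7.0                        # hypothetical limit on the response

def g(x1, x2):
    """Hypothetical, mildly nonlinear performance function."""
    return x1 - 0.3 * x2 ** 1.5

# --- RSS / first-order approximation ---
h = 1e-6
dg = np.array([
    (g(mu[0] + h, mu[1]) - g(mu[0] - h, mu[1])) / (2 * h),
    (g(mu[0], mu[1] + h) - g(mu[0], mu[1] - h)) / (2 * h),
])
mu_y = g(*mu)
sigma_y = np.sqrt(np.sum((dg * sigma) ** 2))
p_rss = norm.cdf((L - mu_y) / sigma_y)

# --- Monte Carlo ---
n = 200_000
x = rng.normal(mu, sigma, size=(n, 2))
p_mc = np.mean(g(x[:, 0], x[:, 1]) <= L)

print(f"RSS estimate:         P(y <= L) = {p_rss:.4f}")
print(f"Monte Carlo estimate: P(y <= L) = {p_mc:.4f}")
```

Because g is nonlinear, the two estimates will differ slightly, illustrating the point made above that RSS is exact only for linear functions of independent normal variables.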
DESIGN FOR PRODUCIBILITY
In Volume 6, we discussed the issue of producibility. The main goal was to make a product or service insensitive to noise. We recommended that some appropriate tools may be DOE, parameter design, statistical tolerancing, FMEA, GDT, DFM/DFA, TRIZ, mistake proofing, control plan, etc. We also suggested that when reviewing program assumptions, inconsistencies between product and process should be identified so that the “gap” may be identified. (Both DFMEA and PFMEA are excellent tools to identify opportunities for closing the “gap.” For DFSS purposes, we are interested in comparing the required manufacturing capability for critical xs with process capability.) In selecting the strategy from a DFSS perspective, the objective of DFSS in producibility is to come up with a strategy that leads to improving customer satisfaction at the lowest cost and that supports design and process verification timing. The following are some recommendations for achieving this goal: • Consider what is possible and most cost-effective at the particular time of application. • Adopt what has been done (tools and/or results) from “things learned” that have increased satisfaction and productivity, or use a robustness mentality to make product and process insensitive to noise. • Use finite element analysis or other analytic tools instead of hardware experimentation (it is too late and too expensive if you wait for hardware). • Learn to limit manufacturing-induced product noises when they result in more rapid product deterioration. • Learn to always describe and communicate changes and actions adopted to appropriate personnel.
DELIVERABLES/CHECKLIST FOR THE OPTIMIZE PHASE
• Add necessary engineers to cross-functional team.
• Generate new concept, if needed.
• Complete P diagram.
• Quantify process capability (µ and σ) for CTQ xs.
• Complete derivation of the transfer function that includes CTQ xs.
• Identify target nominal values for xs.
• Design for ease of assembly and manufacture; resolve related concerns.
• Change process capability to achieve appropriate σx (business decision).
• Update control plan.
• Update validation planning in the reliability and robustness checklist.
• Review known design impacts on customer satisfaction.
• Conduct peer review.
• Obtain project champion approval. • Document information learned.
THE VERIFY PHASE
In the verify phase, there are three areas of concern: 1) design verification — demonstrating through a series of well-planned tests whether or not a product as designed functions in the manner in which it is required to function; 2) production validation — demonstrating through a series of well-planned tests whether or not a product as produced performs in the manner the designer intended; and 3) process validation — demonstrating through a series of well-planned tests whether or not a manufacturing process is capable of producing product to its engineering specification. In each of these areas, however, the objective is to validate the design by demonstrating that it meets the functional and reliability requirements set in the D, C, and O phases. Typical tools used in the verify phase are:
• Assessment (validation and verification scorecards)
• Design verification plan and report
• Robustness/reliability
• Process capability
• Gauge R & R
• Control plan
• FMEA
• QFD
• P diagram

Steps in the verification process:
1. Update or develop test plan details.
2. Conduct test.
3. Analyze/assess results.
4. Make sure design passes requirements.
5. Develop failure resolution plan.
6. Record action on design verification.
7. Complete the design verification.
STEP 1: UPDATE/DEVELOP TEST PLAN DETAILS
Objective: to develop a program-specific design-verification plan to demonstrate that all customer functional and reliability requirements have been met. The risk of not doing either the updating or the development is settling for uncertain reliability, unverified functions, timing issues, and unaccountability in resources and content, as well as a program-specific test plan that may not capture customer requirements.
• Inputs • Functional/reliability targets • FMEA • Design verification from the define phase • Any remaining inputs from define phase • Test matrix correlating failure modes/noises/requirements • Customer usage information • Gaps from define phase (if any) • Responsibility for providing input • Cross-functional team • Design and release engineer • Supplier • Technical specialist • Product integrator/systems engineer • Update/develop test plan details: how? • Update design verification plan (DVP) details while developing design as needed for lower-level (e.g., components, subsystems) verification. • Update key life testing (KLT) and customer-correlated tests based on functional target information from the define phase. • Complete standard DVP form. • Define sample sizes and test duration required to demonstrate reliability targets and test metrics (i.e., MMBF, MTTF, etc.). • Identify type of test (e.g., test-to-failure, bogey, degradation). The preferred method is test-to-failure. • Implement reliability growth plan. • Identify the needed test facilities/resources/timing. • Review/update CAE models and associated noise-simulation strategies. • Identify plan for use of analytical/CAE testing (wherever possible, plan for computer simulation/CAE). • Who is responsible for updating and developing the test plan? • Cross-functional team • Design and release engineer • Supplier • Technical specialist • Product integrator/systems engineer • When? • Within the timing milestone system for the organization • Outputs • Signed-off program-specific DVP • Identification of facilities and resources • Total program product test plan • Program-specific KLT and customer-correlated tests. Key life testing is an accelerated testing method that focuses on the major stresses and principal noise factors that drive loss of function. Specifically, KLT is
used to a) verify design, b) compare designs, c) benchmark the competition and/or best practice, and d) confirm and predict reliability. The test should also uncover failure mechanisms associated with real-world usage over the design life. • Updated or adapted CAE models
STEP 2: CONDUCT TEST Objective: to conduct the tests specified in the DVP. The consequence of not doing this is the risk that reliability may not be verified appropriately. Inputs • DVP • Total program project plan • Components/subsystems/systems/complete project to be tested • Test procedures as appropriate and applicable to the specific project • Responsibility for providing input • Project teams • Suppliers • Conduct test: how? • Request facility time by initiating test. • Provide written description to test technician, technical specialist, supplier, and engineering management of the test purpose/focus/failure modes. • Obtain hardware for testing (components/subsystems/systems/vehicle). • Ensure rigorous adherence to test procedure. • Use software to manage scheduling and timing; review the progress of KLT and customer-correlated tests being conducted at test facilities. • Execute analytical/CAE testing plan. • Capture all relevant test observations and information (ensure that the technician has total project awareness of customer failure perceptions). Component/subsystem testing most likely occurs simultaneously with design development; wherever possible, use computer simulation/CAE instead of physical experiments. • Who? • Project teams • Suppliers • When? • Within the timing milestone system of the organization. • Outputs • Tests completed as specified in DVP. • All failures reported. • Incident report/test results (should include hard and soft failures). • Parts for analysis. • Test data loaded into reliability database.
STEP 3: ANALYZE/ASSESS RESULTS Objective: to determine whether test results demonstrate that requirements and targets are met at a specified reliability level. The risk of not doing this is that, from a quantitative perspective, both reliability and performance will remain unknown. Inputs • Test results • Suspect parts • Incidence reports • Responsibility for providing input • Program team • Suppliers • Technical specialists • Analyze/assess results: how? • Perform statistical/graphical analysis of function vs. requirement. • Perform failure analysis on all failed or suspect parts. • Assess failure risk. • Who? • Program team • Suppliers • Subject matter experts • When? • Within the timing milestone system for the organization • Outputs • Failure resolution sufficient to meet requirements and targets • Reliability growth chart • Determination of whether the system or part meets the functional and reliability requirements. The focus here is to improve robustness. Five options are generally available: 1) change the technology to be robust; 2) make the current design assumptions insensitive to the noises through parameter design, by beefing up the design (upgrading design specifications), or through redundancy; 3) reduce or remove the noise factors (this may need additional DOE); 4) insert a compensation device; and 5) send the error state somewhere else where it will create less harm (disguise the effect). • Reliability demonstrated quantitatively. • Weibull plot • Degradation curves
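Since the outputs above call for demonstrating reliability quantitatively via a Weibull plot, the following is a minimal sketch that fits a two-parameter Weibull distribution to test-to-failure data and reports the B10 life. The failure times are hypothetical, censored (suspended) units are ignored for simplicity, and scipy is assumed available.

```python
# Sketch: two-parameter Weibull fit to hypothetical test-to-failure data.
# Censored (suspended) units are ignored here for simplicity; real DVP
# analyses should account for them. Assumes scipy is installed.
from scipy.stats import weibull_min

# Hypothetical failure times in hours from a test-to-failure program.
failures = [410, 530, 660, 790, 880, 1020, 1180, 1350, 1600, 1930]

# Fix the location parameter to zero for the classic 2-parameter Weibull.
shape, loc, scale = weibull_min.fit(failures, floc=0)
print(f"shape (beta) = {shape:.2f}, characteristic life (eta) = {scale:.0f} h")

# B10 life: the time by which 10% of the population is expected to fail.
b10 = weibull_min.ppf(0.10, shape, loc=0, scale=scale)
print(f"B10 life = {b10:.0f} h")
```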
STEP 4: DOES THE DESIGN PASS REQUIREMENTS?
Objective: to identify which systems need to go through failure resolution and which move on to sign-off. The risk of not doing this step is that failing to segregate targets “met” vs. “not met” leaves the remaining verification task ambiguous.
• Inputs • Results from previous step • Functional targets from define stage • Project specific specifications • Reliability growth chart • Weibull plot • Degradation curves • Responsibility for providing input • Program team • Suppliers • Technical specialist • Does the design pass requirements? How? • Review test results from previous step to verify the design and determine whether the system or part meets the functional and reliability requirements. • Ensure that failure resolution meets requirements and targets. • Who? • Program team • Suppliers • Subject matter experts • When? • Within the timing milestone system for the organization • Output • Decision whether or not design meets the requirements and targets • Update lessons learned
STEP 5: DEVELOP FAILURE RESOLUTION PLAN
Objective: to develop a failure resolution plan, including corrective actions and a modified verification plan. The risk of not doing this step is that appropriate corrective actions will not be identified or verified. • Inputs • Test failures • Failed parts • Incident reports • Diagnostic information • Repair information • Responsibility for providing input • Project teams • Suppliers • Develop failure resolution plan: how? (Make sure to address root cause, not symptom. Also, do not get into the habit of retesting the same part to try to get acceptance.)
• Failure investigation–fault isolation–failure analysis–root cause determination–define corrective action–define corrective action verification requirements (retest requirements) • Initial assessment of corrective action • Update FRACAS or other appropriate database • Who? • Project teams • Suppliers • When? • Within the timing milestone system for the organization • During test program (DVP) as required • Outputs • Corrective action plan approved, with verification based on root cause analysis • FRACAS incident and concern databases updated with corrective action status
STEP 6: RECORD ACTIONS ON DESIGN VERIFICATION PLAN AND REPORT (DVP&R)
Objective: to document failures and corrective actions in the DVP&R. The risk of not doing this step is that the current program risk status may not be clear, and mistakes may be repeated if lessons learned are not captured. • Inputs • Failure summary report • Corrective action plan • Responsibility for providing input • Project teams • Suppliers • Record actions on DVP&R: how? • Capture failure summary and reverification information in DVP&R. • List corrective actions in DVP&R. • Identify ongoing product test status, i.e., pass/fail/failure mode. • Update program-specific FMEA. • Who? • Project teams • Suppliers • When? • Within the timing milestone system for the organization • Outputs • Program-specific FMEA updated • Test results and corrective actions shown on standard DVP&R form • Go to Steps 2 and 3 for conducting revised test and analyzing data
STEP 7: COMPLETE DVP&R Objective: to complete and sign off on the DVP&R. Not doing this step may jeopardize the project, since the risk status may be unclear. Individuals who may sign off are: project integrator, system engineer, product engineer, or functional manager. • Inputs • Test information • Results • Timing • Sample size • Remarks • Statistical test confidence • Responsibility for providing input • Supplier • Complete DVP&R: how? • Assess how well functional and reliability targets are being met. • Make risk assessment. • Sign off on design verification plan and report. • Who? • Supplier • Project team • When? • Within the timing milestone system for the organization • Output • Completed DVP&R signed off by all appropriate and applicable personnel • Documented test results that meet targets and requirements, and risk assessment with corrective action recommendation • Lessons learned feedback Deliverables/checklist for the verify phase • Add necessary engineers to cross-functional team. • Complete reliability and robustness checklist. • Compare verification results to phase D, C, O results and resolve. • Enter actual test data in the scorecard. • Capture lessons learned in system design specification and component design specification. • Conduct peer review. • Obtain project champion approval. • Update scorecard over time with performance results from the field and actual process capability data. • Document information learned.
SELECTED BIBLIOGRAPHY
Frantzeskakis, L. and Powell, W. B. (1990). A successive linear approximation procedure for stochastic, dynamic vehicle allocation problems. Transportation Science. Vol. 24. No. 1. Pp. 40–57.
Friedman, J. H. (1991). Multivariate adaptive regression splines (with discussion). Annals of Statistics. 19. Pp. 1–141.
Qi, L. and Chen, X. (1995). A globally convergent successive approximation method for severely nonsmooth equations. SIAM Journal of Control Optimization. Vol. 33. Pp. 402–418.
Roosen, C. B. and Hastie, T. J. (1994). Automatic smoothing spline projection pursuit. Journal of Computational and Graphical Statistics. Vol. 3. No. 3. Pp. 235–248.
14
Six Sigma Certification
Teachers have it. Accountants have it. Doctors and attorneys have it. What is it? Certification. Some call it certification, others call it passing the Bar or Board exams, etc. Whatever it is called, the essence is the same — somebody somewhere has decided that homogeneity in the profession of choice would be “guaranteed” by certification. The certification process follows a general course: a) very prescriptive educational coursework, b) a standardized test of knowledge of the given profession, and, in some cases, c) a test with an application portion attached as part of the certification. (The application form of the test is generally given as a paper test simulation of a case problem.) The result, of course, is competency. But competency may be defined in many ways depending on who is doing the measuring. For example, Skinner (1954, p. 94) defined the path to competency as follows: “The whole process of becoming competent in any field must be divided into a very large number of very small steps, and reinforcement must be contingent upon the accomplishment of each step. The solution to the problem of creating a complex repertoire of behavior also solves the problem of maintaining the behavior in strength … By making each successive step as small as possible, the frequency of reinforcement can be raised to a maximum, while the possibly aversive consequences of being wrong are reduced to a minimum.” This is a very interesting point of view. The reader will notice that it promotes a theory of motivation as well as one of cognitive development. Motivation is external and based on positive reinforcement of many small steps. Cognitive development, on the other hand, is based on what is learned, how it is learned, and whether or not the evaluation of learning is consistent and uniformly administered (Madaus, West, Harmon, Lomax, and Viator, 1992; Whitford and Jones, 2000). In the field of quality, a recent movement has been to create a Six Sigma certification, or what we call “discipline envy.” Some of you may be familiar with this concept. It involves an individual or group (formal or informal) wishing to model itself on, borrow from, or appropriate the terms, vocabulary, and authority figures of another discipline. To be sure, anthropomorphism has its uses. And for me, “discipline envy,” which is very much a part of life in my own academic discipline of statistics and instructional technology, the world of quality, etc., is a kind of fantasizing about an “ego ideal” elsewhere. Discipline envy is pervasive in human history. One hundred years ago, music was without question the discipline of disciplines, the ego ideal for the arts. Before that it was architecture. However, both Schelling and Goethe compared architecture to “frozen music,” and the power of this comparison became the impetus for Walter Pater’s proclamation that “all art constantly aspires toward the condition of music.” But that is not the end. We humans
have an unusual idiosyncrasy, which the Germans call Anders-streben, or a desire to substitute one purity with another and so on in the name of a superior result. The problem with that notion, however, is that sometimes we substitute purity for impurity in the name of cleaning the specific contamination. What does all this have to do with Six Sigma certification? Plenty! We have come to believe that as quality professionals we must be certified to be professionals because others do it. We have come to believe that certification will give us a confirmation of respect from others. We have come to believe that through certification we will demonstrate excellence in our profession — or perhaps a superior discipline altogether. We have come to believe that, without certification, we suffer some kind of “loss” and do not measure up to a more perfect and more whole discipline. You see, we have come to recognize envy as an aspect of specific idealization. The problem with that idealization, however, is that it has gone wrong. For example, the subject envies the object for some possession or quality, and the more ideal the object, the more intense the envy. At this point, envy becomes a signal of thwarted identification. When that happens, we are in deep trouble because the only solution at this point is some kind of interdisciplinarity. And so the question arises, Is it possible to have a hierarchical discipline? And if so, is it possible to identify that discipline through a rigorous certification? We believe not. Why? I am reminded of the now classic question that the Annals of Improbable Research — the only humor magazine with eight Nobel Prize laureates on its board — posed some time ago. The question was: Which field of science has the smartest people? An astronomer provided the following answer. “Speaking of ranking the various disciplines — politicians think they are economists. Economists think they are social scientists. Social scientists think they are psychologists. Psychologists think they are biologists. Biologists think they are organic chemists. Organic chemists think they are physical chemists. Physical chemists think they are physicists. Physicists think they are mathematicians. Mathematicians think they are God. God … ummm … so happens that God is an astronomer.” Let us consider the word interdisciplinary a little more closely. Interdisciplinary is a word as much misunderstood these days as multiculturalism, and for similar reasons. Both words seem to their detractors to break down boundaries and hierarchies, to level differences rather than discriminate among them, to invite an absence of rigor, and to threaten, to somehow erase or destroy the root term (culture and discipline). As Roland Barthes wrote in 1972, “Interdisciplinary studies do not merely confront already constituted disciplines (none of which, as a matter of fact, consents to leave off). In order to do interdisciplinary work, it is not enough to take a subject (a theme) and to arrange two or three sciences around it. Interdisciplinary study consists in creating a new object, which belongs to no one.” And now let us look at the Six Sigma methodology. What is it comprised of? A combination of many sciences, business disciplines, engineering, and several others. To expect an individual to be an expert through certification is an absurdity for several reasons.
The issue of expert. The general consensus regarding a shogun (master black belt) and a black belt is that they should be experts. No one as yet has been able to define expert. To know the methodology steps does not qualify somebody as an expert. An expert, at least to our thinking, is an individual who knows not only the methodology but all the tools and their applications to the trade as well. We believe that it is impossible for any one individual to know everything well. Therefore, by definition one cannot really be an expert. This is precisely the reason why Six Sigma depends on cross-functional teams to do the actual work. The issue of statistics. We expect shoguns and black belts to know statistics to solve their problems. We teach them several out of hundreds of statistical tests in the hope that they will use the right one. We keep forgetting that several empirical studies have shown that many nonstatisticians do not fully understand the statistical tests that they employ (Nelson, Rosenthal and Rosnow, 1986; Oakes, 1986; Rosenthal and Gaito, 1963; Zuckerman, Hodgins, Zuckerman, and Rosenthal, 1993). It is absurd to think that 3 days of instruction in ANOVA or reliability/robustness or any other subject can make somebody an expert in that area. The issue of interdisciplinary topics. Every black belt training course has several hours on leadership, project management, financial concepts, general quality, and many other topics. To think that at the end of the 4-week training period you have produced an expert at solving all sorts of problems in any organization would be ludicrous. These are disciplines in their own right, and it takes years to understand and implement them appropriately. The issue of the project. A black belt must select a project that is worth over $250,000 to the organization. That is fine. However, the problem is the definition of the problem, the arrival at that figure, and the subjectiveness of the analytical process. We must be honest here. In several instances, the amount, the process, and the problems solved are not really problems in customer satisfaction but rather a political agenda for management. As such, the shogun and the champion confer black belt certifications as they see fit. There is no standardization.
THE NEED FOR CERTIFICATION
A great deal of attention has recently been devoted to having some kind of certification, especially in Six Sigma methodology. Many articles have appeared in various professional publications (Quality Progress, Quality Digest, etc.) as well as in general publications such as the New York Times Magazine, USA Today, and others. The typical line taken by writers advocating the adoption of certification is that it would give coherence and direction to instruction and lead to higher levels of professional achievement. However, surrogate data suggest that certification does not improve competence. Advocates seem to assume that the adoption of such certification is needed if the profession, and especially Six Sigma methodology, is to remain “pure” and
continue providing a basic competence for the individual and, above all, consistency for all Six Sigma training providers. Various arguments are presented to support the case for certification. What is conspicuously missing from such articles is evidence to support such arguments. The only evidence presented in support of certification is the high level of performance and professionalism in other areas such as law, medicine, accounting, etc. Here, we dare to present evidence bearing on the issue of certification and then offer general comments about Six Sigma. Fortunately, such evidence has recently become available through a surrogate database. It is hoped that the evidence presented here will help people in thinking through this very important issue. The first line of evidence comes from the recently completed Third International Mathematics and Science Study (TIMSS) conducted by the International Association for the Evaluation of Educational Achievement (IEA). This was a 41-nation study of student performance in mathematics and science, along with collateral information obtained from students, teachers, and school principals via questionnaires. It is unquestionably the largest educational research study ever undertaken, with over a half million students tested in the participating countries. Initial results from the study have been reported in two volumes (Beaton et al., 1996a, 1996b). Using information from the study, it was possible to establish a relationship between having a nationally centralized curriculum (certified curriculum or syllabus) and student performance. Furthermore, many of these countries tested have a national test to determine whether students have learned the material in the national curriculum or syllabus. The amazing results are in. And what a surprise! If having national standards (certifications) were a truly potent force in influencing student achievement, one would expect that students in countries having a national certification or syllabus would perform significantly higher than students from countries that do not have such national standards. This is hardly the case. While most of the participating countries do have a national certification of sorts or syllabus, there is virtually no correlation between student performance and a national certification or syllabus. In fact, some countries without national certification or syllabus had higher scores than some countries with certification. Therefore, the absence of a relationship between a national certification or syllabus and performance in specific subjects (in this study, mathematics and science were tested) raises serious questions as to whether a national certification or syllabus would lead to higher student achievement. Additional information from the TIMSS study is also highly informative about the distribution of achievement in the 41 participating countries. For example, if one excludes the five highest- and five lowest-scoring countries, the achievement of the middle 50% of the students in each country is almost wholly overlapping. Another way of viewing these results is to examine the standard deviations of the national distributions of achievements. In science, for example, the median standard deviation of countries with a national certification is 88.5, whereas the median standard deviation for countries without a national curriculum is 88.
Because these results are virtually identical, one is led to question how effective national curricula are in bringing students to a particular level of achievement.
The most striking finding of the study is that, despite what countries might say about their certifications (curricula), there is a high level of consistency of student performance regardless of which country’s ratings are used to obtain a percentage correct score. In fact, few countries change their position in the standings. This can be considered further evidence of the appropriateness of the tests for all countries. This is not surprising because the international tests were developed on the basis of a careful curriculum analysis of all participating countries. While it is clear that there is no relationship between a national certification (curriculum) and student performance, explaining this lack of relationship is another matter. Grant, Peterson, and Shojgreen-Downer (1996) showed that even though teachers may teach differently using prior knowledge, understanding the content in a different way and teaching in a completely different way, the scores of the students increased so much that these differences are, to say the least, striking. They suggest that even highly prescribed and detailed standards, accompanied by centrally developed tests, are no guarantee that teachers will teach the same content. A different line of evidence bearing on the issue of national standards comes from studies conducted by the National Assessment of Educational Progress (NAEP). These studies periodically test student performance in various school subjects. The 1994 NAEP report, Trends in Academic Progress (Mullis et al., 1994), presents information on student achievement in mathematics and science from 1970 to 1992. Again, increases have been noticed. However, the increase in performance has occurred despite the absence of national standards. This increase in performance is not easy to explain. (Many educators attribute the rise to increased attention and commitment to the improvement of mathematics and science curricula and instruction. The impetus for these has come from state departments of education and local school district efforts as well as national nongovernmental efforts at system reform such as Success for All, Equity 2000, Accelerated Schools, and the Coalition for Essential Schools, among others [Slavin, 1997].)
GENERAL COMMENTS REGARDING CERTIFICATION AS IT RELATES TO SIX SIGMA Competency is the goal, and no one would deny that competency is an admirable goal in any discipline, most of all in Six Sigma methodology. Certification, if done correctly, can perhaps provide standardization of knowledge, but that is about all it can do. Competency is very difficult to measure. False security about knowledge. In the last 3 to 4 years, we have pretended that black belts and shoguns (master black belts) are the new supermen of organizations. We expect them to deliver “fixes” of problems that are causing discomfort at many levels, both internal and external to the organization. We emphasize statistical thinking and statistical analysis with a sprinkling of interdisciplinary themes and hope that these items will resolve the concerns of current organizations. We have forgotten the lesson that scientists have taught us over the years — that it is a mistake to bury one’s
head in the statistical sand. In general, oil tankers arrive safely at their destinations, but the Exxon Valdez did not; in general, the world gets enough rainfall, but for a whole decade, the African Sahel did not. And in general, problem solvers do solve problems, but sometimes they do not, or, even worse, they provide the wrong solution. Wrong emphasis on learning process. We have become creatures that believe that with a specific affirmation, we can indeed reach perfection or specific competence. We have fallen victim to the Wizard of Oz story. We want somebody to give us a diploma to affirm that we have brains. We need an outsider to give us a clock so that we can believe that we indeed have a heart. We want somebody to give us a medal so that we can believe that we do indeed have courage. However, the problem with the Wizard of Oz is that all three of the characters seeking something from the wizard — the scarecrow, the tin man, and the lion — already had the qualities they were seeking. But because of external affirmation, they suddenly became “real.” In our case, we have come to believe that certification provides a specific piece of paper that gives us boasting rights about our knowledge in a particular area. Certification is a form of affirmation, a false hope. Why? Because it does not address the real issue of knowledge and competence — and in that order. We continue to generate notions that are patently absurd, and many of those silly ideas produce not disbelief or rejection but repeated attempts to show that they might be worthy of attention. Rather than focusing on the basic causes of competency, we look at effects. The irony is that the entire methodology of Six Sigma is based on “root cause,” while certification is based on the “effect.” Rather than emphasizing the appropriate education and training in the school environment, we “cram” knowledge in a very limited time frame. We graduate from universities people with statistics or engineering degrees and then expect them to pass a certification exam. Something is wrong here. Either the university did not properly educate the students, or the students accepted the diplomas under false pretenses. If the university did its job, further certification is unnecessary. On the other hand, if it did not educate them properly, then it should not grant students diplomas. Political ploy. We have already discussed reputation and prestige, but it seems to us that certification as it stands today is nothing more than an issue of prestige. The issue of reputation is not even addressed. That makes it a political issue, and in the long term, it will affect Six Sigma in a negative way. Lack of absolute scales will be the demise of the current certification process. Subjectivity of the certification process. How can certification be discussed without first addressing the body of knowledge (BOK)? We know of at least four sources that define BOK quite differently: 1) the Six Sigma Academy, 2) the American Society for Quality, 3) the International Quality Federation, and 4) the one we have proposed both in volume 1 and in this volume. All of them have common points; however, not all of them agree
on all issues. So the question becomes, in which BOK are you certified? Is one better than the others? Who certifies the certifiers? How can we believe that the certification means anything at all, since the certifiers themselves are self-proclaimed? The certifiers have forgotten that only other specialists can properly evaluate a specialist. In the case of Six Sigma, some organizations arbitrarily got together when they saw a financial bonanza, and they went ahead with tests that are not even based on a common knowledge base. What do they measure? Do they imply that different organizations have different criteria and different base knowledge for certification? Is one worth more than the other? If so, by how much? [It is amazing that “discipline envy” has clouded our thinking to the point where some individual organizations have different certifications between their own divisions, to the point where they do not recognize each other’s certifications.] Accountability. By way of comparison, allow me to be provocative. McMurtrie (2001) reported that from 1997 to 2000, out of 2,896 accredited colleges, only five lost their accreditation, 43 were given probation, and 11 showed cause (yet none of them closed its doors). In the field of quality, how many companies do you know of that have been issued a revocation of their ISO 9000 or QS-9000 certification? How many certified Lead Auditors, Auditors, Quality Professionals, or Professional Engineers have had their certification revoked? My point is, what are the ramifications of foul play within certification? What if there is no certification? The answer, unfortunately, is nothing. Absolutely nothing! There is no accountability because, as we have already mentioned, two very important issues remain unresolved. There can be no accountability as long as there is 1) no uniform BOK and 2) no standardized training. Accountability implies standardization of process, knowledge, delivery, and maintenance. In the current state of Six Sigma certification, none of these exists.
CONCLUSION
Certification, we believe, at this time is totally inappropriate, even though we recognize that many organizations continue issuing certificates for black belts, and at least two not-for-profit organizations provide certifications for Six Sigma. It is unfortunate that quality organizations are trying to make money and become politically correct through a certification scheme rather than focusing on making the methodology better and more robust and at least agreeing on a BOK. Six Sigma, whether one wants to admit it or not, is a combination of many disciplines that together can work to improve an organization and its processes, products, and services. Certification means certification in statistics, reliability, project management, organizational development, etc. The list is endless. (Remember Skinner’s definition.) When all those certifications are completed, then and only then, perhaps, can we begin to think of Six Sigma certification. Obviously, that is an impossible and unrealistic goal. What we can hope for is that quality societies and individual organizations will push for more appropriate and applicable education and training as well as a consistent base of knowledge.
On a personal level, I have become aware of a key intellectual trick or error in much, though not all, current theory that works to get participants to renounce their faith in their personal capacities, especially their own intuitions and their creativity. The trick or mistake could be called the summoning of the near enemy, and it works as follows. People often become confused on moral issues because of a particular problem inherent in human dealings, i.e., that any virtue has a bad cousin, a failing that closely resembles the virtue and that can be mistaken for it — what in Tibetan Buddhism is called the near enemy. For example, the near enemy of equanimity is apathy; the near enemy of quality is functionality; and the near enemy of Six Sigma is definitional justification through defects of opportunity. What I would like to see is a profession that did a better job of teaching everyone how to distinguish for him- or herself between scholarship that moves things forward (truly improves the process and customer satisfaction) and scholarship that just shakes things up (a revolutionary program that changes the direction of our misunderstandings about customer satisfaction and organizational profitability – a true 100-fold improvement program). On a more subjective level, I would like to see greater emphasis placed on the ascesis, or self-transformation, that produces integrity, honesty, flexibility, and moral independence so that we are indeed free to tell the emperor that he is not wearing any clothes. Currently, we are in a state of limbo as a profession; because we are afraid to speak, our self-transformation has become a loss of self. A shift in this direction may happen in the next few years if for no other reason than that integrity, honesty, flexibility, and moral independence are qualities whose value comes into high relief during a time of “high stakes” and “great need.” I believe that the pressures of the current certification frenzy will converge with the pressures of an already latent dissent within the profession to produce some change, though whether the transformation will be more than superficial I cannot predict. I hope that part of the change will involve a revived conversation about what it is to be Six Sigma certified. These comments point to the craft of mindful listening that has been practiced all along in our profession (in training sessions, conferences, publications and so on) and in our intimate encounters with books alongside the more highly rewarded craft of argumentation that for the moment has gotten us into a trough. A first step in rethinking what we are about as a profession may be to stop focusing on outsmarting one another and to find ways of fostering the more intuitive and receptive dimensions of our communal and intellectual lives. Where this might lead methodologically I don’t know. But as a best-case scenario our profession may in time develop a culture that, without dispensing with traditional scholarship or critical theory, somehow uses interdisciplinary methodology as the basis for a complex exploration of the art and science of listening that is one of the creative forces in the world, a force, moreover, that our species would do well to cultivate if we want to have a chance of surviving. That may sound idealistic; part of me says that as long as this profession is invested in hierarchy, which it always will be, there will always exist a built-in spiritual dullness that is the opposite of listening. But most of us who decide to
SL316XCh14Frame Page 463 Monday, September 30, 2002 8:06 PM
Six Sigma Certification
463
become quality professionals specializing in Six Sigma methodology, however invested we are in institutional security and prestige, also do this kind of work because we had an experience early in our lives of being taught how to let go of whatever we thought was the whole of reality and to take the measure of a larger moral and human universe. Maybe Six Sigma certification is at one of those rare junctures when the costs of closing ourselves off within a world as defined by the disciplinary norms of the moment will come to seem unacceptably high. The debate over certification is likely to continue for some time. Arguments will be advanced for and against the adoption of certification. This is undoubtedly healthy because it involves a major policy issue in the quality profession. One hopes that this debate will be based, at least in part, on evidence rather than argumentation. If it is, one can be reasonably confident that it will lead to the adoption of sound policies. For right now, however, certification does not make sense.
Epilog

Let me close this volume, and the Six Sigma and Beyond series, with some analogies from the movie world. These movies remind us of strength, mighty power, indestructibility, intuition, and so many other attributes that allegedly were built into them. In the end, though, they all turned out to be a little less than perfect. So it is with the Six Sigma methodology (both the DMAIC and DFSS models). The intent is great. The expected results are phenomenal. Yet, if we do not plan appropriately, design for customer satisfaction, and test for validation, we are going to be just as shortsighted as the movies. Overconfident for nothing!

Titanic: The boat that would never sink. We see the director who says he's king of the world. Market translation: there are never enough lifeboats/jackets when the inevitable iceberg is struck by a crazed captain. Lots of companies will drown or swamp lifeboats that could save a few. Supplier strategy: get in a lifeboat by hook or crook and be prepared to keep others out. User strategy: make sure your picks are in a lifeboat or be prepared to have belly-up technology. In the case of the Six Sigma methodology, make sure that it is for you. If not, move on. Not everything has to be Six Sigma!

The Producers: Two con artists with the intent to produce a flop attempt to swindle little old ladies but wind up producing a winner with no way to pay off all the investors. Everyone goes to jail. Market translation: actually, it's the opposite of this, with the intent to produce a winner but actually producing a loser, with nobody going to jail and all monies squandered. Not nearly as funny as the show. Supplier strategy: most of the little old ladies have wised up, so you will have to show them an actual business plan. User/venture capitalist strategy: ask for references; do your due diligence. In the case of the Six Sigma methodology, make sure the ROI is present for your organization. Just because Six Sigma is the buzz phrase, that does not mean it is also the silver bullet that will cure ALL your organization's ailments. Be careful!

Field of Dreams: A voice tells a corn farmer, "If you build it, he will come." The farmer builds a baseball diamond, gets a really cool supernatural game of baseball going, and almost loses the farm. Market translation: if you build it, they probably will not come. There needs to be a real market or need (like reconciling with your father). Only a few visionaries ever make it. Supplier strategy: do your homework and don't believe all the consultants and market analysts you pay to tell you what you want to hear. User strategy: ditto. In the case of the Six Sigma methodology, make sure your xs are correlated with the real Ys. Otherwise, you are investing in wishful thinking. Remember: focus on functionality and not specifications. Satisfaction is a derivative of this functionality, not of specifications.

The Nightmare Before Christmas: The Pumpkin King thinks that Christmas is better than Halloween and attempts a hostile takeover. He learns that a space you know is better than one you don't. Market translation: the real winners in any space have deep domain expertise and really understand/know their customers. Supplier strategy: know what you know and focus on a market/space where you can execute better than anyone else. User strategy: don't accept gifts/products from scary people. In the case of the Six Sigma methodology, do not accept the methodology just because other people and organizations "swear" by it. In your organization, traditional tools and methodologies may work "miracles" just as well as the Six Sigma approach. Investigate thoroughly before you decide.

Good Will Hunting: A brilliant but angry young man addresses deep-seated insecurities while learning to love and becoming domesticated. A kind-hearted but strong-willed mentor brings him along. Market translation: lots of smart kids spent a lot of dumb money because no one was there to show them how to do it right. Shame on the top executives and boards of directors. Supplier strategy: promise you'll never do it again and say that you really do learn a lot after flushing your first $100 million. User strategy: what did you REALLY expect? In the case of the Six Sigma methodology, do not forget that even the GREAT companies that allegedly started and perfected this particular methodology have had their ups and downs. Do not imitate anyone just because they are bigger and talk a good PR game. Imitate the doers, the ones that have a track record for success, and not the ones that create "fads" and "buzz words" to confuse and twist the real truth of improvement. How can anyone believe that Six Sigma pays off when companies still have record recalls, record dissatisfaction in customer surveys, and market share loss? Thus, make sure your hunting is an intelligent one, and do not be swayed by the sirens and trumpets of the consultants and executives who want your money and offer excuses for failures.

And, one of my all-time favorite movies/lessons, Modern Times: A classic film illustrating the power of technology and the potential for abuse and likely dehumanization. No words, but the pictures and Chaplin say it all. Market translation: technology is powerful and potentially destructive. Really understand what you are getting yourselves into when you sign up for it. Supplier strategy: understand the scope of your efforts and sell appropriately. User strategy: be careful, it's dangerous out there. In the case of the Six Sigma methodology, be very mindful of the technology associated with it. There is an aura of magic suggesting that if one uses a particular software package and a particular analysis with a three-dimensional graph attached to it, the output must be the correct answer; that if a result was generated with advanced mathematical techniques/modeling using the Greek alphabet at least twice over, it must be the correct answer. That may not be so. Remember that sometimes the more we change, the more we stay the same. We may change names, but the essence is the same. Do not be fooled by jargon, a slick sales force, and/or consultants who try to convince you that you are missing something. You may not be missing anything at all. In fact, you may be the best!
Glossary

In the pursuit of implementing Six Sigma and identifying, selecting, and working with a project within an organization, the following summary of words should be familiar in the vocabulary of the Six Sigma professional. We have tried to include words that follow pretty much the nine areas of project management (PM), which are:

1. PM framework
2. Scope
3. Quality
4. Time
5. Cost
6. Risk
7. Human resources
8. Communications
9. Contract–procurement management
The reader will notice that some words have special meaning when used in the context of PM, for example, change, commitment, variance, work plan, and many others.

α (alpha) risk — the maximum probability of saying a process or lot is unacceptable when, in fact, it is acceptable.

Acceptance sampling — sampling inspection in which decisions concerning acceptance or rejection of materials or services are made; also includes procedures used in determining the acceptability of the items in question based on the true value of the quantity being measured.

Accountability/responsibility matrix — a structure that relates the project organization structure to the work breakdown structure; assures that each element of the project scope of work is assigned to a responsible individual.

Accuracy (of measurement) — difference between the average result of a measurement with a particular instrument and the true value of the quantity being measured.

Acquisition control — a system for acquiring project equipment, material, and services in a uniform and orderly fashion.

Acquisition evaluations — review and analysis of responses to determine a supplier's ability to perform work as requested. This activity may include an evaluation of the supplier's financial resources, ability to comply with technical criteria and delivery schedules, satisfactory record of performance, and eligibility for award.
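To make the α risk and acceptance sampling entries concrete, here is a minimal Python sketch that computes the producer's (alpha) risk for a single-sampling plan via the binomial distribution. The plan parameters (n = 50, c = 2) and the acceptable quality level are hypothetical, chosen only for illustration.

    from math import comb

    def accept_probability(n: int, c: int, p: float) -> float:
        """Probability of accepting a lot: P(defectives found <= c)
        when sampling n items from a lot with true defective rate p."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

    # Hypothetical single-sampling plan: inspect n = 50 items, accept if
    # no more than c = 2 defectives are found.
    n, c = 50, 2
    acceptable_quality = 0.01  # lot quality the producer considers acceptable

    # Alpha risk: probability of rejecting a lot that is actually acceptable.
    alpha = 1 - accept_probability(n, c, acceptable_quality)
    print(f"alpha (producer's) risk at p = {acceptable_quality}: {alpha:.4f}")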
Acquisition methods — the various ways by which goods and services are acquired from suppliers.

Acquisition negotiations — contracting without formal advertising. This method offers flexible procedures, permits bargaining, and provides an opportunity to prospective suppliers to revise their offers before the award.

Acquisition process — the process of acquiring personnel, goods, or services for new or existing work within the general definitions of contracts requiring an offer and acceptance, consideration, lawful subject matter, and competent parties.

Active listening — standard techniques of active listening are to pay close attention to what is said, to ask the other party to spell out carefully and clearly what they mean, and to request that ideas be repeated if there is any ambiguity or uncertainty.

Activity — a task or series of tasks performed over a period of time.

Activity description — any combination of characters that easily identifies an activity to any recipient of the schedule.

Activity duration — the best estimate of the time (hours, days, weeks, months, etc.) necessary for the accomplishment of the work involved in an activity, considering the nature of the work and resources needed for it.

Actual cost of work performed (ACWP) — the direct costs actually incurred and the indirect costs applied in accomplishing the work performed within a given time period. These costs should reconcile with a contractor's incurred cost ledgers that are regularly audited by the client.

Actual finish date — the calendar date work actually ended on an activity. It must be prior to or equal to the data date. The remaining duration of this activity is zero.

Actual start date — the calendar date work actually began on an activity. It must be prior to or equal to the data date.

Addendum — see procurement addendum.

ADM — see arrow diagramming method.

Agency — a legal relationship by which one party is empowered to act on behalf of another party.

Agreement, legal — a legal document that sets out the terms of a contract between two parties.

Alpha risk — see producer's risk.

Alternative analysis — breaking down a complex scope situation for the purpose of generating and evaluating different solutions and approaches.

Alternatives — review of the means available and the impact of trade-offs to attain the objectives.

Amount at stake — the extent of adverse consequences that could occur on the project.

Analysis — the study and examination of something complex and the separation of it into its more simple components. Analysis typically includes discovering not only the parts of the thing being studied but also how they fit together and why they are arranged in this particular way. A study of schedule variances for cause, impact, corrective action, and results.
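The ACWP entry above, together with the budgeted cost of work performed (BCWP) and budgeted cost of work scheduled (BCWS) defined later in this glossary, supports a simple earned-value status calculation. A minimal sketch using the standard earned-value relationships; the dollar figures are invented for illustration.

    # Hypothetical reporting-period figures, in dollars.
    bcws = 120_000  # budgeted cost of work scheduled (planned value)
    bcwp = 105_000  # budgeted cost of work performed (earned value)
    acwp = 130_000  # actual cost of work performed

    schedule_variance = bcwp - bcws  # negative => behind schedule
    cost_variance = bcwp - acwp      # negative => over budget

    spi = bcwp / bcws  # schedule performance index
    cpi = bcwp / acwp  # cost performance index

    print(f"SV = {schedule_variance:+,}  CV = {cost_variance:+,}")
    print(f"SPI = {spi:.2f}  CPI = {cpi:.2f}")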
Analysis of means (ANOM) — a statistical procedure for troubleshooting industrial processes and analyzing the results of experimental designs with factors at fixed levels. It provides a graphical display of data. Ellis R. Ott developed the procedure in 1967 because he observed that nonstatisticians had difficulty understanding analysis of variance. Analysis of means is easier for quality practitioners to use because it is an extension of the control chart. In 1973, Edward G. Schilling further extended the concept, enabling analysis of means to be used with normal distributions and attributes data where the normal approximation to the binomial distribution does not apply. This is referred to as "analysis of means for treatment effects."

Analysis of variance (ANOVA) — a basic statistical technique for analyzing experimental data. It subdivides the total variation of a data set into meaningful component parts associated with specific sources of variation in order to test a hypothesis on the parameters of the model or to estimate variance components. There are three models: fixed, random, and mixed.

Analytical thinking — breaking down a problem or situation into discrete parts to understand how each part contributes to the whole.

Apparent low bidder — the contractor who has submitted the lowest compliant bid for all or part of a project as described in a set of bid documents.

Application — an act of putting to use (new techniques); an act of applying techniques.

Appraisal costs — costs incurred to determine the degree of conformance to quality requirements.

Approve — to accept as satisfactory. Approval implies that the thing approved has the endorsement of the approving agency; however, the approval may still require confirmation by somebody else. In management use, the important distinction is between approve and authorize. Persons who approve something are willing to accept it as satisfactory for their purposes, but this decision is not final. Approval may be by several persons. The person who authorizes has final organization authority. This authorization is final approval.

Approved bidders list — a list of contractors that have been pre-qualified for purposes of submitting competitive bids.

Approved changes — changes that have been approved by higher authority.

Arbitration — a formalized system for dealing with grievances and administering corrective justice as part of collective bargaining agreements.

Archive tape — a computer tape that contains historical project information.

Area of project application — the environment in which a project takes place, with its own particular nomenclature and accepted practices, e.g., facilities, products, or systems development projects, to name a few.

Arrow diagramming method — the graphic presentation of an activity. The tail of the arrow represents the start of the activity. The head of the arrow represents the finish. Unless a time scale is used, the length of the arrow stem has no relation to the duration of the activity. Length and direction of the arrow are usually a matter of convenience and clarity.
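As a quick illustration of the ANOVA entry above, here is a minimal one-way (fixed-effects) sketch in Python. The three sample groups are invented data, and the scipy library is assumed to be available.

    from scipy import stats

    # Hypothetical measurements from three machines (fixed factor levels).
    machine_a = [9.8, 10.1, 10.0, 9.9, 10.2]
    machine_b = [10.4, 10.6, 10.3, 10.5, 10.7]
    machine_c = [9.7, 9.9, 9.8, 10.0, 9.6]

    # One-way ANOVA: does at least one machine mean differ from the others?
    f_stat, p_value = stats.f_oneway(machine_a, machine_b, machine_c)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
    # A small p-value (e.g., < 0.05) suggests the machine means are not all equal.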
As-built schedule — the final project schedule, which depicts actual completion (finish) dates, actual durations, and start dates.

As-performed schedule — the final project schedule, which depicts actual completion (finish) dates, actual durations, and start dates.

Assurance — examination with the intent to verify.

Attribute data — go/no-go information. The control charts based on attribute data include fraction defective chart, number of affected units chart, count chart, count-per-unit chart, quality score chart, and demerit chart.

Audits — a planned and documented activity performed by qualified personnel to determine by investigation, examination, or evaluation of objective evidence the adequacy of and compliance with established procedures, or the applicable documents, and the effectiveness of implementation.

Authorize — to give final approval. A person who can authorize something is vested with authority to give final endorsement, which requires no further approval.

Authorized work — an effort that has been approved by higher authority and may or may not be defined or finalized.

Availability — the ability of a product to be in a state to perform its designated function under stated conditions at a given time. Availability can be expressed by the ratio:

uptime / (uptime + downtime)

Average — see mean.

Average outgoing quality limit (AOQL) — the maximum average outgoing quality over all possible levels of incoming quality for a given acceptance sampling plan and disposal specification.

Backward pass — calculation of late finish times (dates) for all uncompleted network activities. Determined by working backwards through each activity.

Balanced scorecard — translates an organization's mission and strategy into a comprehensive set of performance measures to provide a basis for strategic measurement and management, utilizing four balanced views: financial, customers, internal business processes, and learning and growth.

Baseline — management plan and/or scope document fixed at a specific point in time in the project life cycle.

Baseline concept — management's project management plan for a project fixed prior to commencement.

Benefits administration — the formal system by which an organization manages its nonfinancial commitment to its employees; includes such benefits as vacation, leave time, and retirement.

Best and final contract offer — final offer by the supplier to perform the work after incorporating negotiated and agreed changes in the procurement documents.

β (beta) risk — the maximum probability of saying a process or lot is acceptable when, in fact, it should be rejected. See also consumer's risk.
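The availability ratio above is easy to compute from maintenance records. A minimal sketch with hypothetical monthly hours:

    # Hypothetical monthly equipment log, in hours.
    uptime = 680.0   # hours the equipment was operable
    downtime = 40.0  # hours lost to repairs and maintenance

    availability = uptime / (uptime + downtime)
    print(f"Availability = {availability:.3f}")  # 0.944, i.e., about 94.4%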
Bayes theorem — a theorem of statistics relating conditional probabilities.

Bell-shaped curve — a curve or distribution showing a central peak and tapering off smoothly and symmetrically to "tails" on either side. A normal (Gaussian) curve is an example.

Bias (in measurement) — a characteristic of measurement that refers to a systematic difference. That systematic difference is an error that leads to a difference between the average result of a population of measurements and the true, accepted value of the quantity being measured.

Bid — an offer to perform the work described in a set of bid documents at a specified cost.

Bid cost considerations — consideration of suppliers' approach and reasonableness of cost, cost realism, and forecast of economic factors affecting cost and cost risks used in the cost proposal.

Bid documents — a set of documents issued for purposes of soliciting bids in the course of the acquisition process.

Bid evaluation — review and analysis of responses to determine a supplier's ability to perform the work as requested. This activity may include an evaluation of the supplier's financial resources, ability to comply with technical criteria and delivery schedules, satisfactory record of performance, and eligibility for award.

Bid list — a list of suppliers invited to submit bids for goods and services as specified.

Bid protests — the process by which an unsuccessful supplier may seek remedy for unjust contract awards.

Bid response — communications, positive or negative, from prospective suppliers in response to the invitation to bid.

Bid time consideration — evaluation of suppliers' offers with regard to dates identified for completion of phases of the work or total work.

Bid technical consideration — suppliers' technical competency, understanding of technical requirements, and capability to produce technically acceptable goods or services. Generally, this evaluation ranks highest among all other evaluations.

Bimodal distribution — a frequency distribution that has two peaks. Usually an indication of samples from two processes incorrectly analyzed as a single process.

Binomial distribution (probability distribution) — given that a trial can have only two possible outcomes (yes/no, pass/fail, heads/tails), of which one outcome has probability p and the other probability q (p + q = 1), the probability that the outcome represented by p occurs r times in n trials is given by the binomial distribution.

Breakdown — identification of the smallest activities or tasks in a job according to a defined procedure.

Break-even chart — a graphic representation of the relation between total value earned and total costs for various levels of productivity.

Break-even point — the productivity point at which value earned equals total cost.
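To make the binomial distribution entry concrete: the probability of exactly r occurrences in n trials is P(r) = C(n, r) p^r q^(n−r). A minimal Python sketch, using invented numbers:

    from math import comb

    def binomial_pmf(n: int, r: int, p: float) -> float:
        """P(exactly r occurrences in n trials), each with probability p."""
        q = 1.0 - p
        return comb(n, r) * p**r * q**(n - r)

    # Hypothetical: probability of exactly 2 defectives in a sample of 20
    # when the process runs at 5% defective.
    print(f"{binomial_pmf(20, 2, 0.05):.4f}")  # about 0.1887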
Budget — when unqualified, usually refers to an estimate of funds planned to cover a fiscal period. (See project budget.) Also, a planned allocation of resources.

Budget costs — the translation of the estimate into manhour rates, quantity units of production, etc., so that these budget costs can be compared to actual costs and variances developed to highlight performance, so that those responsible may be alerted to implement corrective action, if necessary.

Budget estimate (–10, +25%) — a budget estimate is prepared from flow sheets, layouts, and equipment details. This is often required for the owner's budget system. These estimates are established based on quantitative information and are a mixture of firm and unit prices for labor, material, and equipment. In addition, they establish the funds required and are used for obtaining approval for the project. Other terms used to identify a budget estimate include appropriation, control, design, etc.

Budgeted cost of work performed (BCWP) — the sum of the budgets for completed portions of in-process work, plus the appropriate portion of the budgets for level of effort and apportioned effort for the relevant time period. BCWP is commonly referred to as "earned value."

Budgeted cost of work scheduled (BCWS) — the sum of the budgets for work scheduled to be accomplished (including in-process work), plus the appropriate portion of the budgets for level of effort and apportioned effort for the relevant time period.

Bulk material — material bought in lots; generally, no specific item is distinguishable from any other in the lot. These items can be purchased from a standard catalog description and are bought in quantity for distribution as required.

Calendar — the calendar used in developing a project plan. This calendar identifies project workdays and can be altered so weekends, holidays, weather days, etc., are not included.

Calendar range — the span of the calendar from the calendar start date through the last calendar unit performed. The calendar start date is unit number one.

Calendar start date — the first calendar unit of the working calendar.

Calendar unit — the smallest unit of the calendar produced. This unit is generally in hours, days, or weeks; it can also be grouped in shifts.

Camp-Meidell conditions — for frequency distributions and histograms: a distribution is said to meet Camp-Meidell conditions if its mean and mode are equal and the frequency declines continuously on either side of the mode.

Career path planning — the process of integrating an individual's career planning and development into an organization's personnel plans with the objective of satisfying both the organization's requirements and the individual's career goals.

Cash flow analysis — the activity of establishing cash flow (dollars in and out of the project) by month and the accumulated total cash flow for the project for the measurement of actual vs. budget costs. This is necessary to allow for funding of the project at the lowest carrying charges and is a method of measuring project progress.

Cell — a layout of workstations and/or various machines for different operations (usually in a U-shape) in which multitasking operators proceed, with a part, from machine to machine, to perform a series of sequential steps to produce a whole product or major subassembly.

Central tendency — the propensity of data collected on a process to concentrate around a value situated midway between the lowest and highest values.

Central limit theorem — if samples of a population with size n are drawn, and the values of X-bar are calculated for each sample group, and the distribution of X-bar is found, the distribution's shape is found to approach a normal distribution for sufficiently large n. This theorem allows one to use the assumption of a normal distribution when dealing with X-bar. "Sufficiently large" depends on the population's distribution and what range of X-bar is being considered; for practical purposes, the easiest approach may be to take a number of samples of a desired size and see if their means are normally distributed. If not, the sample size should be increased.

Central tendency — a measure of the point about which a group of values is clustered; some measures of central tendency are mean, mode, and median.

Chaku-chaku — (Japanese) meaning "load-load"; a cell layout in which a part is taken from one machine and loaded into the next.

Characteristic — a dimension or parameter of a part that can be measured and monitored for control and capability.

Change — an increase or decrease in any of the project characteristics.

Change in scope — a change in objectives, work plan, or schedule that results in a material difference from the terms of an approval to proceed previously granted by higher authority. Under certain conditions (normally so stated in the approval instrument), change in resource application may constitute a change in scope.

Change order/purchase order amendment — written order directing the contractor to make changes according to the provisions of the contract documents.

Changed conditions (contract) — a change in the contract environment, physical or otherwise, compared to that contemplated at the time of bid.

Checklist — a tool used to ensure that all important steps or actions in an operation have been taken. Checklists contain items that are relevant to an issue or situation. Checklists are often confused with check sheets and data sheets (see also check sheet).

Check sheet — a sheet for the recording of data on a process or its product. The check sheet is custom designed for a particular use to remind the user to record each piece of information required for a particular study and to reduce the likelihood of errors in recording data. Furthermore, a good check sheet will aid the researcher in interpreting the results. The data from the check sheet can be typed into a computer for analysis when the data collection is complete. The check sheet is one of the seven tools of quality. Check sheets are often confused with data sheets and checklists (see also checklist).

Chi-square (χ²) — as used for goodness of fit: a measure of how well a set of data fits a proposed distribution, such as the normal distribution. The data is placed into classes, and the observed frequency (O) is compared to the expected frequency (E) for each class of the proposed distribution. The result for each class is added to obtain a chi-square value. This is compared to a critical chi-square value from a standard table for a given α (alpha) risk and degrees of freedom. If the calculated value is smaller than the critical value, we can conclude that the data follows the proposed distribution at the chosen level of significance.

Chronic condition — long-standing adverse condition that requires resolution by changing the status quo. For example, actions such as revising an unrealistic manufacturing process or addressing customer defections can change the status quo and remedy the situation.

Client quality services — the process of creating a two-way feedback system to define expectations, opportunities, and anticipated needs.

Close-out (phase) — see project close-out.

Cluster — for control charts: a group of points with similar properties. Usually an indication of short-duration, assignable causes.

Code of accounts — once the project has been divided into the WBS work packages, a code or numbering system is assigned to the cost data for cost monitoring, control, reports, tax class separations, and forecasting purposes.

Commissioning — activities performed for the purpose of substantiating the capability of the project to function as designed.

Commitment — an agreement to consign or reserve the necessary resources to fulfill a requirement until expenditure occurs. A commitment is an event.

Common causes — those sources of variability in a process that are truly random, i.e., inherent in the process itself.

Common causes of variation — causes that are inherent in any process all the time. A process that has only common causes of variation is said to be stable, predictable, or in control. Also called "chance causes."

Communicating with groups — the means by which the project manager conducts meetings, presentations, negotiations, and other activities necessary to convey the project's needs and concerns to the project team and other groups.

Communicating with individuals — involves all activities by which the project manager transfers information or ideas to individuals working on the project.
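A minimal sketch of the chi-square goodness-of-fit test described in the entry above, using the scipy library and invented frequencies (a die checked for fairness):

    from scipy import stats

    # Hypothetical: 120 rolls of a die, observed counts per face.
    observed = [18, 22, 19, 21, 24, 16]
    expected = [20] * 6  # a fair die predicts 120/6 = 20 per face

    chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
    # A large p-value means we cannot reject the proposed (uniform) distribution.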
Communications management (framework) — the proper organization and control of information transmitted by whatever means to satisfy the needs of the project. It includes the processes of transmitting, filtering, receiving, and interpreting or understanding information using appropriate skills according to the application in the project environment. It is at once the master and the servant of a project in that it provides the means for interaction between the many disciplines, functions, and activities, both internal and external to the project, that together result in the successful completion of that project; conducting or supervising the exchange of information.

Compensation and evaluation — the measurement of an individual's performance and the financial payment provided to employees as a reward for their performance and as a motivator for future performance.

Competence — a person's ability to learn and perform a particular activity. Competence generally consists of skill, knowledge, experience, and attitude components.

Competitive analysis — the gathering of intelligence relative to competitors in order to identify opportunities or potential threats to current and future strategy.

Completed activity — an activity with an actual finish date and no remaining duration.

Computer cost applications — the computer-assisted techniques to handle analysis and store the volume of data accumulated during the project life that are essential to the cost management function. The areas associated with cost management are: cost estimating database, computerized estimating, management reports, economic analysis, analysis of risk and contingency, progress measurements, productivity analysis and control, risk management, commitment accounting, and integrated project management information systems.

Concept — an imaginative arrangement of a set of ideas.

Concept (phase) — the first of four sequential phases in the generic project life cycle. Also known as the idea, economics, feasibility, or prefeasibility phase.

Conceptual development — a process of choosing or documenting the best approach to achieve project objectives.

Conceptual project planning — the process of developing broad-scope project documentation from which the technical requirements, estimates, schedules, control procedures, and effective project management will all flow.

Concerns — number of defects (nonconformities) found on a group of samples in question.

Confidence interval — range within which a parameter of a population (e.g., mean, standard deviation, etc.) may be expected to fall, on the basis of measurement, with some specified confidence level.

Confidence level — the probability set at the beginning of a hypothesis test that the variable will fall within the confidence interval. A confidence level of 0.95 is commonly used.
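As an illustration of the confidence interval and confidence level entries above, here is a minimal Python sketch that computes a 95% t-based interval for a mean. The sample of part diameters is invented, and scipy is assumed to be available.

    from math import sqrt
    from statistics import mean, stdev
    from scipy import stats

    # Hypothetical sample of part diameters (mm).
    data = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 10.04]

    n = len(data)
    x_bar = mean(data)
    s = stdev(data)                        # sample standard deviation (n - 1)
    t_crit = stats.t.ppf(0.975, df=n - 1)  # two-sided 95% confidence level

    half_width = t_crit * s / sqrt(n)
    print(f"95% CI for the mean: {x_bar - half_width:.3f} to {x_bar + half_width:.3f}")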
Confidence limits — the upper and lower boundaries of a confidence interval.

Configuration (baseline) control — a system of procedures that monitors emerging project scope against the scope baseline. Requires documentation and management approval on any change to the baseline.

Conflict management — the process by which the project manager uses appropriate managerial techniques to deal with the inevitable disagreements, both technical and personal in nature, that develop among those working toward project accomplishment.

Conflict resolution — to seek a solution to a problem, five methods in particular have proven useful: confrontation, compromise, smoothing, forcing, and withdrawal.

Confrontation — where the two parties work together toward a solution of the problem.

Compromise — both sides agree such that each wins or loses a few points.

Smoothing — differences between the two groups are played down, and the strong points of agreement are given the most attention.

Forcing — the project manager uses his power to direct the solution. This is a type of win-lose agreement where one side gets its way and the other does not.

Withdrawal — one or both sides withdraw from the conflict.

Conformance (of product) — adherence to some standard of the product's properties. The term is often used in attribute studies of product quality, i.e., a given unit of the product is either in conformance to the standard or it is not.

Constant-cause system — a system or process in which the variations are random and are constant in time.

Constraints — applicable restrictions that will affect the scope. Any factor that affects when an activity can be scheduled. (See restraint.)

Consumer's risk — the maximum probability of saying a process or lot is acceptable when, in fact, it should be rejected.

Contingencies — specific provisions for unforeseeable elements of cost within the defined project scope; particularly important where previous experience relating estimates and actual costs has shown that unforeseeable events that will increase costs are likely to occur. If an allowance for escalation is included in the contingency, it should be a separate item, determined to fit expected escalation conditions for the project.

Contingency allowances — specific provisions for unforeseen elements of cost within the defined project scope; particularly important where previous experience relating estimates and actual costs has shown that unforeseen events that will increase costs are likely to occur. If an allowance for escalation is included in the contingency, it should be a separate item, determined to fit expected escalation conditions of the project.
Contingency plan — a plan that identifies key assumptions beyond the project manager's control and their probability of occurrence. The plan identifies alternative strategies for achieving project success.

Contingency planning — the establishment of management plans to be invoked in the event of specified risk events. Examples include the provision and prudent management of a contingency allowance in the budget, the preparation of alternative schedule activity sequences or "work-arounds," emergency responses to reduce delays, and the evaluation of liabilities in the event of complete project shutdown.

Continuous data — data for a continuous variable. The resolution of the value is only dependent on the measurement system used.

Continuous variable — a variable that can assume any of a range of values; an example would be the measured size of a part.

Continuous probability distribution — a graph or formula representing the probability of a particular numeric value of continuous (variable) data based on a particular type of process that produces the data.

Contract — a binding agreement to acquire goods or services in support of a project.

Contract administration — monitoring and control of performance, reviewing progress, making payments, recommending modifications, and approving a contractor's actions to ensure compliance with contractual terms during contract execution.

Contract award — the final outcome of the acquisition process, in which the contract is awarded to one prospective supplier through acceptance of a final offer, generally either by issuing a purchase order or by signing a legally binding contract formalizing the terms under which the goods or services are to be supplied.

Contract award ranking — qualitative and/or quantitative determinations of prospective suppliers' bids, tenders, proposals, or quotations relative to each other, measured against a common base.

Contract closeout — activities that assure that the contractor has fulfilled all contractual obligations and has released all claims and liens in connection with work performed.

Contract dates — the dates specified in the contract that impact the project plan.

Contract dispute — disagreement between the parties. This may occur during contract execution or at completion and may include misinterpretation of technical requirements and any terms and conditions, or be due to changes not anticipated at the time of contract award.

Contract documents — the set of documents that form the contract.

Contract financial control — exercise of control over contract costs.

Contract guarantee — a legally enforceable assurance of performance of a contract by a contractor.

Contract negotiation — method of procurement where a contract results from a bid that may be changed through bargaining.
Contractor — a person or organization that undertakes responsibility for the performance of a contract.

Contractor claims release — certificate to release and hold harmless from future claims by the contractor.

Contract order modifications — changes in a contract during its execution to incorporate new requirements or to handle contingencies that develop after contract placement. Changes may include price adjustments or changes in scope.

Contractor's performance evaluation — a comprehensive review of a contractor's technical and cost performance and work delivery schedules.

Contract performance control — control of work during contract execution.

Contract preaward meetings — meetings with prospective suppliers before final award determination to aid ranking and finalize terms of agreement between parties.

Contract–procurement management — the function through which resources (including people, plant, equipment, and materials) are acquired for the project (usually through some form of formal contract) in order to produce the end product. It includes the processes of establishing strategy, instituting information systems, identifying sources, selection, conducting proposal or tender invitation and award, and administering the resulting contract.

Contract risk — the potential for and consideration of risk in procurement actions. Generally the forces of supply and demand determine who should have the maximum risk of contract performance, but the objective is to place on the supplier the maximum performance risk while maintaining an incentive for efficient performance. In a fixed price contract, the supplier accepts a higher risk than in a cost type contract, in which the supplier's risk is lowest.

Contract risk analysis — analysis of the consequences and probabilities that certain undesirable events will occur and their impact on attaining contract and procurement objectives.

Contract types — the various forms of contracts by which goods or services can be acquired. See cost plus fixed fee, cost plus incentive fee, cost plus percentage of cost, firm fixed price, fixed price plus incentive fee, and unit price contracts.

Control — the exercise of corrective action as necessary to yield a required outcome consequent upon monitoring performance.

Control chart — a basic tool that consists of a chart with upper and lower control limits on which values of some statistical measure for a series of samples or subgroups are plotted. It frequently shows a central line to help detect a trend of plotted values toward either control limit. It is used to monitor and analyze variation from a process to see whether the process is in statistical control.

Control group — an experimental group which is not given the treatment under study. The experimental group that is given the treatment is compared to the control group to ensure any changes are due to the treatment applied.
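As a companion to the control chart entry above, here is a minimal X-bar chart sketch in Python. It uses the common A2 factor for subgroups of size 5 (A2 = 0.577, a standard table constant); the subgroup data are invented.

    from statistics import mean

    # Hypothetical subgroups of 5 measurements each.
    subgroups = [
        [10.1, 10.0, 9.9, 10.2, 10.0],
        [10.3, 10.1, 10.0, 10.2, 10.1],
        [9.8, 10.0, 10.1, 9.9, 10.0],
        [10.0, 10.2, 10.1, 10.0, 9.9],
    ]

    x_bars = [mean(sg) for sg in subgroups]
    ranges = [max(sg) - min(sg) for sg in subgroups]

    x_bar_bar = mean(x_bars)  # centerline
    r_bar = mean(ranges)
    A2 = 0.577                # table constant for subgroup size n = 5

    ucl = x_bar_bar + A2 * r_bar  # upper control limit ("voice of the process")
    lcl = x_bar_bar - A2 * r_bar  # lower control limit
    print(f"CL = {x_bar_bar:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")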
Control limits — the limits within which the product of a process is expected (or required) to remain. If the process leaves the limits, it is said to be out of control. This is a signal that action should be taken to identify the cause and eliminate it, if possible. Note: control limits are not the same as tolerance limits. Control limits always indicate the "voice of the process," and they are always calculated.

Control system — a mechanism that reacts to the current project status in order to ensure accomplishment of project objectives.

Corporate business life cycle — a life cycle that encompasses phases of policy planning and identification of needs before a project life cycle, as well as product-in-service and disposal after the project life cycle.

Corporate project strategy — the overall direction set by a corporation of which the project is a part and the relationship of specific procurement actions to these corporate directions.

Corrective action — (cost management) the development of changes in plan and approach to improve the performance of the project; (communications management) measures taken to rectify conditions adverse to specified quality and, where necessary, to preclude repetition.

Cost — cash value of project activity.

Cost applications — the processes of applying cost data to other techniques that have not been described in the other processes.

Cost budgeting — the process of establishing budgets, standards, and a monitoring system by which the investment costs of a project can be measured and managed, that is, the establishment of the control estimate. It is vital to be aware of problems before the fact so that timely corrective action can be taken.

Cost controls — the processes of gathering, accumulating, analyzing, reporting, and managing the costs on an ongoing basis. Includes project procedures, project cost changes, monitoring actual versus budget, variance analysis, integrated cost/schedule reporting, progress analysis, and corrective action.

Cost effective — better value for money, or the best performance for the least cost.

Cost estimating — the process of assembling and predicting the costs of a project. It encompasses the economic evaluation, project investment cost, and predicting or forecasting of future trends and costs.

Cost forecasting — the activity of predicting future trends and costs within the project duration. These activities are normally marketing-oriented. However, such items as sales volume, price, and operating cost can affect the project profitability analysis. Items that affect the cost-management functions are: predicted time/cost, salvage value, etc.

Cost management — the function required to maintain effective financial control of a project through the processes of evaluating, estimating, budgeting, monitoring, analyzing, forecasting, and reporting the cost information.

Cost of poor quality (COPQ) — the costs associated with providing poor-quality products or services.
Cost of quality (COQ) — costs incurred in assuring quality of a product or service. There are four categories of quality costs: internal failure costs (costs associated with defects found before delivery of the product or service), external failure costs (costs associated with defects found during or after product or service delivery), appraisal costs (costs incurred to determine the degree of conformance to quality requirements), and prevention costs (costs incurred to keep failure and appraisal costs to a minimum).

Cost performance measurement baseline — the formulation of budget costs and measurable goals (particularly time and quantities) for the purposes of comparison, analysis, and forecasting of future costs.

Cost plus fixed fee (CPFF) contract — provides reimbursement of allowable costs plus a fixed fee, which is paid proportionately as the contract progresses.

Cost plus incentive fee (CPIF) contract — reimburses the supplier for the cost of delivered performance, plus a predetermined fee as a bonus for superior performance.

Cost plus percentage of cost (CPPC) contract — provides reimbursement of allowable cost of services performed plus an agreed-upon percentage of the estimated cost as profit.

Cost status — see scope reporting.

Counseling/coaching — the process of advising or assisting an individual concerning career plans, work requirements, or the quality of work performed.

Covariance — a measure of whether two variables (x and y) are related (correlated). It is given by the formula:

σxy = [Σ(x – X-bar)(y – Y-bar)] / (n – 1)

where n is the number of elements in the sample.

Crashing — action to decrease the duration of an activity or project by increasing the expenditure of resources.

Criteria — a statement that provides objectives, guidelines, procedures, and standards that are to be used to execute the development, design, and implementation portions of a project.

Critical activity — any activity on a critical path.

Critical path — the series of interdependent activities of a project, connected end to end, that determines the shortest total length of the project. The critical path of a project may change from time to time as activities are completed ahead of or behind schedule.

Critical path method (CPM) — a scheduling technique using precedence diagrams for graphic display of a work plan; the method used to determine the length of a project and to identify the activities that are critical to the completion of the project.

Critical path network (CPN) — a plan for the execution of a project that consists of activities and their logical relationships to one another.
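A minimal check of the covariance formula above in Python, with invented paired data:

    from statistics import mean

    # Hypothetical paired observations: oven temperature (x) and part hardness (y).
    x = [150, 155, 160, 165, 170]
    y = [48.2, 49.1, 50.3, 50.9, 52.0]

    n = len(x)
    x_bar, y_bar = mean(x), mean(y)

    # Sample covariance: sum of (x - X-bar)(y - Y-bar) divided by (n - 1).
    cov_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / (n - 1)
    print(f"covariance = {cov_xy:.3f}")  # positive => x and y tend to rise together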
Critical to quality characteristic (CTQ) — a characteristic of a product, service, or information that is important to the customer. CTQs must be measurable in either a quantitative (e.g., 3.00 mg, etc.) or qualitative manner (correct/incorrect, etc.).

Culture — the integrated pattern of human knowledge, belief, and behavior that depends upon the human capacity for learning and transmitting knowledge to succeeding generations.

Current finish date — the current estimate of the calendar date when an activity will be completed.

Current start date — the current estimate of the calendar date when an activity will begin.

Customer/client personnel — those individuals working for an organization who will assume responsibility for the product produced by a project when the project is complete.

Customer — anyone who receives a product, service, or information from an operation or process. The term is frequently used to describe "external" customers — those who purchase the manufactured products or services that are the basis for the existence of the business. However, also important are "internal" customers, who receive the intermediate or internal products or services from internal "suppliers." (See external customer and internal customer.)

Customer delight — the result achieved when customer requirements are exceeded in ways the customer finds valuable.

Customer loyalty/retention — the result of an organization's plans, processes, practices, and efforts designed to deliver their services or products in ways that create retained and committed customers.

Customer satisfaction — the result of delivering a product or service that meets customer requirements, needs, and expectations.

Customer segmentation — the process of differentiating customers based on one or more dimensions for the purpose of developing a marketing strategy to address specific segments.

Customer service — the activities of dealing with customer questions; also sometimes the department that takes customer orders or provides post-delivery services.

Customer-supplier partnership — a long-term relationship between a buyer and supplier characterized by teamwork and mutual confidence. The supplier is considered an extension of the buyer's organization. The partnership is based on several commitments. The buyer provides long-term contracts and uses fewer suppliers. The supplier implements quality assurance processes so that incoming inspection can be minimized. The supplier also helps the buyer reduce costs and improve product and process designs.

Customer value — the market-perceived quality adjusted for the relative price of a product.

Cycle — a recurring pattern.
Data — facts presented in descriptive, numeric, or graphic form.

Data application — the development of a database of risk factors, both for the current project and as a matter of historic record.

Data collection — the gathering and recording of facts, changes, and forecasts for reporting and future planning.

Data date (DD) — the calendar date that separates actual (historical) data from scheduled data.

Data refinements — reworking or redefinition of logic or data that may have previously been developed in the planning subfunction, as required for proper input of milestones, restraints, priorities, and resources.

Date of acceptance — the date on which a client agrees to the final acceptance of a project. Commitments against the capital authorization cease at this time. This is an event.

DCOV — the Design for Six Sigma model: define, characterize, optimize, and verify.

Decision matrix — a matrix used by teams to evaluate problems or possible solutions. For example, after a matrix is drawn to evaluate possible solutions, the team lists them in the far left vertical column. Next, the team selects criteria to rate the possible solutions, writing them across the top row. Then, each possible solution is rated on a predetermined scale (such as 1 to 5) for each criterion and the rating recorded in the corresponding grid. Finally, the ratings of all the criteria for each possible solution are added to determine its total score. The total score is then used to help decide which solution deserves the most attention.

Defect — any output of an opportunity that does not meet a defined specification; a failure to meet an imposed requirement on a single quality characteristic or a single instance of nonconformance to the specification.

Definitive estimate (–5, +10%) — a definitive estimate is prepared from well-defined data, specifications, drawings, etc. This category covers all estimate ranges from a minimum to a maximum definitive type. These estimates are used for bid proposals, bid evaluations, contract changes, extra work, legal claims, permits, and government approvals. Other terms associated with a definitive estimate include check, lump sum, tender, post contract changes, etc.

Deflection — the act of transferring all or part of a risk to another party, usually by some form of contract.

Deformation — the bending or distorting of an object due to forces applied to it. Deformation can contribute to errors in measurement if the measuring instrument applies enough force.

Degrees of freedom (df) — the number of unconstrained parameters in a statistical determination. For example, in determining X-bar (the mean value of a sample of n measurements), the number of degrees of freedom, df, is n. In determining the standard deviation (STD) of the same population, df = n – 1 because one parameter entering the determination is eliminated. The STD is obtained from a sum of terms based on the individual measurements, which are unconstrained, but the nth measurement must now be considered "constrained" by the requirement that the values add up to make X-bar. An equivalent statement means that one degree of freedom is "factored out" because the STD is mathematically indifferent to the value of X-bar.

Delegating — the process by which authority is distributed from the project manager to an individual working on the project.

Deliverable — a report or product of one or more tasks that satisfies one or more objectives and must be delivered to satisfy contractual requirements.

Deming cycle — see plan-do-check-act cycle.

Demographics — variables among buyers in the consumer market, which include geographic location, age, sex, marital status, family size, social class, education, nationality, occupation, and income.

Dependability — the degree to which a product is operable and capable of performing its required function at any randomly chosen time during its specified operating time, provided that the product is available at the start of that period. (Nonoperation-related influences are not included.) Dependability can be expressed by the ratio:

time available / (time available + time required)

Deployment — (to spread around) used in strategic planning to describe the process of cascading plans throughout an organization.

Design — the creation of the final approach for executing a project's work.

Design contract — a contract for design.

Design control — a system for monitoring a project's scope, schedule, and cost during the project's design stage.

Design of experiments (DOE) — a branch of applied statistics dealing with planning, conducting, analyzing, and interpreting controlled tests to evaluate the factors and noises that control the value of a parameter or group of parameters. There are two approaches to DOE: classical and the Taguchi approach. In both cases, however, the planning of an experiment to minimize the cost of data obtained and maximize the validity range of the results is the primary concern. Requirements for a good experiment include clear treatment comparisons, controlled fixed and experimental variables, and maximum freedom from systematic error. The experiments should adhere to the scientific principles of statistical design and analysis. Each experiment should include three parts: the experimental statement, the design, and the analysis. Examples of experimental designs include single-/multifactor block, factorial, Latin square, and nested arrangements.

Desired quality — the additional features and benefits a customer discovers when using a product or service that lead to increased customer satisfaction. If missing, a customer may become dissatisfied.

Detail schedule — a schedule used to communicate the day-to-day activities to working levels on the project.
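As a small companion to the DOE entry above, the following sketch enumerates a 2³ full-factorial design. The factor names and levels are hypothetical; analysis of the recorded responses would then follow with ANOVA or effect plots.

    from itertools import product

    # Hypothetical factors for a molding process, each at two levels.
    factors = {
        "temperature": [180, 200],  # degrees C
        "pressure": [50, 70],       # bar
        "cool_time": [10, 20],      # seconds
    }

    # A 2^3 full factorial enumerates every combination of levels: 8 runs.
    names = list(factors)
    runs = list(product(*factors.values()))

    for i, levels in enumerate(runs, start=1):
        settings = dict(zip(names, levels))
        print(f"run {i}: {settings}")
    # Each run would be executed (ideally in random order) and its response
    # recorded; effects are then estimated by comparing averages at each level.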
Development (phase) — the second of four sequential phases in the generic project life cycle. Also known as planning phase.
Deviation — a nonconformance or departure of a characteristic from specified product, process, or system requirements.
Diagnostic journey and remedial journey — a two-phase investigation used by teams to solve chronic quality problems. In the first phase, the diagnostic journey, the team moves from the symptom of a problem to its cause. In the second phase, the remedial journey, the team moves from the cause to a remedy.
Dimensions of quality — the different ways in which quality may be viewed; for example, meaning of quality, characteristics of quality, drivers of quality, etc.
Direct project costs — the costs directly attributable to a project, including all personnel, goods, and/or services together with all their associated costs, but not including indirect project costs, such as any overhead and office costs incurred in support of the project.
Discrimination — the requirements imposed on the organization and the procedures implemented by the organization to assure fairness in hiring and promotion practices.
Discrete variable — a variable that assumes only integer values; for example, the number of people in a room is a discrete variable.
Discrete probability distribution — term used to signify that the measured process variable takes on a finite or limited number of values; no other possible values exist.
Discussion — dialogue explaining implications and impacts on objectives. The elaboration and description of facts, findings, and alternatives.
Dispersion (of a statistical sample) — the tendency of the values of elements in a sample to differ from each other. Dispersion is commonly expressed in terms of the range of the sample (difference between the lowest and highest values) or by the standard deviation.
Dispersion analysis diagram — a cause and effect diagram for analysis of the various contributions to variability of a process or product. The main factors contributing to the process are first listed, then the specific causes of variability from each factor are enumerated. A systematic study of each cause can then be performed.
Display — a pictorial, verbal, written, tabulated, or graphical means of transmitting findings, results, and conclusions.
Disposition of nonconformity — action taken to deal with an existing nonconformity; an action may include correcting (repairing), reworking, regrading, scrapping, obtaining a concession, or amending a requirement.
Dispute — disagreements not settled by mutual consent that could be decided by litigation or arbitration.
Dissatisfiers — those features or functions that the customer or employee has come to expect and whose absence would result in dissatisfaction.
Distribution — the amount of potential variation in outputs of a process; it is usually described in terms of its shape, average, and standard deviation.
Distribution (of communications) — the dissemination of information for the purpose of communication, approval, or decision-making.
DMAIC — methodology used in the classical Six Sigma approach: define, measure, analyze, improve, control.
Document control — a system for controlling and executing project documentation in a uniform and orderly fashion.
Documentation — the collection of reports, user information, and references for distribution and retrieval, displays, back-up information, and records pertaining to the project.
Dodge–Romig sampling plans — plans for acceptance sampling developed by Harold F. Dodge and Harry G. Romig. Four sets of tables were published in 1940: single-sampling lot tolerance tables, double-sampling lot tolerance tables, single-sampling average outgoing quality limit tables, and double-sampling average outgoing quality limit tables.
Defects per unit (DPU) — the number of defects counted, divided by the number of "products" or "characteristics" (units) produced.
Defects per million opportunities (DPMO) — the number of defects counted, divided by the actual number of opportunities to generate that defect, multiplied by one million (see the worked example following this block).
Drivers of quality — include customers, products or services, employee satisfaction, total organizational focus.
Dummy activity — an activity, always of zero duration, that is used to show logical dependency when an activity cannot start before another is complete, but that does not lie on the same path through the network. Normally, these dummy activities are graphically represented as a dashed line headed by an arrow.
Early finish date (EF) — the earliest time an activity may be completed, equal to the early start of the activity plus its remaining duration.
Early start date (ES) — the earliest time any activity may begin as logically constrained by the network for a given data date.
Earned value — a method of reporting project status in terms of both cost and time. It is the budgeted value of work performed regardless of the actual cost incurred.
Economic evaluation — the process of establishing the value of a project in relation to other corporate standards/benchmarks, project profitability, financing, interest rates, and acceptance.
Effective interest — the true annual interest rate, computed from compound interest equations over a one-year period.
Effort — the application of human energy to accomplish an objective.
Eighty-twenty (80–20) rule — a term referring to the Pareto principle, which suggests that most effects come from relatively few causes; that is, 80% of the effects come from 20% of the possible causes.
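As a worked illustration of the DPU and DPMO definitions in this block, a minimal Python sketch with hypothetical counts:

    defects = 31                  # defects counted
    units = 1000                  # units produced
    opportunities_per_unit = 12   # opportunities for a defect in each unit (hypothetical)

    dpu = defects / units
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000

    print(f"DPU = {dpu:.3f}")     # 0.031 defects per unit
    print(f"DPMO = {dpmo:,.0f}")  # about 2,583 defects per million opportunities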
Employee relations — those formal activities and procedures used by an organization to administer and develop its workforce.
Empowerment — a condition whereby employees have the authority to make decisions and take action in their work areas, within stated bounds and without prior approval.
Endorsement — written approval. Endorsement signifies personal understanding and acceptance of the thing endorsed and recommends further endorsement by higher levels of authority, if necessary. Endorsement of commitment by a person invested with appropriate authority signifies authorization. See Approve, Authorize.
End users — external customers who purchase products or services for their own use.
English system — the system of measurement units based on the foot, the pound, and the second.
Entity (or item) — that which can be individually described and considered: a process, product, organization, system, person, or any combination thereof; the totality of characteristics of an entity bears on its ability to satisfy stated and implied needs.
Environment (framework) — the combined internal and external forces, both individual and collective, that assist or restrict the attainment of project objectives. These could be business- or project-related or may be due to political, economic, technological, or regulatory conditions (communications management); the circumstances, objects, or conditions by which one is surrounded.
Environmentally concerned — those individuals who align themselves with the views of various groups concerned with issues of protecting the environment.
Equipment procurement — the acquisition of equipment or material to be incorporated into a project.
Estimate — an evaluation of all the costs of the elements of a project or effort as defined by an agreed-upon scope. See order of magnitude estimate, budget estimate, and definitive estimate.
Estimated cost to complete (ECC) — the remaining costs to be incurred to satisfy the complete scope of a project at a specific data date; the difference between the cost to date and the forecast final cost.
Estimated final cost — see forecast final cost.
Ethics — a code of conduct that is based on moral principles and that tries to balance what is fair for individuals with what is right for society.
Event — an identifiable single point in time on a project, task, or group of tasks.
EVOP (evolutionary operation) — the process of adjusting variables in a process in small increments in search of a more nearly optimal point on the response surface.
Exception reporting — the process of documenting those situations where there are significant deviations from the quality specifications of a project. The assumption is made that the project will be developed within established boundaries of quality. When the process falls outside of those boundaries, a report is made on why this deviation occurred.
Exception reports — documentation that focuses its attention on variations of key control parameters that are critical rather than on those that are progressing as planned.
Execution (phase) — see implementation (phase).
Expectancy theory — a motivational theory that says that what people do is based on what they expect to gain from the activity.
Expected quality — also known as basic quality; the minimum benefit a customer expects to receive from a product or service.
Expenditure — the conversion of resources. An expenditure is an event. Conversions of resources may take more than one form: (1) exchange: conversion of title or ownership (e.g., dollars for materials) or (2) consumption: conversion of a liquid resource to a less recoverable state, e.g., expenditures of time, human resources, or dollars to produce something of value; or the incorporation of inventoried materials into fixed assets.
Experimental design — a formal plan that details the specifics for conducting an experiment, such as which responses, factors, levels, blocks, treatments, and tools are to be used.
Explicit knowledge — the captured and recorded tools of the day, for example, procedures, processes, standards, and other such documents.
Exponential distribution — a probability distribution mathematically described by an exponential function; used to describe the probability that a product survives a length of time t in service, under the assumption that the probability of a product failing in any small time interval is independent of time; a continuous distribution where data are more likely to occur below the average than above it; typically used to describe the constant-failure-rate (useful life) portion of the "bathtub" curve (see the worked example following this block).
External customer — a person or organization who receives a product, service, or information but is not part of the organization supplying it (see also Internal customer).
External procurement sources — extra-firm sources, including industry contacts, market data, competitive intelligence, and regulatory information, that could aid procurement decision-making.
F distribution — the distribution of F, the ratio of variances for pairs of samples; used to determine whether or not the populations from which two samples were taken have the same standard deviation. The F distribution is usually expressed as a table of the upper limit below which F can be expected to lie with some confidence level for samples of a specified number of degrees of freedom.
F test — a test of whether two samples are drawn from populations with the same standard deviation, with some specified confidence level. The test is performed by determining whether F, as defined above, falls below the upper limit given by the F distribution table.
Facilitator — an individual who is responsible for creating favorable conditions that will enable a team to reach its purpose or achieve its goals by bringing together the necessary tools, information, and resources to get the job done.
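The exponential distribution entry above can be illustrated numerically. Under its constant-failure-rate assumption, the probability that a unit survives past time t is exp(-t/MTBF); the figures below are hypothetical:

    import math

    mtbf = 5000.0   # mean time between failures, hours (hypothetical)
    t = 1000.0      # service time of interest, hours

    reliability = math.exp(-t / mtbf)            # P(unit survives past t)
    print(f"R({t:.0f} h) = {reliability:.3f}")   # about 0.819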
Facilities/product life cycle — a life cycle that encompasses phases of operation and disposal, as well as, and following, the project life cycle.
Factor analysis — a statistical technique that examines the interrelationships among a set of variables in order to identify a smaller number of underlying factors that explain them. For example, it is used to determine which questions on a questionnaire measure the same underlying attitude, such as the willingness to buy the product again.
Failure mode analysis (FMA) — a procedure used to determine which malfunction symptoms appear immediately before or after a failure of a critical parameter in a system. After all the possible causes are listed for each symptom, the product is designed to eliminate the problems.
Failure mode effects analysis (FMEA) — a procedure in which each potential failure mode in every subitem of an item is analyzed to determine its effect on other subitems and on the required function of the item (a common scoring sketch follows this block).
Failure mode effects and criticality analysis (FMECA) — a procedure that is performed after a failure mode effects analysis to classify each potential failure effect according to its severity and probability of occurrence.
Fast track — the starting or implementation of a project by overlapping activities, commonly entailing the overlapping of design and construction (manufacturing) activities.
Failure rate — the average number of failures per unit time; used for assessing the reliability of a product in service.
Fault tree analysis — a technique for evaluating the possible causes that may lead to the failure of a product. For each possible failure, the possible causes of the failure are determined; then, the situations leading to those causes are determined; and so forth, until all paths leading to possible failures have been traced. The result is a flow chart for the failure process. Plans to deal with each path can then be made.
Feasibility — the assessment of capability of being completed; the possibility, probability, and suitability of accomplishment.
Feasibility studies — the methods and techniques used to examine technical and cost data to determine the economic potential and the practicality of project applications. It involves the use of techniques such as the time value of money so that projects may be evaluated and compared on an equivalent basis. Interest rates, present worth factors, capitalization costs, operating costs, depreciation, etc., are all considered.
Feasible project alternatives — reviews of available alternate procurement actions that could attain the objectives.
Feedback (general) — information (data) extracted from a process or situation and used in controlling (directly) or in planning or modifying immediate or future inputs (actions or decisions) into a process or situation.
Feedback (process) — using the results of a process to control it. The feedback principle has wide application. An example would be using control charts to keep production personnel informed on the results of a process. This allows them to make suitable adjustments to the process. Some form of feedback on the results of a process is essential in order to keep the process under control.
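The FMEA entry above does not prescribe a scoring scheme; one common practice (an assumption here, not part of the definition) is to rank failure modes by a risk priority number, RPN = severity x occurrence x detection, each rated on a 1-to-10 scale. A minimal sketch with hypothetical ratings:

    # Hypothetical failure modes: (name, severity, occurrence, detection)
    failure_modes = [
        ("seal leak", 8, 3, 4),
        ("connector corrosion", 6, 5, 2),
        ("sensor drift", 4, 4, 7),
    ]

    # Rank by RPN = severity x occurrence x detection.
    ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
    for name, sev, occ, det in ranked:
        print(f"{name}: RPN = {sev * occ * det}")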
Feedback (teams) — the return of information in interpersonal communication; it may be based on fact or feeling and helps the party who is receiving the information judge how well she is being understood by the other party; more generally, information about the interaction process that is used to make decisions about its performance and to adjust the process when necessary.
Feedback loops — pertain to open-loop and closed-loop feedback.
Field cost — costs associated with a project site rather than with the home office.
Figure of merit — generic term for any of several measures of product reliability, such as MTBF, mean life, etc.
Filters — relative to human-to-human communication, those perceptions (based on culture, language, demographics, experience, etc.) that affect how a message is transmitted by the sender and how a message is interpreted by the receiver.
Final completion — when the entire work has been performed to the requirements of the contract, except for those items arising from the provisions of warranty, and is so certified.
Final payment — the final settlement, paid at contract completion, of the contractually obligated amount, including retention.
Financial closeout — accounting analysis of how funds were spent on a project. Signifies a point in time when no further charges should be made "against" the project.
Financial control — exercise of control on payments of supplier's invoices.
Financing — involves the techniques and methods related to providing the sources of monies and methods to raise funds (stock, mortgages, bonds, innovative financing agreements, leases, etc.) required for a project.
Finding — a conclusion of importance based on observation.
Fitness for use — a term used to indicate that a product or service fits a given customer's defined purpose for that product or service.
Firm fixed price (FFP) contract — a lump sum contract whereby the supplier agrees to furnish goods or services at a fixed price.
Five whys — a persistent questioning technique to probe deeper to surface the root cause of a problem.
Fixed price plus incentive fee (FPPIF) contract — provides the supplier with a fixed price for delivered performance plus a predetermined fee for superior performance.
Float — see free float.
Floating task — a task that can be performed earlier or later in the schedule without affecting the project duration.
Flow chart — for programs, decision-making, process development: a pictorial representation of a process indicating the main steps, branches, and eventual outcomes of the process. Flowcharts are drawn to better understand processes. Also called "process map." The flowchart is one of the seven tools of quality.
Focus group — a qualitative discussion group consisting of eight to ten participants invited from a segment of the customer base to discuss an existing or planned product or service, led by a facilitator working from predetermined questions (focus groups may also be used to gather information in a context other than in the presence of customers).
Force-field analysis — a technique for analyzing the forces that aid or hinder an organization in reaching an objective.
Forecast — an estimate and prediction of future conditions and events based on information and knowledge available at the time of the forecast.
Forecast final cost — the anticipated cost of a project or component when it is complete. The sum of the committed cost to date and the estimated cost to complete.
Forecasting — the work performed to estimate and predict future conditions and events. Forecasting is an activity of the management function of planning. Forecasting is often confused with budgeting, which is a definitive allocation of resources rather than a prediction or estimate.
Formal bid — bid, quotation, letter, or proposal submitted by prospective suppliers in response to the request for proposal or request for quotation.
Formal communication — the officially sanctioned data within an organization, which includes publications, memoranda, training materials and/or events, public relations information, and company meetings.
Formative quality evaluation — the process of reviewing project data at key junctures during the project life cycle for a comparative analysis against preestablished quality specifications. This evaluation process is ongoing during the life of a project to ensure that timely changes can be made as needed to protect the success of the project.
Forward pass — network calculations that determine the earliest start/finish time (date) of each activity. These calculations proceed forward from the data date through the logical flow of the network (see the sketch following this block).
Free float (FF) — the amount of time (in work units) an activity may be delayed without affecting the early start of the activity immediately following.
Frequency distribution — for a sample drawn from a statistical population, the number of times each outcome was observed.
Function (PM function) — the series of processes by which project objectives in that particular area of project management, e.g., scope, cost, time, etc., are achieved.
Function–quality integration — the process of actively ensuring that quality plans and programs are integrated, mutually consistent, necessary, and sufficient to permit the project team to achieve the defined product quality.
Functional organization — an organization organized by discrete functions, for example, marketing and sales, engineering, production, finance, human resources.
Funding (cost management) — the status of internal fund allocation, or allocation by an external agency, if applicable, to enable payment for the performance of the project scope (contract/procurement management); the status of internal or external monies available for performing the work.
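A minimal sketch of the forward-pass calculation described above, for a small hypothetical network (durations in days; the sketch assumes each activity's predecessors are listed before it):

    # activity: (duration, [predecessors]); a hypothetical four-activity network
    network = {
        "A": (3, []),
        "B": (2, ["A"]),
        "C": (4, ["A"]),
        "D": (1, ["B", "C"]),
    }

    early = {}  # activity -> (early start, early finish)
    for act, (dur, preds) in network.items():
        es = max((early[p][1] for p in preds), default=0)  # latest early finish of predecessors
        early[act] = (es, es + dur)

    for act, (es, ef) in early.items():
        print(f"{act}: ES = {es}, EF = {ef}")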
Future reality tree — a technique used in the application of Goldratt's Theory of Constraints.
Gantt chart — a type of bar chart used in process/project planning and control to display planned work and finished work in relation to time; also called a "milestone chart." See bar charts.
Gap analysis — a technique that compares a company's existing state to its desired state (as expressed by its long-term plans) to help determine what needs to be done to remove or minimize the gap.
Gatekeeping — the role of an individual (often a facilitator) in a group meeting in helping ensure effective interpersonal interactions (for example, ensuring that someone's ideas are not ignored due to the team moving on to the next topic too quickly).
General conditions — general definition of the legal relationships and responsibilities of the parties to the contract and how the contract is to be administered. They are usually standard for a corporation and/or project.
General requirements — nontechnical specifications defining the scope of work, payments, procedures, implementation constraints, etc., pertaining to the contract.
General sequencing — an overview of the order in which activities will be performed.
Goal — a statement of general intent, aim, or desire; it is the point toward which management directs its efforts and resources; goals are often nonquantitative.
Goodness of fit — any measure of how well a set of data matches a proposed distribution. Chi-square is the most common measure for frequency distributions. Simple visual inspection of a histogram is a less quantitative, but equally valid, way to determine goodness of fit.
Government regulations and requirements — those laws, regulations, rules, policies, and administrative requirements imposed upon organizations by government agencies.
Grand average — overall average of data represented on an X-bar chart at the time the control limits were calculated.
Grapevine — the informal communication channels over which information flows within an organization, usually without a known origin of the information and without any confirmation of its accuracy or completeness (sometimes referred to as the "rumor mill").
Graph (quality management) — a visual comparison of variables that yield data in numerical facts. Examples include trend graphs, histograms, control charts, frequency distributions, and scatter diagrams (time management); the display or drawing that shows the relationship between activities; pictorial representation of relative variables.
Group communication — the means by which a project manager conducts meetings, presentations, negotiations, and other activities necessary to convey a project's needs and concerns to the project team and other groups.
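To illustrate the goodness-of-fit entry above: the chi-square statistic compares observed frequencies with those expected under a proposed distribution. A minimal sketch with hypothetical counts (the tabled critical value quoted in the comment assumes 5 degrees of freedom at the 0.05 level):

    observed = [18, 22, 21, 19, 24, 16]   # hypothetical frequency counts per cell
    expected = [20.0] * 6                 # proposed model: equal frequencies

    chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    print(f"chi-square = {chi_square:.2f}")
    # Compare with the tabled critical value (11.07 for df = 5 at the 0.05 level);
    # a smaller statistic indicates an acceptable fit.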
Guideline — a document that recommends methods to be used to accomplish an objective.
Hammock — an aggregate or summary activity. All related activities are tied as one summary activity and reported at the summary level.
Hanger — a break in a network path.
Hierarchy structure — describes an organization that is organized around functional departments/product lines or around customers/customer segments and is characterized by top-down management (also referred to as a bureaucratic model or pyramid structure).
Histogram — a graphic summary of variation in a set of data (frequency distribution). The range of the variable is divided into a number of intervals of equal size (called cells), and an accumulation is made of the number of observations falling into each cell. The histogram is essentially a bar graph of the results of this accumulation, i.e., frequency distribution. The pictorial nature of the histogram lets people see patterns that are difficult to see in a simple table of numbers. The histogram is one of the seven tools of quality.
Historic records — project documentation that can be used to predict trends, analyze feasibility, and highlight problem areas/pitfalls on future similar projects.
Historical data banks — the data stored for future reference and referred to on a periodic basis to indicate trends, total costs, unit costs, technical relationships, etc. Different applications require different database information. This data can be used to assist in the development of future estimates.
Hold point — a point, defined in an appropriate document, beyond which an activity must not proceed without the approval of a designated organization or authority.
Horizontal structure — describes an organization that is organized along a process or value-added chain, eliminating hierarchy and functional boundaries (also referred to as a systems structure).
HR compensation and evaluation — the measurement of an individual's performance and financial payment provided to employees as a reward for their performance and as a motivator for future performance.
HR organization development — the use of behavioral science technology, research, and theory to change an organization's culture to meet predetermined objectives involving participation, joint decision-making, and team building.
HR performance evaluation — the formal system by which managers evaluate and rate the quality of subordinates' performance over a given period of time.
HR records management — the procedures established by the organization to manage all documentation required for the effective development and application of its work force.
Human resources management (HRM) — the function of directing and coordinating human resources throughout the life of a project by applying the art and science of behavioral and administrative knowledge to achieve
predetermined project objectives of scope, cost, time, quality, and participant satisfaction.
Hypergeometric distribution — a discrete (probability) distribution defining the probability of r occurrences in n trials of an event, when there are a total of d occurrences in a population of N (see the worked example following this block).
Impact analysis — the mathematical examination of the nature of individual risks on a project as well as potential structures of interdependent risks. It includes the quantification of their respective impact severity, probability, and sensitivity to changes in related project variables, including the project life cycle. To be complete, the analysis should also include an examination of the external "status quo" prior to project implementation as well as the project's internal intrinsic worth as a reference baseline. A determination should also be made as to whether all risks identified are within the scope of the project's risk-response planning process.
Impact interpretation — clarification of the significance of a variance with respect to overall objectives.
Imperfection — a quality characteristic's departure from its intended level or state without any association to conformance to specification requirements or to the usability of a product or service (see also defect and nonconformity).
Implementation (phase) — the third of four sequential phases in the project life cycle. Also known as execution or operation phase.
Implementation, completion of — also known as closeout phase. Completion of implementation means that the project team has (1) provided completed project activities in accordance with the project requirements and (2) completed project close out.
Imposed date (external) — a predetermined calendar date set without regard to logical considerations of the network.
Indirect project costs — all costs that do not form a part of the final project but that are nonetheless required for the orderly completion of the project and that may include, but not necessarily be limited to, field administration, direct supervision, incidental tools and equipment, startup costs, contractors' fees, insurance, taxes, etc.
Individuals outside the project — all those individuals who impact the project work but who are not considered members of the project team.
Infant mortality rate — a high failure rate that shows up early in product usage, normally caused by poor design, manufacture, or other identifiable cause.
Inflation/escalation — a factor in cost evaluation and cost comparison that must be predicted as an allowance to account for the price changes that can occur with time and over which the project manager has no control (for example: cost of living index, interest rates, other cost indices, etc.).
Informal communication — the unofficial communication that takes place in an organization as people talk freely and easily; examples include impromptu meetings and personal conversations (verbal or e-mail).
Information — data transferred into an ordered format that makes it usable and allows one to draw conclusions.
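The hypergeometric entry above can be computed directly from binomial coefficients: the probability of r occurrences in a sample of n, drawn from a population of N containing d occurrences, is C(d, r) C(N - d, n - r)/C(N, n). A minimal sketch with hypothetical lot figures:

    from math import comb

    N, d = 100, 8    # population of 100 units containing 8 defectives (hypothetical)
    n, r = 10, 1     # probability of exactly 1 defective in a sample of 10

    p = comb(d, r) * comb(N - d, n - r) / comb(N, n)
    print(f"P(r = {r}) = {p:.3f}")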
Information system — technology-based systems used to support operations, aid day-to-day decision-making, and support strategic analysis (other names often used include: management information system, decision system, information technology [IT], data processing).
Information flow (distribution list) — a list of individuals that would receive information on a given subject or project.
Information gathering — researching, organizing, recording, and comprehending pertinent information/data.
Information systems — a structured, interacting complex of persons, machines, and procedures designed to produce information that is collected from both internal and external sources for use as a basis for decision-making in specific contract/procurement activities.
Initial operation — see operation.
In-progress activity — an activity that has been started but is not completed on a given date.
Input limits — imposition of limitations to the resources through which the plan will be executed.
Input milestones — imposed target dates or target events that are to be accomplished and that control the plan with respect to time.
Input priorities — imposed priorities or sequence desired with respect to the scheduling of activities within previously imposed constraints.
Input restraints — imposed external restraints, such as dates reflecting input from others and target dates reflecting output required by others, and such items as float allocation and constraints.
In-service date — that point in time when the project is placed in a state of readiness or availability when it can be used for its specifically assigned function.
Inspection — examination or measurement of work to verify whether an item or activity conforms to a specific requirement; the measuring, examining, testing, and gauging of one or more characteristics of a product or service and comparing the results with specified requirements to determine whether conformity is achieved for each characteristic.
Integrated cost/schedule reporting — the development of reports that measure actual vs. budget, "S" curves, BCWS (budgeted cost of work scheduled), BCWP (budgeted cost of work performed), and ACWP (actual cost of work performed). See the worked example following this block.
Integrated project progress reports — documentation that measures actual (cost/schedule) vs. budget by utilizing BCWP, BCWS, and ACWP.
Intelligence — the ability to learn or understand or to deal with new or trying situations.
Intention for bid (IFB) — communications, written or oral, by prospective organizations or individuals indicating their willingness to perform the specified work. This could be a letter, statement of qualifications, or response to a request for proposal/quotation.
Interaction — mutual action or reciprocal action or influence.
Interdependence — shared dependence between two or more items.
Interest rate of return — see Profitability.
Interface activity — an activity connecting a node in one subnet with a node in another subnet, representing logical interdependence. The activity identifies points of interaction or commonality between the project activities and outside influences.
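The earned-value quantities named in the two entries above combine into the standard variance and index formulas; a minimal sketch with hypothetical values:

    bcws = 120_000.0   # budgeted cost of work scheduled (planned value), hypothetical
    bcwp = 110_000.0   # budgeted cost of work performed (earned value)
    acwp = 125_000.0   # actual cost of work performed

    cost_variance = bcwp - acwp        # negative: over budget
    schedule_variance = bcwp - bcws    # negative: behind schedule
    cpi = bcwp / acwp                  # cost performance index
    spi = bcwp / bcws                  # schedule performance index

    print(f"CV = {cost_variance:,.0f}, SV = {schedule_variance:,.0f}")
    print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}")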
Interface management — the management of communication, coordination, and responsibility across a common boundary between two organizations, phases, or physical entities that are interdependent.
Interface program — a computer program that relates status system line items to their parent activities in the project plan.
Intermediate customers — distributors, dealers, or brokers who make products and services available to the end user by repairing, repackaging, reselling, or creating finished goods from components or subassemblies.
Internal project sources — intra-firm sources and records including historical data on similar procurements, cost and performance data on various suppliers, and other data that could assist in proposed procurements.
Interpret — present in understandable terms.
Interpretation — reduction of information to appropriate and understandable terms and explanations.
Interrelationship digraph — a management and planning tool that displays the relationship between factors in a complex situation. It identifies meaningful categories from a mass of ideas and is useful when relationships are difficult to determine.
Intervention — an action taken by a leader or a facilitator to support the effective functioning of a team or work group.
Intervention intensity — the strength of an intervention by the intervening person; intensity is affected by words, voice inflection, and nonverbal behaviors.
Inventory closeout — settlement and credit of inventory if purchased from project funds.
Invitation to bid — the invitation issued to prospective suppliers to submit a bid, quotation, or proposal for the supply of goods or services.
Involuntary — contrary to or without choice; not subject to control of the will (reflex).
Internal rate of return (IRR) — a discount rate that causes net present value to equal zero (see the numerical sketch following this block).
ISO — "equal" (Greek). A prefix for a series of standards published by the International Organization for Standardization.
ISO 9000 series standards — a set of individual but related international standards and guidelines on quality management and quality assurance developed to help companies effectively document the quality system elements to be implemented to maintain an efficient quality system. The standards, initially published in 1987 and revised in 1994 and 2000, are not specific to any particular industry, product, or service. The standards were developed by the International Organization for Standardization, a specialized international agency for standardization composed of the national standards bodies of over 100 countries.
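The IRR entry above can be evaluated numerically: search for the discount rate at which NPV crosses zero. A minimal bisection sketch over hypothetical cash flows (it assumes a single root in the search interval):

    def npv(rate, cash_flows):
        """Net present value of cash flows, where cash_flows[0] occurs at time 0."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
        """Bisection search for the rate where NPV crosses zero (one root assumed in [lo, hi])."""
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if npv(mid, cash_flows) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    flows = [-10_000, 3_000, 4_000, 4_000, 3_000]  # hypothetical project cash flows
    print(f"IRR = {irr(flows):.1%}")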
ISO 14000 series — a set of standards and guidelines relevant to developing and sustaining an environmental management system.
Jidoka — Japanese method of autonomous control involving the adding of intelligent features to machines to start or stop operations as control parameters are reached, and to signal operators when necessary.
Job aid — any device, document, or other medium that can be provided to a worker to aid in correctly performing his tasks (e.g., laminated setup instruction card hanging on machine, photos of product at different stages of assembly, metric conversion table, etc.).
Job descriptions (scope management) — documentation of a project participant's job title, supervisor, job summary, responsibilities, authority, and any additional job factors (human resources management); written outlines of the skills, responsibilities, knowledge, authority, environment, and interrelationships involved in an individual's job.
Joint planning meeting — a meeting involving representatives of a key customer and the sales and service team for that account to determine how better to meet the customer's requirements and expectations.
Just-in-time training — job training coincidental with or immediately prior to its need for the job.
Kaikaku — Japanese word meaning a breakthrough improvement in eliminating waste.
Kaizen — a Japanese term that means gradual unending improvement by doing little things better and setting and achieving increasingly higher standards. The term was made famous by Masaaki Imai in his book Kaizen: The Key to Japan's Competitive Success.
Kaizen blitz/event — an intense, short-term team approach to employing the concepts and techniques of continuous improvement (for example, to reduce cycle time, increase throughput).
Kano model — a representation of the three levels of customer satisfaction defined as dissatisfaction, neutrality, and delight.
Key event schedule — a schedule comprised of key events or milestones. These events are generally critical accomplishments planned at time intervals throughout a project and used as a basis to monitor overall project performance. The format may be either network or bar chart and may contain minimal detail at a highly summarized level. This is often referred to as a milestone schedule.
Key process input variable (KPIV) — an independent material or element, with descriptive characteristics, that is either an object (going into) or a parameter of a process (step) and that has a significant (key) effect on the output of the process.
Key process output variable (KPOV) — a dependent material or element, with descriptive characteristics, that is the result of a process (step) that either is or significantly affects a customer's CTQ.
Knowledge management — involves the transformation of data into information, the acquisition or creation of knowledge, as well as the processes
and technology employed in identifying, categorizing, storing, retrieving, disseminating, and using information and knowledge for the purposes of improving decisions and plans.
Kurtosis — a measure of the shape of a distribution. If the distribution has longer tails than a normal distribution of the same standard deviation, then it is said to have positive kurtosis (leptokurtosis); if it has shorter tails, then it has negative kurtosis (platykurtosis). (See the worked example following this block.)
Labor relations — those formal activities developed by an organization to negotiate and bargain with its workforce, whether or not that workforce is unionized.
Lag — the logical relationship between the start or finish of one activity and the start or finish of another activity.
Lag relationship — the four basic types of lag relationships between the start and/or finish of a work item and the start and/or finish of another work item are:
1. Finish to Start
2. Start to Finish
3. Finish to Finish
4. Start to Start
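A minimal sketch of the kurtosis entry above, computing excess kurtosis (scaled so that a normal distribution scores zero) for a hypothetical sample; a positive result indicates longer tails (leptokurtosis):

    def excess_kurtosis(data):
        """Population-style excess kurtosis: fourth standardized moment minus 3."""
        n = len(data)
        mean = sum(data) / n
        variance = sum((x - mean) ** 2 for x in data) / n
        fourth = sum((x - mean) ** 4 for x in data) / n
        return fourth / variance ** 2 - 3

    sample = [9.9, 10.0, 10.1, 10.0, 9.8, 10.2, 10.0, 13.0]  # one long-tail value
    print(f"excess kurtosis = {excess_kurtosis(sample):.2f}")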
Language — a systematic means of communicating ideas or feelings by the use of conventionalized signs, sounds, or gestures.
Late finish (LF) — the latest time an activity may be completed without delaying the project finish date.
Late start (LS) — the latest time an activity may begin without delaying the project finish date of the network. This date is calculated as the late finish minus the duration of the activity.
Leader — an individual, recognized by others, as the person to lead an effort. One cannot be a "leader" without one or more "followers." The term is often used interchangeably with manager. A leader may or may not hold an officially designated management-type position.
Leadership (general) — an essential part of a quality improvement effort. Organization leaders must establish a vision, communicate that vision to those in the organization, and provide the tools, knowledge, and motivation necessary to accomplish the vision.
Leadership (PM) — the process by which a project manager influences the project team to behave in a manner that will facilitate project goal achievement.
Lean (agile) approach/lean (agile) thinking — ("lean" and "agile" may be used interchangeably) has a focus on reducing cycle time and waste using a number of different techniques and tools, for example, value stream mapping and identifying and eliminating "monuments" and nonvalue-added steps.
Lean manufacturing — applying the lean approach to improving manufacturing operations.
Learner-controlled instruction — (also called "self-directed learning") a learning situation in which the learner works without an instructor, at her own pace, building mastery of a task (computer-based training is a form of LCI).
Learning curve — the time it takes to achieve mastery of a task or body of knowledge. Put another way, it is a concept that recognizes the fact that productivity by workers improves as they become familiar with the sequence of activities involved in the production process (see the quantitative sketch following this block).
Learning organization — an organization that has as a policy to continue to learn and improve its products, services, processes, and outcomes; "an organization that is continually expanding its capacity to create its future" (Senge).
Legal tape — a computer tape that contains the contract base project plan as the first entry and the (resource leveled) target project plan as the second entry. Also, all approved major changes to logic, time, or resources will be added, as a separate entity, to the legal tape. No other entries will be made to this tape.
Legally concerned — those individuals who are concerned with assuring that the project complies with all aspects of the law.
Leptokurtosis — for frequency distributions: a distribution that shows a higher peak and longer "tails" than a normal distribution with the same standard deviation.
Level finish/schedule (SF) — the date when an activity is scheduled to be completed using the resource allocation process.
Level float — the difference between the level finish and the late finish date.
Level of detail — a policy expression of content of plans, schedules, and reports in accordance with the scale of the breakdown of information.
Level of effort (LOE) — support type effort (e.g., vendor liaison) that does not readily lend itself to measurement of discrete accomplishment. It is generally characterized by a uniform rate of activity over a specific period of time.
Level start schedule (SS) — the date an activity is scheduled to begin using the resource allocation process. This date is equal to or later in time than early start.
Life cycle — a product life cycle is the total time frame from product concept to the end of its intended use; a project life cycle is typically divided into the stages of concept, planning, design, implementation, evaluation, and close-out.
Life cycle costing — the concept of including all costs within the total life of a project from concept, through implementation, startup to dismantling. It is used for making decisions between alternatives and is a term used principally by the government to express the total cost of an article or system. It is also used in the private sector by the real estate industry.
Limitation of funds — the value of funds available for expenses beyond which no work could be authorized for performance during the specified period.
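The learning-curve entry above is qualitative; one common quantitative model (an assumption here, not stated in the entry) is the power-law curve, in which each doubling of cumulative output cuts unit time by a fixed percentage:

    import math

    first_unit_hours = 100.0    # time for unit 1 (hypothetical)
    learning_rate = 0.80        # an "80% curve": each doubling takes 80% as long

    b = math.log(learning_rate) / math.log(2)   # power-law exponent
    for unit in (1, 2, 4, 8):
        hours = first_unit_hours * unit ** b
        print(f"unit {unit}: {hours:.1f} h")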
Line/functional manager — those responsible for activities in one of the primary functions of the organization, such as production or marketing, with whom a project manager must relate in achieving a project's goals.
Line item — the smallest unit of product whose status is tracked in a status system.
Linearity — the extent to which a measuring instrument's response is directly proportional to the measured quantity over its operating range.
Linear regression — the mathematical fitting of a straight line to scatter-diagram data in which the correlation reflects an actual cause-and-effect relationship (see the worked example following this block).
Linear responsibility matrix — a matrix providing a three-dimensional view of project tasks, responsible person, and level of relationship.
Lists, project — the tabulations of information organized in meaningful fashion.
Listening post data — customer data and information gathered from designated "listening posts."
Logic — the interdependency of activities in a network.
Long-term goals — goals that an organization hopes to achieve in the future, usually in 3 to 5 years. They are commonly referred to as strategic goals.
Loop — a path in a network closed on itself, passing through any node more than once on any given path. The network cannot be analyzed, as it is not a logical network.
Lot — a defined quantity of product accumulated under conditions that are considered uniform for sampling purposes.
Lot formation — the process of collecting units into lots for the purpose of acceptance sampling. The lots are chosen to ensure, as much as possible, that the units have identical properties, i.e., that they were produced by the same process operating under the same conditions.
Lot tolerance percent defective (LTPD) — for acceptance sampling: expressed in percent defective units; the poorest quality in an individual lot that should be accepted; commonly associated with a small consumer's risk (see Consumer's risk).
Macro environment — consideration, interrelationship, and action of outside changes such as legal, social, economic, political, or technological that may directly or indirectly influence specific project actions.
Macro processes — broad, far-ranging processes that often cross functional boundaries.
Macro procurement environment — consideration, interrelationship, and action of outside changes such as legal, social, economic, political, or technological that may directly or indirectly influence specific procurement actions.
Maintainability — the probability that a given maintenance action for an item under given usage conditions can be performed within a stated time interval when the maintenance is performed under stated conditions using stated procedures and resources. Maintainability has two categories: serviceability, the ease of conducting scheduled inspections and servicing, and repairability, the ease of restoring service after a failure.
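A minimal least-squares sketch of the linear regression entry above, fitting y = a + b x to hypothetical scatter-diagram data:

    xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # independent variable (hypothetical)
    ys = [2.1, 2.9, 4.2, 4.8, 6.1]   # dependent variable

    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n

    # Least-squares slope and intercept.
    b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
    a = y_bar - b * x_bar

    print(f"y = {a:.2f} + {b:.2f} x")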
Manager — an individual who manages and is responsible for resources (people, material, money, time); a person officially designated with a management-type position title. A manager is granted authority from above, whereas a leader's role is derived by virtue of having followers. However, the terms manager and leader are often used interchangeably.
Management — the process of planning, organizing, executing, coordinating, monitoring, forecasting, and exercising control.
Management plan — a document that describes the overall guidelines within which a project is organized, administered, and managed to assure the timely accomplishment of project objectives.
Management styles — a project manager may adopt several different management styles, according to circumstances, in the process of leadership and team motivation. These include the following.
Authoritarian — a style in which individuals know what is expected of them; the project manager gives specific guidance as to what should be done, makes his role in the group understood, schedules work to be done, and asks group members to follow standard rules and regulations.
Combative — a style that is marked by an eagerness to fight or be disagreeable over any given situation.
Conciliatory — a friendly and agreeable style; one that attempts to assemble and unite all project parties involved to provide a compatible working team.
Disruptive — a style in which a project manager tends to break apart the unity of a group; the style of an agitator and one who causes disorder on a project.
Ethical — the style of an honest and sincere project manager who is able to motivate and press for the best and fairest solution; one who generally goes "by the books."
Facilitating — a style in which a project manager is available to answer questions and give guidance when needed; he does not interfere with day-to-day tasks but rather maintains the status quo.
Intimidating — a project manager with this style frequently reprimands employees for the sake of an image as a "tough guy," at the risk of lowering department morale.
Judicial — a style in which a project manager exercises the use of sound judgment or is characterized by applying sound judgment to most areas of the project.
Promotional — a style that encourages subordinates to realize their full potential, cultivates a team spirit, and lets subordinates know that good work will be rewarded.
Secretive — a style used by a project manager who is not open or outgoing in speech, activity, or purpose — much to the detriment of the overall project.
Management time — man-hours related to the project management team.
Managerial grid — a management theory developed by Robert Blake and Jane Mouton that maintains that a manager's management style is based on his or her mind-set toward people; it focuses on attitudes rather than behavior. The theory uses a grid to measure concern with production and concern with people.
Managerial quality administration — the managerial process of defining and monitoring policies, responsibilities, and systems necessary to retain quality standards throughout a project.
Managerial reserves — the reserve accounts for allocating and maintaining funds for contingency purposes on over- or underspending on project activities. These accounts will normally accrue from the contingency and other allowances in the project budget estimate.
Manpower planning — the process of projecting an organization's manpower needs over time, in terms of both numbers and skills, and obtaining the human resources required to match the organization's needs. See human resources management.
Market-perceived quality — the customer's opinion of an organization's products or services as compared to those of its competitors.
Master schedule — an executive summary-level schedule that identifies the major components of a project and usually also identifies the major milestones.
Matrix — a two-dimensional structure in which the horizontal and vertical intersections form cells or boxes. In each cell may be identified a block of knowledge whose interface with other blocks is determined by its position in the structure.
Matrix (statistics) — an array of data arranged in rows and columns.
Matrix organization — a two-dimensional organizational structure in which the horizontal and vertical intersections represent different staffing positions, with responsibility divided between the horizontal and vertical authorities.
Mean (of a statistical sample) (X-bar) — the arithmetic average value of some variable, given by X-bar = (x1 + x2 + … + xn)/n, where each x is the value of a measurement in a sample of n elements.
Mean (of a population) (µ) — a measure of central tendency. The true arithmetic average of all elements in a population. X-bar approximates the true value of the population mean. Also known as the average.
Means (in the Hoshin planning usage) — the step of identifying the ways by which multiyear objectives will be met, leading to the development of action plans.
Mean time between failures (MTBF) — the average time interval between successive failures of a repairable product for a defined unit of measure (e.g., operating hours, cycles, or miles); a measure of product reliability (see the worked example following this block).
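A worked example of the MTBF entry above, using hypothetical field data:

    total_operating_hours = 42_000.0   # accumulated across the fleet (hypothetical)
    failures = 14                      # repairable failures observed

    mtbf = total_operating_hours / failures
    print(f"MTBF = {mtbf:,.0f} hours")   # 3,000 hours between failures on average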
Measurement — the reference standard or sample used for the comparison of properties.
Measurement accuracy — the extent to which the average result of a repeated measurement tends toward the true value of the measured quantity. The difference between the true value and the average measured value is called the instrument bias and may be due to such things as improper zero-adjustment, nonlinear instrument response, or even improper use of an instrument.
Measurement error — the difference between the actual and measured value of a measured quantity.
Measurement precision — the extent to which a repeated measurement gives the same result. Variations may arise from the inherent capabilities of an instrument, from variations of the operator's use of the instrument, from changes in operating conditions, etc.
Median (of a statistical sample) — the middle number or center value of a set of data when all the data are arranged in an increasing sequence. For a sample of a specific variable, the median is the point (often written X-tilde) such that half the sample elements are below it and the other half are above it.
Method — the manner or way in which work is done. When formalized into a prescribed manner of performing specified work, a method becomes a procedure.
Metric — a standard of measurement.
Metrology — the science of measurement.
Micro environment — consideration of company-, project-, or client-imposed policies and procedures applicable to project actions.
Micro procurement environment — consideration of company-, project-, or client-imposed policies and procedures applicable in the procurement actions.
Milestone — a significant event in the project (key item or key event).
Milestone schedule — see Summary schedule.
Milestones for control — interim objectives, points of arrival in terms of time for purposes of progress management.
Mind mapping — a technique for creating a visual representation of a multitude of issues or concerns by forming a map of the interrelated ideas.
M.I.S. — management information systems.
M.I.S. quality requirements — the process of organizing a project's objectives, strategies, and resources for the M.I.S. data systems.
Mitigation — the act of revising a project's scope, budget, schedule, or quality, preferably without material impact on the project's objectives, in order to reduce uncertainty on the project.
Mixture — a combination of two distinct populations. On control charts, a mixture is indicated by an absence of points near the centerline.
Mode — the value that occurs most frequently in a data set.
Moment of truth (MOT) — a MOT was described by Jan Carlzon, former CEO of Scandinavian Airlines System, in the 1980s as: "Any episode where a customer comes into contact with any aspect of your company, no matter
how distant, and by this contact, has an opportunity to form an opinion about your company."
Monitoring — the capture, analysis, and reporting of actual performance compared to planned performance.
Monitoring actuals vs. budget — one of the main responsibilities of cost management is to continually measure and monitor the actual cost vs. the budget in order to identify problems, establish variance, analyze the reasons for variance, and take the necessary corrective action. Changes in the forecast final cost are constantly monitored, managed, and controlled.
Monte Carlo simulation — a computer modeling technique to predict the behavior of a system from the known random behaviors and interactions of a system's component parts. A mathematical model of the system is constructed in the computer program, and the response of the model to various operating parameters, conditions, etc. can then be investigated. The technique is useful for handling systems whose complexity prevents analytical calculation (see the sketch following this block).
Monument — a point in a process at which product must wait in a queue before being processed further; a barrier to continuous flow.
Motivating — the process of inducing an individual to work toward achieving an organization's objectives while also working to achieve personal objectives.
Muda — a Japanese term that refers to an activity that consumes resources but creates no value; seven categories are part of the Muda concept. They are: correction, processing, inventory, waiting, overproduction, internal transport, and motion.
N — population size (the number of units in a population).
n — sample size (the number of units in a sample).
Natural team — a work group having responsibility for a particular process.
NDE — nondestructive evaluation (see Nondestructive testing and evaluation).
Near-critical activity — an activity that has low total float.
Near-term activities — activities that are planned to begin, be in process, or be completed during a relatively short period of time, such as 30, 60, or 90 days.
Negotiating — the process of bargaining with individuals concerning the transfer of resources, generation of information, and the accomplishment of activities.
Network diagram — a schematic display of the sequential and logical relationship of the activities that comprise a project. Two popular drawing conventions or notations for scheduling are arrow and precedence diagramming.
Networking — the exchange of information or services among individuals, groups, or institutions.
Node — one of the defining points of a network; a junction point joined to some or all of the others by dependency lines.
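A minimal Monte Carlo sketch in the spirit of the entry above: the total duration of two sequential tasks with random (triangular) durations is simulated rather than derived analytically; all parameters are hypothetical:

    import random

    random.seed(1)  # repeatable runs

    def task_duration(low, mode, high):
        """Random duration from a triangular distribution (days)."""
        return random.triangular(low, high, mode)

    trials = 10_000
    totals = [task_duration(2, 3, 6) + task_duration(4, 5, 9) for _ in range(trials)]

    mean_total = sum(totals) / trials
    p_over_12 = sum(t > 12 for t in totals) / trials
    print(f"mean duration = {mean_total:.1f} days, P(total > 12 days) = {p_over_12:.2%}")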
Nominal — for a product whose size is of concern: the desired mean value for the particular dimension, the target value. Nonconformance — a deficiency in characteristics, documentation, or procedure that renders the quality of material/service unacceptable or indeterminate. Nonconformity — the nonfulfillment of a specified requirement (see also defect and imperfection). Nondestructive testing and evaluation (NDT) — testing and evaluation methods that do not damage or destroy a product being tested. Nonlinearity (of a measuring instrument) — the deviation of an instrument's response from linearity. Nonvalue-added — tasks or activities that can be eliminated with no deterioration in product or service functionality, performance, or quality in the eyes of the customer. Nonverbal communication — involving minimal use of the spoken language: gestures, facial expressions, and verbal fragments that communicate emotions without the use of words; sometimes known as body language. Nonwork unit — a calendar unit during which work may not be performed on an activity, such as weekends and holidays. Normal distribution — a probability distribution in the shape of a bell. This bell-shaped distribution is for continuous data and is one in which most of the data are concentrated around the average; it is equally likely that an observation will occur above or below the average. It is significant to know that in this kind of distribution the mean, the median, and the mode are the same. The normal distribution is a good approximation for a large class of situations. One example is the distribution resulting from the random additions of a large number of small variations. The Central Limit Theorem expresses this for the distribution of means of samples; the distribution of means results from the random additions of a large number of individual measurements, each of which contributes a small variation of its own. Norms — behavioral expectations, mutually agreed-upon rules of conduct, protocols to be followed, social practice. Notice to proceed — formal notification to a supplier requesting the start of the work. Net present value (NPV) — a discounted cash-flow technique for finding the present value of each future year's cash flow. Objective (time management) — a predetermined result; the end toward which effort is directed (contract/procurement management); used to define the method to follow and the service to be contracted or resource to be procured for the performance of work. Objective (general) — a quantitative statement of future expectations and an indication of when the expectations should be achieved; it flows from goals and clarifies what people must accomplish. Objective evidence — verifiable qualitative or quantitative observations, information, records, or statements of fact pertaining to the quality of an item
or service or to the existence and implementation of a quality system element. Observation — an item of objective evidence found during an audit. Operating characteristic (OC) curve — for a sampling plan, the OC curve indicates the probability of accepting a lot based on the sample size to be taken and the fraction defective in the batch. (An illustrative computation appears below.) One-to-one marketing — the concept of knowing customers' unique requirements and expectations and marketing to these (also see Customer relationship management). Open-book management — an approach to managing that exposes employees to an organization's financial information, provides instruction in business literacy, and enables employees to better understand their role, contribution, and impact on the organization. Operation — the operation of a new facility is described by a variety of terms, each depicting an event in its early operating life. These are defined below, in chronological order: 1. Initial operation — the project milestone date on which material is first introduced into the system for the purpose of producing products. 2. Normal operation — the project milestone date on which the facility has demonstrated the capability of sustained operations at design conditions and the facility is accepted by the client. Operating characteristics curve — in acceptance sampling: a curve showing the probability of accepting a lot vs. the percentage of defective units in the lot. Opportunity — any event that generates an output (product, service, or information). Optimization — the achievement of planned process results that meet the needs of the customer and supplier alike and minimize their combined costs. Order of magnitude (−25%, +75%) — this is an approximate estimate, made without detailed data, that is usually produced from cost capacity curves, scale-up or down factors that are appropriately escalated, and approximate cost capacity ratios. This type of estimate is used during the formative stages of an expenditure program for initial evaluation of a project. Other terms commonly used to identify an order of magnitude estimate are preliminary, conceptual, factored, quickie, feasibility, and SWAG. Organizational politics — the informal process by which personal friendships, loyalties, and enmities are used in an attempt to gain an advantage in influencing project decisions. Organization development — the use of behavioral science technology, research, and theory to change an organization's culture to meet predetermined objectives involving participation, joint decision-making, and team building. Organization structure — identification of participants and their hierarchical relationships. Original duration — the first estimate of work time needed to execute an activity. The most common units of time are hours, days, and weeks.
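The following minimal Python sketch illustrates the OC curve entry above for a hypothetical single-sampling plan with sample size n = 50 and acceptance number c = 2; the plan and all numbers are illustrative, not from the original text. It computes the probability of accepting a lot as a function of the lot fraction defective, using the binomial model for the number of defectives found in the sample.

    from math import comb

    def prob_accept(p, n=50, c=2):
        """Probability of accepting a lot under a single-sampling plan:
        accept if the number of defectives in a sample of n is <= c.
        Uses the binomial model; p is the lot fraction defective."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

    # A few points on the OC curve for the hypothetical plan n=50, c=2
    for p in (0.01, 0.02, 0.05, 0.10):
        print(f"fraction defective {p:.2f} -> P(accept) = {prob_accept(p):.3f}")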
Other bid considerations — an evaluation of personnel and financial resources, facilities, performance record, responsiveness to contract terms and conditions, and the general willingness to perform the work. Overall quality philosophy — the universal belief and performance throughout the company, based on established quality policies and procedures. Those policies and procedures become the basis for collecting facts about a project in an orderly way for study (statistics). Panels — groups of customers recruited by an organization to provide ad hoc feedback on performance or product development ideas. Parallel structure — an organizational module in which groups, such as quality circles or a quality council, exist in the organization in addition to and simultaneously with the line organization (also referred to as collateral structure). Parameter design (Taguchi) — the use of design of experiments for identifying the major contributors to variation. Pareto chart — a basic tool used to graphically rank causes from most significant to least significant. It utilizes a vertical bar graph in which the bar height reflects the frequency or impact of causes. Parametric cost estimating — an estimating methodology using statistical relationships between historical costs and other project variables such as system physical or performance characteristics, contractor output measures, or manpower loading, etc.; also referred to as “top down” estimating. Pareto analysis — an analysis of the frequency of occurrence of various possible concerns. This is a useful way to decide quality control priorities when more than one concern is present. The underlying “Pareto Principle” states that a very small number of concerns is usually responsible for most quality problems. Pareto diagrams (quality) — a graph, particularly popular in nontechnical projects, to prioritize the few change areas (often 20% of the total) that cause most quality deviations (often 80% of the total). Partnership/alliance — a strategy leading to a relationship with suppliers or customers aimed at reducing costs of ownership, maintenance of minimum stocks, just-in time deliveries, joint participation in design, exchange of information on materials and technologies, new production methods, quality improvement strategies, and the exploitation of market synergy. Path, network — the continuous, linear series of connected activities through a network. Payback period — the number of years it will take the results of a project or capital investment to recover the investment from net cash flows. Payment authorization — the process of allocated fund transfer to an account from which the supplier can be paid for delivered goods/services as per contractual terms. PDM — see precedence diagram method.
PDM finish-to-finish relationship — this relationship restricts the finish of one work activity until some specified duration following the finish of another work activity. PDM finish-to-start relationship — a relationship in which one work activity may start just as soon as another work activity is finished. PDM start-to-finish relationship — a relationship that restricts the finish of one work activity until some duration following the start of another work activity. PDM start-to-start relationship — this relationship restricts the start of one work activity until some specified duration following the start of some preceding work activity. PDSA cycle — plan-do-study-act cycle (a variation of PDCA). Percent complete — a ratio comparison of the completion status to the current projection of total work. Performance — the calculation of achievement used to measure and manage project quality. Performance control — control of work during contract execution. Performance evaluation — the formal system by which managers evaluate and rate the quality of subordinates’ performance over a given period of time. Performance management system — a system that supports and contributes to the creation of high-performance work and work systems by translating behavioral principles into procedures. Performance plan — a performance management tool that describes desired performance and provides a way to assess the performance objectively. Personal recognition — the public acknowledgement of an individual’s performance on a project. Personal rewards — providing an individual with psychological or monetary benefits in return for his or her performance. Personnel training — the development of specific job skills and techniques required by an individual to become more productive. Persuade — to advise, to move by argument, entreaty, or expostulation to a belief, position, or course of action. Program evaluation and review technique (PERT) — an event- and probability-based network analysis system generally used in the research and development field where, at the planning stage, activities and their durations between events are difficult to define. Typically used on large programs where the projects involve numerous organizations at widely different locations. Phase — see project phase. Plan — an intended future course of action. Plan development — stage of planning during which the plan is initially created. Plan-do-check-act cycle (PDCA) — a four-step process for quality improvement. In the first step (plan), a plan to effect improvement is developed. In the second step (do), the plan is carried out, preferably on a small scale.
In the third step (check), the effects of the plan are observed. In the last step (act), the results are studied to determine what was learned and what can be predicted. The plan-do-check-act cycle is sometimes referred to as the Shewhart cycle because Walter A. Shewhart discussed the concept in his book Statistical Method from the Viewpoint of Quality Control and as the Deming cycle because W. Edwards Deming introduced the concept in Japan. The Japanese subsequently called it the Deming cycle. Planned activity — an activity that has not started or finished prior to the data date. Planner time — manhours related to the planning function. Planning (phase) — see development (phase). Platykurtosis — for frequency distributions: a distribution which is flatter, with shorter "tails," than a normal distribution with the same standard deviation. Plug date — a date externally assigned to an activity that establishes the earliest or latest date on which an activity is allowed to start or finish. PM function — see function. Point estimate — in statistics, a single-value estimate of a population parameter. Point estimates are commonly referred to as the points at which interval estimates are centered; the interval estimates give information about how much uncertainty is associated with the estimate. Poisson distribution — a probability distribution for the number of occurrences of an event; n = number of trials, p = probability that the event occurs for a single trial, r = the number of occurrences of the event. The Poisson distribution is a good approximation of the binomial distribution for a case where p is small. (A brief comparison with the binomial distribution appears below.) A simpler way to say this is: a distribution used for discrete data, applicable when there are many opportunities for occurrence of an event but a low probability (less than 0.10) on each trial. Policy — directives issued by management for guidance and direction where uniformity of action is essential. Directives pertain to the approach, techniques, authorities, and responsibilities for carrying out a management function. Policies/procedures — see project policies. Population (statistical) — the set of all possible outcomes of a statistical determination; a group of people, objects, observations, or measurements about which one wishes to draw conclusions. The population is usually considered as an essentially infinite set from which a subset called a sample is selected to determine the characteristics of the population, i.e., if a process were to run for an infinite length of time, it would produce an infinite number of units. The outcome of measuring the length of each unit would represent a statistical universe, or population. Any subset of the units produced (say, a hundred of them collected in sequence) would represent a sample of the population. Also known as universe. Post contract evaluations — objective performance review and analysis of both parties' performance; realistic technical problems encountered and the corrective actions taken.
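As a small illustration of the Poisson entry above, this Python sketch (with illustrative numbers only, not from the original text) compares binomial and Poisson probabilities when n is large and p is small, the case in which the entry says the two distributions nearly agree.

    from math import comb, exp, factorial

    def binomial_pmf(r, n, p):
        """Probability of exactly r occurrences in n trials."""
        return comb(n, r) * p**r * (1 - p)**(n - r)

    def poisson_pmf(r, mean):
        """Poisson probability of exactly r occurrences, given the mean count."""
        return mean**r * exp(-mean) / factorial(r)

    # Many trials, small p (illustrative): mean count = n * p = 4
    n, p = 200, 0.02
    for r in range(7):
        print(r, round(binomial_pmf(r, n, p), 4), round(poisson_pmf(r, n * p), 4))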
Post processing — processing of data after they are collected, usually done by computer. Post project analysis and report — a formal analysis and documentation of a project's results including cost, schedule, and technical performance vs. the original plan. Post project evaluation — an appraisal of the costs and technical performance of a completed project and the development of new applications in project management methods to overcome problems that occurred during the project life to benefit future projects. Preaward meetings — meetings to aid ranking of prospective suppliers before final award determination and to examine their facilities or capabilities. Precedence diagram method (PDM) — a method of constructing a logic network using nodes to represent the activities and connecting them by lines that show dependencies. Precedence diagram method arrow — a graphical symbol in PDM networks used to represent the lag describing the relationship between work activities. Precision — a characteristic of measurement that addresses the consistency or repeatability of a measurement system when the identical item is measured a number of times. Predecessor activity — any activity that exists on a common path with the activity in question and occurs before the activity in question. Precision (of measurement) — the extent to which repeated measurement of a standard with a given instrument yields the same result. Prescribe — to direct specified action. To prescribe implies that action must be carried out in a specified fashion. Prevention vs. detection — a term used to contrast two types of quality activities. Prevention refers to those activities designed to prevent nonconformances in products and services. Detection refers to those activities designed to detect nonconformances already in products and services. Another term used to describe this distinction is "designing in quality vs. inspecting in quality." Preventive action — action taken to eliminate the causes of a potential nonconformity, defect, or other undesirable situation in order to prevent occurrence. Primary process — the basic steps or activities that will produce an output without the "nice-to-haves." Priorities — the imposed sequences desired with respect to the scheduling of activities within previously imposed constraints. Priorities matrix — a tool used to choose between several options that have many useful benefits but where not all of them are of equal value. Probability distribution — a relationship giving the probability of observing each possible outcome of a random event. The relationship may be given by a mathematical expression, or it may be given empirically by drawing a frequency distribution for a large enough sample.
Probability (mathematical) — the likelihood that a particular occurrence (event) has a particular outcome. In mathematical terms, the probability that outcome x occurs is expressed by the formula: P(x) = (number of trials giving outcome x/total number of trials) Note that, because of this definition, summing up the probabilities for all values of x always gives a total of 1; this is another way of saying that each trial must have exactly one outcome. Problem/need statement/goal — documentation to define a problem, to document the need to find a solution, and to document the overall aim of the sponsor. Problem resolution — the interaction between the project manager and an individual team member with the goal of finding a solution to a technical or personal problem that affects project accomplishment. Problem solving — a rational process for identifying, describing, analyzing, and resolving situations in which something has gone wrong without explanation. Procedure — a prescribed method of performing specified work. A document that answers the questions: What has to be done? Where is it to be done? When is it to be done? Who is to do it? Why must it be done? (Contrasted with a work instruction that answers: How is it to be done? With what materials and tools is it to be done?); in the absence of a work instruction, the instructions may be embedded in the procedure. Process — an activity or group of activities that takes an input, adds value to it, and provides an output to an internal or external customer; a planned and repetitive sequence of steps by which a defined product or service is delivered. In manufacturing the elements are: machine, method, material, measurement, mother nature, manpower. In nonmanufacturing, the elements are: manpower, place, policy, procedures, measurement, environment. Process (framework) — the set of activities by means of which an output is achieved; a series of actions or operations that produce a result (especially a continuous operation). Process improvement — the act of changing a process to reduce variability and cycle time and make the process more effective, efficient, and productive. Process improvement team (PIT) — a natural work group or cross-functional team whose responsibility is to achieve needed improvements in existing processes. The life span of the team is based on the completion of the team purpose and specific goals. Process management — the collection of practices used to implement and improve process effectiveness; it focuses on holding the gains achieved through process improvement and assuring process integrity. Process mapping — the flowcharting of a work process in detail, including key measurements.
Process organization — a form of departmentalization where each department specializes in one phase of the process. Process owner — the manager or leader who is responsible for ensuring that the total process is effective and efficient. Procurement addendum — a supplement to bidding documents issued prior to the receipt of bids for the purpose of clarifying, correcting, or adding to the bid documents issued previously. Procurement advertising — method of procurement where a contract results from the solicitation of competitive bids through the media. Procurement/contract negotiations — a process of communication, discussions, and agreement between parties for supply of goods/services in support of procurement objectives. Procurement cost considerations — a reckoning of a supplier’s approach, realism, and reasonableness of cost, forecast of economic factors affecting cost, and cost risks used in the cost proposal. Procurement environment — the combined internal and external forces, both isolated and in concert, that assist or restrict the attainment of an objective. These could be business- or project-related or may be due to political, economic, technological or regulatory conditions. See also Macro procurement environment, Micro procurement environment. Procurement identification — the identification of the different categories of procurement of which one or more may be required during project execution. Procurement invitation — a method of procurement where a contract results from the selected invitation of competitive bids. Procurement: other considerations — includes an evaluation of staff and financial resources, facilities, performance record, responsiveness to contract terms and conditions and a general willingness to perform the work. Procurement performance evaluation — a comprehensive review of the original specification, statement of work, scope, and contract modifications for the purpose of avoiding pitfalls in future procurements. Procurement prequalifications — the experience, past performance, capabilities, resources, and current workloads of potential sources. Procurement qualifications — see qualifications, contractor. Procurement: sole source — the only source that could fulfill the requirements of procurement. See Contract-procurement management. Procurement ranking — qualitative or quantitative determinations of prospective suppliers’ capabilities and qualifications in order to select one or more sources to provide proposed material/service. Procurement relationship with CWBS (contract work breakdown structure) — the relationship of services or items to be procured with the overall work and their interface with any other project activities. Procurement response — communications, positive or negative, from prospective suppliers in response to the request to supply material/services.
Procurement: source evaluation — overall review of capabilities and ranking of prospective suppliers either to request proposals or to enter into negotiations for the award of a contract. Procurement: sources selection — the process of selecting organizations or individuals whose resources, credibility, and performance are expected to meet the contract/procurement objectives. Procurement strategy — the relationship of specific procurement actions to the operating environment of the project. Procurement supplier valuation — assessment of suppliers' qualifications in order to identify those from whom proposals/bids are to be requested or those who are to be invited to enter negotiations for the award of a contract. Procurement technical considerations — suppliers' technical competency, understanding of the technical requirements, and capability to produce technically acceptable material or services. Generally this evaluation ranks highest among all other evaluations. Procurement/tender documents — the documents issued to prospective suppliers when inviting bids/quotations for supply of goods/services. Producer's risk — the maximum probability of saying a process or lot is unacceptable when, in fact, it is acceptable. Product/service liability — the obligation of a company to make restitution for loss related to personal injury, property damage, or other harm caused by its product or service. Productivity — the measurement of labor efficiency when compared to an established base. It is also used to measure equipment effectiveness, drawing productivity, etc. Profitability — a measure of the total income of a project compared to the total monies expended at any period of time. The techniques that are utilized are payout time, return on original investment (ROI), net present value (NPV), discounted cash flow (DCF), sensitivity, and risk analysis. Profound knowledge, system of — as defined by W. Edwards Deming, states that learning cannot be based on experience only; it requires comparisons of results to a prediction, plan, or an expression of theory. Predicting why something happens is essential to understand results and to continually improve. The four components of the system of profound knowledge are:
1. Appreciation for a system
2. Knowledge of variation
3. Theory of knowledge
4. Understanding of psychology
Program — an endeavor of considerable scope encompassing a number of projects. Program management — the management of a related series of projects executed over a broad period of time that are designed to accomplish broad goals and to which the individual projects contribute.
Progress — development to a more advanced state. Progress relates to a progression of development and therefore shows relationships between current conditions and past conditions. Progress analysis — (time management) the evaluation of calculated progress against the approved schedule and the determination of its impact; (cost management) the development of performance indices such as: 1. Cost Performance Index (CPI) = BCWP/ACWP; 2. Schedule Performance Index (SPI) = BCWP/BCWS; 3. Productivity. (An illustrative computation appears below.) Progress payments — interim payment for delivered work in accordance with contract terms generally tied to meeting specified performance milestones. Progress trend — an indication of whether the progress rate of an activity or project is increasing, decreasing, or remaining the same (steady) over a period of time. Project — any undertaking with a defined starting point and defined objectives by which completion is identified. In practice, most projects depend on finite or limited resources by which the objectives are to be accomplished. Project accounting — the process of identifying, measuring, recording, and communicating actual project cost data. Project archive tape — a computer tape that contains the contract base project plan, the target project plan, and every subsequent update of the project plan. Project brief — see project plan. Project budget — the amount and distribution of money allocated to a project. Project change — an approved change to project work content caused by a scope of work change or a special circumstance on the project (weather, strikes, etc.). See also project cost changes. Project close-out — a process that provides for acceptance of a project by the project sponsor, completion of various project records, final revision, and issue of documentation to reflect the "as-built" condition and the retention of essential project documentation. See project life cycle. Project close-out and start-up costs — the estimated extra costs (both capital and operating) that are incurred during the period from the completion of project implementation to the beginning of normal revenue earnings on operations. Project cost — the actual costs of an entire project. Project cost changes — the changes to a project and the initiating of the preparation of detail estimates to determine the impact on project costs and schedule. These changes must then be communicated clearly (both in writing and verbally) to all participants so that they know that approval/rejection of the project changes has been obtained (especially those which change the original project intent). Project cost systems — the establishment of a project cost accounting system of ledgers, asset records, liabilities, write-offs, taxes, depreciation expense, raw materials, prepaid expenses, salaries, etc.
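The performance indices in the progress analysis entry can be computed directly from their definitions. A minimal Python sketch follows; the function name and the dollar figures are hypothetical.

    def performance_indices(bcwp, acwp, bcws):
        """Earned-value indices from the glossary's formulas:
        CPI = BCWP/ACWP and SPI = BCWP/BCWS. Values above 1.0 are favorable."""
        return {"CPI": bcwp / acwp, "SPI": bcwp / bcws}

    # Hypothetical status: $90k earned, $100k actually spent, $120k scheduled
    print(performance_indices(bcwp=90_000, acwp=100_000, bcws=120_000))
    # -> {'CPI': 0.9, 'SPI': 0.75}: over cost and behind schedule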
Project data gaps — identification of data gaps in available information in reference to a particular procurement. Project data review — review of qualification data to determine their adequacy. Project data verification — verification of qualification data to check their accuracy. Project duration — the elapsed time from project start date through project finish date. Project environment — see environment. Project finish date/schedule — the latest schedule calendar finish date of all activities on the project derived from network or resource allocation process calculations. Project goods — equipment and materials needed to implement a project. Project information sources — identification and listing of various available sources, internal as well as external, to provide relevant information on specific procurements. Project integration — the bringing together of diverse organizations, groups, or parts to form a cohesive whole to successfully achieve project objectives. Project investment cost — the activity of establishing and assembling all the cost elements (capital and operating) of a project as defined by an agreed scope of work. The estimate attempts to predict the final financial outcome of a future investment program even though all the parameters of the project are not yet fully defined. Project life cycle — the four sequential phases in time through which any project passes, namely, concept, development, execution (implementation or operation), and finishing (termination or closeout). Note that these phases may be broken down into further stages, depending on the area of project application. Sometimes these phases are known as: concept, planning, design, implementation, and evaluation. Project management (PM) — the art of directing and coordinating human and material resources throughout the life of a project by using modern management techniques to achieve predetermined objectives of scope, cost, time, quality, and participant satisfaction. Project manager — the individual appointed with responsibility for project management of the project. Project manual — see project policies/procedures. Project objectives — project scope expressed in terms of outputs, required resources, and timing. Project organization — the orderly structuring of project participants. Project personnel — those members of a project team employed directly by the organization responsible for the project. Project phase — the division of a project time frame (or project life cycle) into the largest logical collection of related activities. Project plan (in PM) — a management summary document that gives the essentials of a project in terms of its objectives, justification, and how the objectives are to be achieved. It should describe how all the major activities
under each project management function are to be accomplished including that of overall project control. The project plan will evolve through successive stages of the project life cycle. Prior to project implementation, for example, it may be referred to as a project brief. See also baseline and baseline concept. Project plan (general) — all the documents that comprise the details of why a project is to be initiated, what the project is to accomplish, when and where it is to be implemented, who will have responsibility, how the implementation will be carried out, how much it will cost, what resources are required, and how the project's progress and results will be measured. Project planning — the identification of project objectives and the ordered activity necessary for project completion; the identification of resource types and quantities required to carry out each activity or task. Project policies — general guidelines/formalized methodologies on how a project will be managed. Project preselection meetings — meetings held to supplement and verify qualifications, data, and specifications. Project procedures — the methods, practices, and policies (both written and verbal communications) that will be used during a project's life. Project procurement strategy — the relationship of specific procurement actions to the operating environment of a project. Project reporting — a planning activity involved with the development and issuance of (internal) time management analysis reports and (external) progress reports. Project risk — the cumulative effect of the chances of uncertain occurrences that will adversely affect project objectives. It is the degree of exposure to negative events and their probable consequences. Project risk is characterized by three factors: risk event, risk probability, and the amount at stake. Project risk analysis — analysis of the consequences and probabilities that certain undesirable events will occur and their impact on achieving contract/procurement objectives. Project risk characterization — identifying the potential external or internal risks associated with procurement actions using estimates of probability of occurrence. Project segments — project subdivisions expressed as manageable components. Project services — expertise and/or labor needed to implement a project not available directly from a project manager's organization. Project stage — a subset of project phase. Project start date/schedule — the earliest calendar start date among all activities in a network. Project team (framework) — the central management group of the project; the group of people that shares responsibility for the accomplishment of project goals and whose members report either part-time or full-time to the project manager.
Proposal project plan — usually the first plan issued on a project and accompanies the proposal. It contains key analysis, procurement, and implementation milestones; historical data; and any client-supplied information. Usually presented in bar chart form or summary-level network and is used for inquiry and contract negotiations. Prospectus — the assembly of the evaluation and profitability studies and all the pertinent technical data in an overall report for presentation and acceptance by the owner and funders of a project. Psychographic customer characteristics — variables among buyers in the consumer market that address lifestyle issues and include consumer interests, activities, and opinions. Pull system — see Kanban. Public, the (project external) — all those that are not directly involved in the project but who have an interest in its outcome. This could include, for example, environmental protection groups, Equal Employment Opportunity groups, and others with a real or imagined interest in the project or the way it is managed. Public, the (project internal) — all personnel working directly or indirectly on a project. Public relations — an activity designed to improve the environment in which an organization operates in order to improve the performance of that organization. Punch list — a list made near the completion of a project showing the items of work remaining in order to complete the project scope. Purchase — outright acquisition of items, mostly off-the-shelf or catalog, manufactured outside the purchaser's premises. Qualifications: contractor — a review of the experience, past performance, capabilities, resources, and current workloads of potential service resources. Quality — a subjective term for which each person has his or her own definition. In technical usage, quality can have two meanings: (1) the characteristics of a product or service that bear on its ability to satisfy stated or implied needs and (2) a product or service free of deficiencies. Quality adviser — the person (facilitator) who helps team members work together in quality processes and is a consultant to the team. The adviser is concerned about the process and how decisions are made rather than about which decisions are made. In the Six Sigma initiative, this person is also called champion. Quality assessment — the process of identifying business practices, attitudes, and activities that are enhancing or inhibiting the achievement of quality improvement in an organization. Quality assurance/quality control (QA/QC) — two terms that have many interpretations because of the multiple definitions for the words assurance and control. For example, assurance can mean the act of giving confidence, the state of being certain, or the act of making certain; control can mean an evaluation to indicate needed corrective responses, the act of guiding, or the state of a process in which the variability is attributable to a constant
system of chance causes. (For a detailed discussion on the multiple definitions, see ANSI/ISO/ASQC A3534-2, Statistics-Vocabulary and Symbols-Statistical Quality Control.) One definition of quality assurance is: all the planned and systematic activities implemented within the quality system that can be demonstrated to provide confidence that a product or service will fulfill requirements for quality. One definition for quality control is: the operational techniques and activities used to fulfill requirements for quality. Often, however, quality assurance and quality control are used interchangeably, referring to the actions performed to ensure the quality of a product, service, or process. The focus of assurance is planning and that of control is appraising. Quality assurance — (contract/procurement management) planned and systematic actions necessary to provide adequate confidence that the performed service or supplied goods will serve satisfactorily for its intended and specified purpose; (managerial) the development of a comprehensive program that includes the processes of identifying objectives and strategy, of client interfacing, and of organizing and coordinating planned and systematic controls for maintaining established standards. This in turn involves measuring and evaluating performance to these standards, reporting results, and taking appropriate action to deal with deviations. Quality control (technical) — the planned process of identifying established system requirements and exercising influence through the collection of specific (usually highly technical and itself standardized) data. The basis for decision on any necessary corrective action is provided by analyzing the data and reporting it comparatively to system standards. Quality evaluation methods — the technical process of gathering measured variables or counted data for decision-making in quality process review. Normally these evaluation methods should operate in a holistic context involving proven statistical analysis, referred to previously as statistical process control. A few example methods are: graphs and charts; Pareto diagrams; and exception reporting. Quality loop — conceptual model of interacting activities that influence quality at the various stages ranging from the identification of needs to the assessment of whether those needs are satisfied. Quality loss function — a parabolic approximation (Taylor series) of the quality loss that occurs when a quality characteristic deviates from its target value. The quality loss function is expressed in monetary units: the cost of deviating from the target increases as a quadratic function the further the quality characteristic moves from the target. The formula used to compute the quality loss function depends on the type of quality characteristic being used. The quality loss function was first introduced in this form by Genichi Taguchi. (An illustrative sketch appears a few entries below.) Quality characteristics — the unique characteristics of products and of services by which customers evaluate their perception of quality. Quality council — (sometimes called "quality steering committee") the group driving the quality improvement effort and usually having oversight
responsibility for the implementation and maintenance of the quality management system; operates in parallel with the normal operation of the business. Quality engineering — the analysis of a manufacturing system at all stages to maximize the quality of the process itself and the products it produces. Quality function — the entire collection of activities through which an organization achieves fitness for use, no matter where these activities are performed. Quality improvement — actions taken throughout an organization to increase the effectiveness and efficiency of activities and processes in order to provide added benefits to both the organization and its customers. Quality level agreement (QLA) — an agreement by which internal service/product providers assist their internal customers in clearly delineating the level of service/product required, in quantitative, measurable terms. A QLA may contain specifications for accuracy, timeliness, quality/usability, product life, service availability, responsiveness to needs, etc. Quality management — quality itself is the composite of material attributes (including performance features and characteristics) of the product, process, or service that are required to satisfy the need for which the project is launched. Quality policies, plans, procedures, specifications, and requirements are attained through the subfunctions of quality assurance (managerial) and quality control (technical). Therefore, QM is viewed as the umbrella for all activities of the overall management function that determine the quality policy, objectives, and responsibilities and implement them by means such as quality planning, quality control, quality assurance, and quality improvement within the quality system. Quality metrics — numerical measurements that give an organization the ability to set goals and evaluate actual performance vs. plan. Quality plan — the document setting out the specific quality practices, resources, and sequence of activities relevant to a particular product, project, or contract; also known as control plan. Quality planning — the activity of establishing quality objectives and quality requirements. Quality process review — the technical process of using data to decide how the actual project results compare with the quality specifications/requirements. If deviations occur, this analysis may cause changes in the project design, development, use, etc., depending on the decisions of the client, involved stakeholders, and project team. Quality system — the organizational structure, procedures, processes, and resources needed to implement quality management. Quality trilogy — a three-pronged approach to managing for quality. The three legs are quality planning (developing the products and processes required to meet customer needs), quality control (meeting product and process goals), and quality improvement (achieving unprecedented levels of performance); attributed to Joseph M. Juran. Questionnaires — see Surveys.
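As a sketch of the quality loss function entry above: for the nominal-is-best case the loss is commonly written L(y) = k(y − T)², where T is the target and k is a cost constant. The Python below assumes that form and uses illustrative numbers only; as the entry notes, other characteristic types use different forms.

    def taguchi_loss(y, target, k):
        """Nominal-is-best quality loss: L(y) = k * (y - target)**2.
        k is a cost constant, often set from the loss at the tolerance limit:
        k = (cost at limit) / (tolerance)**2."""
        return k * (y - target) ** 2

    # Hypothetical: $50 loss at a tolerance of +/-0.2 mm -> k = 50 / 0.2**2
    k = 50 / 0.2**2
    for y in (10.0, 10.1, 10.2):           # target = 10.0 mm
        print(f"y = {y:.1f} mm -> loss = ${taguchi_loss(y, 10.0, k):.2f}")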
Queue processing — processing in batches (contrast with continuous flow processing). Queue time — wait time of product awaiting next step in process. Random — varying with no discernible pattern. Random number generator — a device or routine used to select a stated quantity of random numbers from a table of random numbers; the resulting selection is then used to pull specific items or records corresponding to the selected numbers to comprise a "random sample." Random sample — a sample of size n selected so that each part in the lot or batch has an equal probability of being selected. Random sampling — a sampling method in which every element in a population has an equal chance of being included. Range — measure of dispersion, that is, the difference between the highest and lowest of a group of values. Ratio analysis — the process of relating isolated business numbers, such as sales, margins, expenses, debt, and profits, to make them meaningful. Rational subgroup — a subgroup that is expected to be as free as possible from assignable causes (usually consecutive items). In control charting: a subgroup of units selected to minimize the differences due to assignable causes. Usually samples taken consecutively from a process operating under the same conditions will meet this requirement. Real time — the application of external time constraints that might affect the calendar time position of execution of each activity in a schedule. Recommend — to offer or suggest for use. Recommendation describes the presentation of plans, ideas, or things to others for adoption. To recommend is to offer something with the option of refusal. Record retention — the necessity to retain records for reference for a specified period after contract close-out, in case they are needed. Records management — the procedures established by an organization to manage all documentation required for the effective development and application of its work force. Recovery schedule — a special schedule showing special efforts to recover time lost (compare master schedule). Recruitment, selection, and job placement — attracting a pool of potential employees, determining which of those employees is best suited for work on the project, and matching that employee to the most appropriate task based on his or her skills and abilities. Refinement — the reworking, redefinition, or modification of the logic or data that may have been previously developed in the planning process as required to properly input milestones, restraints, and priorities. Regression analysis — a study used to understand the relationship between two or more variables; in other words, a technique for determining the mathematical relation between a measured quantity and the variables it depends on. The relationship can be determined and expressed as a mathematical equation. For example, the method might be used to determine the mathematical form of the probability distribution from which a sample was
drawn, by determining which form best "fits" the frequency distribution of the sample. The frequency distribution is the "measured quantity," and the probability distribution is a "mathematical relation." (An illustrative fitting sketch appears below.) Regulatory personnel — those individuals working for government regulatory agencies whose task it is to assure compliance with their particular agency's requirements. Reliability — in measurement system analysis, refers to the ability of an instrument to produce the same results over repeated administrations, i.e., to measure consistently. In reliability engineering, it is the probability of a product performing its intended function under stated conditions for a given period of time (see also: mean time between failures). Remaining available resource — the difference between the resource availability pool and the level schedule resource requirements. Computed from the resource allocation process. Remaining duration — the estimated work units needed to complete an activity as of the data date. Remaining float (RF) — the difference between the early finish and the late finish date. Remedy — something that eliminates or counteracts a problem cause; a solution. Repair — action taken on a nonconforming product so that it will fulfill the intended usage requirements, although it may not conform to the originally specified requirements. Repeatability and reproducibility (R & R) — a measurement-validation process to determine how much variation exists in the measurement system (including the variation in product, the gauge used to measure, and the individuals using the gauge). Repeatability (of a measurement) — the extent to which repeated measurements of a particular object with a particular instrument produce the same value. Reporting — planning activity involved with the development and issuance of (internal) time management analysis reports and (external) progress reports. Reproducibility — the variation between individual people taking the same measurement and using the same gauging. Request for proposal — a formal invitation containing a scope of work that seeks a formal response (proposal) describing both methodology and compensation to form the basis of a contract. Request for quotation — a formal invitation to submit a price for goods or services as specified. Reschedule — the process of changing the logic, duration, or dates of an existing schedule in response to externally imposed conditions. Resistance to change — unwillingness to change beliefs, habits, and ways of doing things. Resolution (of a measurement) — the smallest unit of measure that an instrument is capable of indicating.
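To illustrate the regression analysis entry in its simplest form, the following Python sketch fits a straight line y ≈ a + bx by ordinary least squares. The data and variable names are hypothetical, and a real study would also assess the fit and examine residuals.

    def least_squares(xs, ys):
        """Simple linear regression y ~ a + b*x by ordinary least squares."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sxx = sum((x - mean_x) ** 2 for x in xs)
        b = sxy / sxx              # slope
        a = mean_y - b * mean_x    # intercept
        return a, b

    # Hypothetical data: oven temperature vs. measured part hardness
    temps = [150, 160, 170, 180, 190]
    hardness = [48.1, 50.3, 51.9, 54.2, 55.8]
    a, b = least_squares(temps, hardness)
    print(f"hardness ~ {a:.2f} + {b:.3f} * temperature")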
Resource — any factors, except time, required or consumed to accomplish an activity. Any substantive requirement of an activity that can be quantified and defined, e.g., manpower, equipment, material, etc. Resource allocation process — the scheduling of activities in a network with the knowledge of certain resource constraints and requirements. This process adjusts activity-level start and finish dates to conform to resource availability and use. Resource availability date — the calendar date when a resource pool becomes available for a given resource. Resource availability pool — the amount of resource availability for any given allocation period. Resource code — the code used to identify a given resource type. Resource description — the actual name or identification associated with a resource code. Resource identification — identification of potential sources that could provide the specified material or services. These sources could be identified either from the firm/project list of vendors or by advertising the need of procurement. Resource-limited planning — the planning of activities so that predetermined resource availability pools are not exceeded. Activities are started as soon as resources are available (subject to logic constraints), as required by the activity. Response planning — the process of formulating suitable risk management strategies for a project including the allocation of responsibility to the project’s various functional areas. It may involve mitigation, deflection, and contingency planning. It should also make some allowance, however tentative, for completely unforeseen occurrences. Resource plots — a display of the amount of resources required as a function of time on a graph. Individual, summary, incremental, and cumulative resource curve levels can be shown. Resource requirements matrix — a tool to relate the resources required to the project tasks requiring them (used to indicate types of individuals needed, material needed, subcontractors, etc.). Response surface methodology (RSM) — a method of determining the optimum operating conditions and parameters of a process by varying the process parameters and observing the results on the product. This is the same methodology used in evolutionary operations (EVOP), but it is used in process development rather than actual production, so that strict adherence to product tolerances need not be maintained. An important aspect of RSM is to consider the relationships among parameters and the possibility of simultaneously varying two or more parameters to optimize the process. Response system — the ongoing process put in place during the life of the project to monitor, review, and update project risk and make the necessary adjustments. Examination of the various risks will show that some risks are greater in some stages of the project life cycle than in others.
Responsibility — charged personally with the duties, assignments, and accountability for results associated with a designated position in the organization. Responsibility can be delegated, but it cannot be shared. Responsibility charting — the activity of clearly identifying personnel and staff responsibilities for each task within a project. Restraint — an externally imposed factor affecting when an activity can be scheduled. The external factor may be labor, cost, equipment, or other such resource. Review — to examine critically to determine suitability or accuracy. Risk assessment — review, examination, and judgment about whether or not the identified risks are acceptable in the proposed actions. Risk data applications — the development of a database of risk factors both for the current project and as a matter of historic record. Risk deflection — the act of transferring all or part of a risk to another party, usually by some form of contract. Risk event — the precise description of what might happen to the detriment of a project. Risk factor — any one of risk event, risk probability, or amount at stake, as defined above. Risk identification — the process of systematically identifying all possible risk events that may impact a project. The risk events may be conveniently classified according to their cause or source and ranked roughly according to the ability to manage effective responses to them. Not all risk events will impact all projects, but the cumulative effect of several risk events occurring in conjunction with each other may well be more severe than the examination of the individual risk events would suggest. Risk management — the art and science of identifying, analyzing, and responding to risk factors throughout the life of a project and in the best interests of its objectives. Risk mitigation — the act of revising a project's scope, budget, schedule, or quality, preferably without material impact on the project's objectives, in order to reduce uncertainty on the project. Risk probability — the degree to which the risk event is likely to occur. Risk response planning — the process of formulating suitable risk management strategies for a project, including the allocation of responsibility to the project's various functional areas. It may involve risk mitigation, risk deflection, and contingency planning. It should also make some allowance, however tentative, for completely unforeseen occurrences. Risk response system — the ongoing process put in place during the life of the project to monitor, review, and update project risk and make the necessary adjustments. Examination of the various risks will show that some risks are greater in some stages of the project life cycle than in others. S — symbol used to represent the standard deviation of a sample. σ hat (σ̂) — symbol used to represent the estimated standard deviation, given by the formula σ̂ = R̄/d₂, where R̄ is the average subgroup range and d₂ is a constant that depends on subgroup size.
The estimated standard deviation may be used only if the data are normally distributed and the process is in control. (An illustrative computation appears below.) Salary administration — the formal system by which an organization manages its financial commitments to its employees. It includes manhour accounting and the development of a logical structure for compensation. Sales leveling — a strategy of establishing a long-term relationship with customers to lead to contracts for fixed amounts and scheduled deliveries in order to smooth the flow and eliminate surges. Sample — a finite number of items of a similar type taken from a population for the purpose of examination to determine whether all members of the population would conform to quality requirements or specifications. Sample size — the number of units in a sample chosen from a population. Sampling — the process of drawing conclusions about a population based on a part of the population. Sample — (statistics) a representative group selected from a population. The sample is used to determine the properties of the population. Sample size — the number of elements, or units, in a sample. Sampling — the process of selecting a sample of a population and determining the properties of the sample. The sample is chosen in such a way that its properties are representative of the population. Sampling variation — the variation of a sample's properties from the properties of the population from which it was drawn. S curves — graphical display of accumulated costs, labor hours, or quantities plotted against time for both budgeted and actual amounts. Scatter plot — for a set of measurements of two variables on each unit of a group: a plot on which each unit is represented as a dot at the x,y position corresponding to the measured values for the unit. The scatter plot is a useful tool for investigating the relationship between the two variables. Scatter diagram — a graphical technique to analyze the relationship between two variables. Two sets of data are plotted on a graph, with the y-axis used for the variable to be predicted and the x-axis used for the variable to make the prediction. The graph will show possible relationships (although two variables might appear to be related, they might not be: those who know most about the variables must make that evaluation). The scatter diagram is one of the seven tools of quality. Scenario planning — a strategic planning process that generates multiple stories about possible future conditions, allowing an organization to look at the potential impact on them and different ways they could respond. Schedule — a display of project time allocation. Schedule: pictorial display — a display in the form of a still picture, slide, or video that represents scheduling information. Schedule refinement — the reworking, redefinition, or modification of the logic or data that may have previously been developed in the planning process as required to properly input milestones, restraints, and priorities. Schedule revision — in the context of scheduling, a change in the network logic or in resources that requires redrawing part or all of the network.
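The σ̂ = R̄/d₂ estimate defined above can be computed directly from subgroup data. The Python sketch below uses the standard d₂ constants for small subgroup sizes; the measurements are hypothetical, and, as noted, the estimate is valid only for normally distributed data from an in-control process.

    # d2 constants for subgroup sizes 2-6 (standard SPC tables)
    D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326, 6: 2.534}

    def sigma_hat(subgroups):
        """Estimate the process standard deviation as R-bar / d2."""
        size = len(subgroups[0])
        r_bar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
        return r_bar / D2[size]

    # Hypothetical in-control data: five subgroups of size 4
    data = [[10.1, 10.3, 9.9, 10.2],
            [10.0, 10.4, 10.1, 9.8],
            [9.9, 10.2, 10.0, 10.3],
            [10.2, 10.1, 9.7, 10.0],
            [10.3, 9.9, 10.1, 10.2]]
    print(f"sigma-hat = {sigma_hat(data):.3f}")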
Schedule status — see scope reporting.
Schedule update — revision of a schedule to reflect the most current information on a project.
Schedule variance — any difference between the projected duration of an activity and the actual duration of the activity; also, the difference between projected start and finish dates and actual or revised start and finish dates.
Schedule work unit — a calendar time unit when work may be performed on an activity.
Scheduling — the recognition of realistic time and resource restraints that will, in some way, influence the execution of a plan.
Scientific management — aimed at finding the one best way to perform a task so as to increase productivity and efficiency.
Scope — the work content and products of a project or component of a project. Scope is fully described by naming all activities performed, the resources consumed, and the end products that result, including quality standards. A statement of scope should be introduced by a brief background to the project, or component, and the general objectives.
Scope baseline — summary description of a project's or component's original content and end product, including basic budgetary and time-constraint data.
Scope baseline approval — approval of the scope baseline by the appropriate authority (project sponsors and senior project management staff).
Scope change — a deviation from the originally agreed project scope.
Scope constraints — applicable restrictions that will affect the scope.
Scope cost — basic budgetary constraints.
Scope criteria — standards or rules composed of parameters to be considered in defining the project.
Scope interfaces — points of interaction between the project or its components and its/their respective environments.
Scope management — the function of controlling a project in terms of its goals and objectives through the processes of conceptual development, full definition or scope statement, execution, and termination.
Scope of work — a narrative description of the work to be accomplished or resource to be supplied.
Scope performance/quality — basic objective of a project. Defines the characteristics of the project's end product as required by the sponsor.
Scope reporting — a process of periodically documenting the status of basic project parameters during the course of a project. The three areas of scope reporting are:
• Cost Status — as affecting financial status.
• Schedule Status — as affecting time constraint status.
• Technical Performance Status — as affecting quality.
Scope schedule — basic time constraints.
Scope statement — a documented description of the project as to its output, approach, and content.
Screening — techniques used for reviewing, analyzing, ranking, and selecting the best alternative for the proposed action.
Secondary float (SF) — the difference between the CPM calculated early finish and the imposed finish date.
Semantics — the language used to achieve a desired effect on an audience.
Sensitivity — (of a measuring instrument) the smallest change in the measured quantity that an instrument is capable of detecting.
Service and support personnel — those individuals working in functions such as personnel, accounting, maintenance, and legislative relations that are needed to keep the "primary functions" operating effectively.
Shape — pattern or outline formed by the relative position of a large number of individual values obtained from a process.
Short term plan — a short duration schedule, usually 4 to 8 weeks, used to show in detail the activities and responsibilities for a particular period; a management technique often used "as needed" or in a critical area of a project.
Short term schedule — see short term plan.
Sigma (σ) — the standard deviation of a statistical population.
Simulation (modeling) — using a mathematical model of a system or process to predict the performance of the real system. The model consists of a set of equations or logic rules that operate on numerical values representing the operating parameters of the system. The result of the equations is a prediction of the system's output.
SIPOC — a macro-level analysis of suppliers, inputs, processes, outputs, and customers.
Skewness — a measure of a distribution's symmetry. A skewed distribution shows a longer-than-normal tail on the right or left side of the distribution.
Skill — an ability and competence learned by practice.
Special causes — causes of variation that arise because of special circumstances. They are not an inherent part of a process. Special causes are also referred to as assignable causes (also see common causes).
Specification (of a product) — a listing of the required properties of a product. The specifications may include the desired mean and/or tolerances for certain dimensions or other measurements, the color or texture of surface finish, or any other properties that define the product.
Specification (time management) — an information vehicle that provides a precise description of a specific physical item, procedure, or result for the purpose of purchase and/or implementation of the item or service (contract/procurement management); written, pictorial, or graphic information that describes, defines, or specifies services or items to be procured.
Specification control — a system for assuring that project specifications are prepared in a uniform fashion and only changed with proper authorization.
Sporadic problem — a sudden adverse change in the status quo that can be remedied by restoring the status quo. For example, actions such as changing a worn part or proper handling of an irate customer's complaint can restore the status quo.
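The simulation entry above lends itself to a short illustration. The following is a minimal Monte Carlo sketch in Python; the model (total length of a three-part assembly as the sum of the part lengths) and every mean and standard deviation in it are invented for the example.

```python
# Minimal Monte Carlo simulation sketch: predicting the output of a simple
# model (total length of a three-part assembly) from the distributions of
# its inputs. All means and standard deviations here are invented.
import random

N = 100_000  # number of simulated assemblies
results = []
for _ in range(N):
    part_a = random.gauss(25.0, 0.05)  # each part length ~ normal(mean, sd)
    part_b = random.gauss(40.0, 0.08)
    part_c = random.gauss(35.0, 0.06)
    results.append(part_a + part_b + part_c)

mean = sum(results) / N
sd = (sum((x - mean) ** 2 for x in results) / (N - 1)) ** 0.5
print(f"predicted assembly length: mean = {mean:.3f}, sd = {sd:.4f}")
# Analytically, sd should approach sqrt(0.05**2 + 0.08**2 + 0.06**2) ≈ 0.112
```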
Stabilization — the period of time between continuous operation and normal operation. This period encompasses those activities necessary to establish reliable operation at design conditions of capacity, product quality, and efficiency.
Staff personnel — those individuals working in departments that are not directly involved in an organization's mainstream activity but rather perform advising, counseling, and assisting duties for the line/functional departments.
Stage — see project stage.
Stakeholders — people, departments, and organizations that have an investment or interest in the success or actions taken by an organization.
Standard — a statement, specification, or quantity of material against which measured outputs from a process may be judged as acceptable or unacceptable; a basis for the uniformity of measuring performance. Also, a document that prescribes a specific consensus solution to a repetitive design, operating, or maintenance problem.
Standard (measurement) — a reference item providing a known value of a quantity to be measured. Standards may be primary — i.e., the standard essentially defines the unit of measure — or secondary (transfer) standards, which are compared to the primary standard (directly or by way of an intermediate transfer standard). Standards are used to calibrate instruments that are then employed to make routine measurements.
Standard deviation — a calculated measure of variability that shows how much the data are spread around the mean; a measure of the variation among the members of a statistical sample. It is denoted by the lowercase Greek letter σ for a population and by s for a sample.
Standard procedure — prescribes that a certain kind of work be done in the same way wherever it is performed.
Standard proposal schedule — a preestablished network on file.
Start-up — that period after the date of initial operation during which the unit is brought up to acceptable production capacity and quality. Start-up is the activity that is often confused (used interchangeably) with date of initial operation.
Statistic — an estimate of a population parameter using a value calculated from a random sample.
Statistical confidence — (also called "statistical significance") the level of accuracy expected of an analysis of data. Most frequently, it is expressed as either a "95% confidence level" or a "5% level of significance."
Statistical inference — the process of drawing conclusions on the basis of statistics.
Statistical thinking — a philosophy of learning and action based on three fundamental principles:
1. All work occurs in a system of interconnected processes
2. Variation exists in all processes
3. Understanding and reducing variation are vital to improvement
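To make the σ-versus-s distinction in the standard deviation entry concrete, here is a minimal Python sketch with invented data: the population formula divides by N, while the sample formula divides by n − 1.

```python
# Minimal sketch: population standard deviation (divide by N) versus
# sample standard deviation (divide by n - 1). Data values are invented.
import statistics

data = [9.8, 10.1, 10.0, 10.4, 9.9, 10.2]

sigma = statistics.pstdev(data)  # population standard deviation, sigma
s = statistics.stdev(data)       # sample standard deviation, s

print(f"population sigma = {sigma:.4f}")
print(f"sample s         = {s:.4f}")  # slightly larger than sigma for the same data
```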
Statistics — the mathematical methods used to determine the best range of probable values for a project and to assess the degree of accuracy or allowance for unpredictable future events such as accidents, technological innovations, strikes, etc. that can occur during a project's life. The techniques that can be used are risk analysis with Monte Carlo simulation, confidence levels, range analysis, etc.
Status — the condition of a project at a specified point in time.
Statusing — indicating most current project status.
Status system — system for tracking status at lowest level of detail.
Stop work order — request for interim stoppage of work due to nonconformance or funding or technical limitations.
Strategic plan — the target plan prioritized by critical total float from the current schedule.
Strategy — a framework guiding those choices that determine the nature and direction to attain an objective.
Stratification (of a sample) — if a sample is formed by combining units from several lots having different properties, the sample distribution will show a concentration or clumping about the mean value for each lot; this is called stratification. In control charting, if there are changes between subgroups due to stratification, the R-chart points will all tend to be near the centerline.
Stratified random sampling — a technique to segment (stratify) a population prior to drawing a random sample from each stratum, the purpose being to increase precision when members of different strata would, if not stratified, cause an unrealistic distortion.
Structural variation — variation caused by regular, systematic changes in output such as seasonal patterns and long-term trends.
Study — the methodical examination or analysis of a question or problem.
Subgroup — for control charts: a sample of units from a given process, all taken at or near the same time.
Subnet — the subdivision of a network into fragments, usually representing some form of subproject.
Suboptimization — the need for each business function to consider overall organizational objectives, resulting in higher efficiency and effectiveness of the entire system, although performance of a function may be suboptimal.
Substantial completion — the point in time when the work is ready for use or is being used for the purpose intended and is so certified.
Successor activity — any activity that exists on a common path with the activity in question and occurs after the activity in question.
Summary schedule — a single page, usually time-scaled, project schedule; typically included in management level progress reports. Also known as milestone schedule.
Summative quality evaluation — the process of determining what lessons have been learned after a project is completed. The objective is to document which behaviors helped determine, maintain, or increase quality standards and which did not (for use in future projects).
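The stratified random sampling entry above can be illustrated with a short sketch. The Python example below is hypothetical: it draws a proportional random sample from each of three invented strata (production shifts) rather than from the pooled population.

```python
# Minimal sketch of proportional stratified random sampling: draw from each
# stratum separately, in proportion to its size. Strata and sizes are invented.
import random

strata = {
    "shift_1": list(range(0, 500)),     # 500 units produced on shift 1
    "shift_2": list(range(500, 800)),   # 300 units on shift 2
    "shift_3": list(range(800, 1000)),  # 200 units on shift 3
}

total = sum(len(units) for units in strata.values())
sample_size = 50

sample = []
for name, units in strata.items():
    k = round(sample_size * len(units) / total)  # proportional allocation
    sample.extend(random.sample(units, k))
    print(f"{name}: {k} units sampled of {len(units)}")

print(f"total sample: {len(sample)} units")
```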
Supplementary agreement — a contract modification that is accomplished by the mutual action of the parties.
Supplementary conditions — modifications, deletions, and additions to standard general conditions developed for particular goods/services.
Supplementary information — identification and collection of additional information from supplementary sources and its review and analysis.
Supplier default — failure on the part of a supplier to meet technical or delivery requirements of a contract.
Supplier expediting — actions taken to ensure that the goods/services are supplied in accordance with the schedule documented in the contract.
Supplier ranking — qualitative or quantitative determinations of prospective suppliers' qualifications relative to the provision of the proposed goods/services.
Survey — an examination for some specific purpose; careful inspection or consideration; detailed review (survey implies the inclusion of matters not covered by agreed-upon criteria). Also, a structured series of questions designed to elicit a predetermined range of responses covering a preselected area of interest. May be administered orally by a survey taker, by paper and pencil, or by computer. Responses are tabulated and analyzed to identify significant areas for change.
SWOT analysis — an assessment of an organization's key strengths, weaknesses, opportunities, and threats. It considers factors such as the organization's industry, its competitive position, functional areas, and management.
System — a methodical assembly of actions or things forming a logical and connected scheme or unit.
Systematic variation (of a process) — variations that exhibit a predictable pattern. The pattern may be cyclic (i.e., a recurring pattern) or may progress linearly (trend).
t-distribution — the distribution of the t statistic for a sample of size n drawn from a normally distributed population whose true parameters are unknown, with sample mean X-bar and sample standard deviation s. The t-distribution is expressed as a table for a given number of degrees of freedom and a risk. As the degrees of freedom get very large, it approaches the z-distribution.
t-test — a test of the statistical hypothesis that two population means are equal. The population standard deviations are unknown but thought to be the same. The hypothesis is rejected if the t value is outside the acceptable range listed in the t-table for the given risk and degrees of freedom.
Take-off — a term used for identifying and recording from drawings the material and quantities required for estimating the time and cost for the completion of an activity.
Target date — the date an activity is desired to be started or completed; accepted as the date generated by the initial CPM schedule operation and resource allocation process.
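A short sketch can connect the t-test entry to practice. The example below uses SciPy's two-sample t-test on invented measurements from two hypothetical machines; equal_var=True matches the equal-standard-deviations assumption stated above, and SciPy is assumed to be available.

```python
# Minimal two-sample t-test sketch: are the means of two processes equal?
# The measurement data are invented for illustration.
from scipy import stats

machine_a = [10.2, 10.4, 10.1, 10.5, 10.3, 10.2, 10.4]
machine_b = [10.6, 10.7, 10.5, 10.8, 10.6, 10.7, 10.5]

# equal_var=True assumes the two population standard deviations are the same,
# as in the glossary definition above.
t_stat, p_value = stats.ttest_ind(machine_a, machine_b, equal_var=True)

alpha = 0.05  # risk level
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the hypothesis that the means are equal.")
else:
    print("Fail to reject the hypothesis that the means are equal.")
```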
Target project plan — the target plan prioritized by critical total float from the current schedule.
Target reporting — a method of reporting the current schedule against some established baseline schedule and the computation of variances between them.
Task types — characterization of tasks by resource requirement, responsibility, discipline, jurisdiction, function, etc.
Team building — the process of influencing a group of diverse individuals, each with his or her own goals, needs, and perspectives, to work together effectively for the good of a project such that their team will accomplish more than the sum of their individual efforts could otherwise achieve.
Team decision-making — the process by which the project manager and his team determine feasible alternatives in the face of a technical, psychological, or political problem and make a conscious selection of a course of action from among these available alternatives.
Team members — the individuals reporting either part-time or full-time to the project manager who are responsible for some aspect of a project's activities.
Team motivation — the process by which the project manager influences his project team to initiate effort on project tasks, expend increasing amounts of effort on those tasks, and persist in expending effort on these tasks over the period of time necessary for project goal accomplishment.
Team reward system — the process by which the project team receives recognition for its accomplishments.
Technical quality administration — the technical process of establishing a plan for monitoring and controlling a project's satisfactory completion. This plan also includes policies and procedures to prevent or correct deviations from quality specifications/requirements.
Technical quality specifications — the process of establishing specific project requirements, including execution criteria and technologies, project design, measurement specifications, and material procurement and control, that satisfy the expectations of the client, shareholders, and project team.
Technical quality support — the process of providing technical training and expertise from one or more support groups to a project in a timely manner. The efforts of these groups may also generate considerations for future client needs or warranty services.
Technical specifications — documentation that describes, defines, or specifies the goods and services to be supplied. See also specifications.
Termination (phase) — the fourth and final phase in the generic project life cycle. Also known as the final or close-out phase.
Tied activity — an activity that must start within a specified time or immediately after its predecessor's completion.
Time delay claim — a request for an extension to the contract dates.
Time-limited scheduling — the scheduling of activities so that predetermined resource availability pools are not exceeded unless further delay would cause the project finish date to be delayed. Activities can be delayed only until their late start date. However, activities will begin when the late start date is reached, even if resource limits are exceeded. Networks with negative total float should not be processed by time-limited scheduling.
Time management — the function required to maintain appropriate allocation of time to the overall conduct of a project through the successive stages of its natural life cycle (i.e., concept, development, execution, and termination) by means of time planning, time estimating, time scheduling, and schedule control.
Time periods — comparing calculated time vs. specified time in relation to constraints and time-span objectives.
Tolerance — the permissible range of variation in a particular dimension of a product. Tolerances are often set by engineering requirements to ensure that components will function together properly.
Top management — from the viewpoint of the project manager, top management includes the individual to whom he or she reports on project matters and other managers senior to that individual.
Total float (TF) — the amount of time (in work units) that an activity may be delayed from its early start without delaying the project finish date. Total float is equal to the late finish minus the early finish, or the late start minus the early start, of the activity.
Transmit — to send or convey from one person or place to another.
Trend — a gradual, systematic change with time or other variable.
Trend analyses — mathematical methods for establishing trends based on past project history, allowing for adjustment, refinement, or revision to predict cost. Regression analysis techniques can be used for predicting cost and schedule trends using data from historical projects.
Trend monitoring — a system for tracking the estimated cost, schedule, and resources of a project vs. those planned.
Trend reports — indicators of variations of project control parameters against planned objectives.
Trending — the review of proposed changes in resource allocation and the forecasting of their impact on budget. To be effective, trending should be regularly performed and the impacts on budget plotted graphically. Used in this manner, trending supports a decision to authorize a change.
Type I error — in control chart analysis: concluding that a process is unstable when, in fact, it is stable.
Type II error — in control chart analysis: concluding that a process is stable when, in fact, it is unstable.
Uncertainty — lack of knowledge of future events. See also project risk.
Uniform distribution — a type of distribution in which all outcomes are equally likely.
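Since total float appears in several scheduling entries here, a small worked sketch may be useful. The four-activity network and its durations below are invented; the Python code performs a forward and backward pass and reports TF = LS − ES (equivalently LF − EF) for each activity.

```python
# Minimal CPM sketch: forward pass, backward pass, and total float (TF = LS - ES).
# The activity network (durations and predecessor links) is invented.

durations = {"A": 3, "B": 5, "C": 2, "D": 4}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]  # topological order

# Forward pass: early start (ES) and early finish (EF)
es, ef = {}, {}
for act in order:
    es[act] = max((ef[p] for p in preds[act]), default=0)
    ef[act] = es[act] + durations[act]

# Backward pass: late finish (LF) and late start (LS)
project_finish = max(ef.values())
succs = {a: [b for b in order if a in preds[b]] for a in order}
lf, ls = {}, {}
for act in reversed(order):
    lf[act] = min((ls[s] for s in succs[act]), default=project_finish)
    ls[act] = lf[act] - durations[act]

for act in order:
    tf = ls[act] - es[act]  # equivalently lf[act] - ef[act]
    print(f"{act}: ES={es[act]} EF={ef[act]} LS={ls[act]} LF={lf[act]} TF={tf}")
```

In this invented network, activities A, B, and D carry zero total float (the critical path), while C can slip three work units without delaying the finish.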
Unit — a discrete item (lamp, invoice, etc.) that possesses one or more CTQs. (Note: "units" must be considered with regard for the specific CTQs of concern to a customer or for a specific process.)
Unit of measure — the smallest increment a measurement system can indicate. See also resolution.
Unit price (UP) contract — a fixed price contract whereby the supplier agrees to furnish goods or services at unit rates and the final price is dependent on the quantities needed to carry out the work.
Universe — see population.
UP — see unit price contract.
Update — to revise a schedule to reflect the most current information on a project.
Validation — confirmation by examination of objective evidence that specific requirements or a specified intended use is met.
Validity — the ability of a feedback instrument to measure what it was intended to measure.
Value-added — refers to tasks or activities that convert resources into products or services consistent with customer requirements. The customer can be internal or external to the organization.
Value analysis, value engineering, and value research (VA, VE, VR) — an activity devoted to optimizing cost performance; the systematic use of techniques that identify the required functions of an item, establish values for those functions, and provide the functions at the lowest overall cost without loss of performance (optimum overall cost). Value analysis assumes that a process, procedure, product, or service is of no value unless proven otherwise. It assigns a price to every step of a process and then computes the worth-to-cost ratio of that step. VE points the way to elimination and reengineering. Value research (related to value engineering) for given features of the service or product helps determine the customers' strongest "likes" and "dislikes" and those for which customers are neutral. It focuses attention on strong dislikes and enables identified "neutrals" to be considered for cost reductions.
Variability — the property of exhibiting variation, i.e., changes or differences, in particular in the product of a process.
Variable data — data resulting from the measurement of a parameter or variable, as opposed to attributes data. A dimensional value can be recorded and is only limited in value by the resolution of the measurement system. Control charts based on variables data include the average (X-bar) chart, individuals (X) chart, range (R) chart, sample standard deviation (s) chart, and CUSUM chart.
Variable sampling plan — a plan in which a sample is taken and a measurement of a specified quality characteristic is made on each unit. The measurements are summarized into a simple statistic, and the observed value is compared with an allowable value defined in the plan.
Variables — quantities that are subject to change or variability.
Variance — in statistics, the square of the standard deviation. Also, any actual or potential deviation from an intended or budgeted figure or plan: a variance can be the difference between intended and actual time, any difference between the projected duration of an activity and the actual duration of the activity, or the difference between projected start and finish dates and actual or revised start and finish dates.
Variance analysis — the analysis of the following:
1. Cost Variance = BCWP – ACWP
2. %Over/Under = (ACWP – BCWP)/BCWP × 100
3. Unit Variance Analysis
   a. Labor Rate
   b. Labor Hours/Units of Work Accomplished
   c. Material Rate
   d. Material Usage
4. Schedule/Performance Variance = BCWP – BCWS
Variance reports — documentation of project performance relative to a planned or measured performance parameter.
Variation — a change in data, a characteristic, or a function that is caused by one of four factors: special causes, common causes, tampering, or structural variation.
Verbal bid — undocumented quotation by telephone or other verbal means of communication.
Verification — the act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting whether items, processes, services, or documents conform to specified requirements.
Vital few, useful many — a term used by J. M. Juran to describe his use of the Pareto principle, which he first defined in 1950. (The principle was used much earlier in economics and inventory-control methodologies.) The principle suggests that most effects come from relatively few causes; that is, 80% of the effects come from 20% of the possible causes. The 20% of the possible causes are referred to as the "vital few," and the remaining causes are referred to as the "useful many." When Juran first defined this principle, he referred to the remaining causes as the "trivial many," but realizing that no problems are trivial in quality assurance, he changed it to "useful many."
Voice of the customer — an organization's efforts to understand the customers' needs and expectations ("voice") and to provide products and services that truly meet such needs and expectations.
Walk the talk — means not only talking about what one believes in but also being observed acting out those beliefs. Employee buy-in of the concept is more likely when management is seen as committed and involved in the process every day.
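The earned-value formulas in the variance analysis entry are straightforward to demonstrate. Here is a minimal Python sketch; the BCWS, BCWP, and ACWP figures are invented for illustration.

```python
# Minimal earned-value variance sketch using the formulas above.
# BCWS, BCWP, and ACWP figures are invented for illustration.
bcws = 100_000  # budgeted cost of work scheduled
bcwp = 90_000   # budgeted cost of work performed (earned value)
acwp = 105_000  # actual cost of work performed

cost_variance = bcwp - acwp                  # negative => over budget
schedule_variance = bcwp - bcws              # negative => behind schedule
pct_over_under = (acwp - bcwp) / bcwp * 100  # percent over (+) or under (-)

print(f"Cost Variance     = {cost_variance:+,}")
print(f"Schedule Variance = {schedule_variance:+,}")
print(f"%Over/Under       = {pct_over_under:+.1f}%")
```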
Waste — activities that consume resources but add no value; visible waste (for example, scrap, rework, downtime) and invisible waste (for example, inefficient setups, wait times of people and machines, inventory). It is customary to view waste as any variation from target.
WBS — see work breakdown structure.
Weibull distribution — a distribution of continuous data that can take on many different shapes and is used to describe a variety of patterns; used to determine when the infant-mortality period (decreasing failure rate) has ended and a steady state has been reached; relates to the "bathtub" curve.
Wisdom — the culmination of the continuum from data to information to knowledge to wisdom.
Work acceptance — work is considered accepted when it is conducted, documented, and verified as per acceptance criteria provided in the technical specifications and contract documents.
Work analysis — the analysis, classification, and study of the way work is done. Work may be categorized as value-added (necessary work) or nonvalue-added (rework, unnecessary work, idle). Collected data may be summarized on a Pareto chart, showing how people within the studied population work. The need for and value of all work are then questioned and opportunities for improvement identified. A time use analysis may also be included in the study.
Work authorization — the process of sanctioning all project work.
Work authorization/release — in cases where work is to be performed in segments due to technical or funding limitations, work authorization/release authorizes specified work to be performed during a specified period.
Work breakdown structure (WBS) — a task-oriented "family tree" of activities that organizes, defines, and graphically displays the total work to be accomplished in order to achieve the final objectives of a project. Each descending level represents an increasingly detailed definition of the project objective. It is a system for subdividing a project into manageable work packages, components, or elements to provide a common framework for scope/cost/schedule communications, allocation of responsibility, monitoring, and management.
Work group — a group composed of people from one functional area who work together on a daily basis and whose goal is to improve the processes of their function.
Work packages/control point — WBS elements of the project isolated for assignment to "work centers" for accomplishment. Production control is established at this element level.
Work plan — the "designer's" schedule plan, budget, and monitoring system utilized during the design stage.
Work unit — a calendar time unit when work may be performed on an activity.
Working calendar — the total span of calendar dates covering all project activities, from start to finish.
World-class quality — a term used to indicate a standard of excellence: best of the best.
Workload — review of planned work demand on resources over time spans vs. acceptable limits and their availability.
Yield — the ratio between salable goods produced and the quantity of raw materials or components put in at the beginning of a process.
z-distribution — the distribution of a sample of size n drawn from a normal distribution with mean µ and standard deviation σ; used to determine the area under the normal curve.
z-test — a test of the statistical hypothesis that the population mean µ is equal to the sample mean X-bar when the population standard deviation σ is known.
Zmax/3 — the greater result of the formula when calculating Cpk. It shows the distance from the tail of the distribution to the specification that shows the greatest capability.
Zmin/3 — see Cpk.
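The Z and Cpk entries above close out the glossary's capability thread, and a brief sketch ties them together. The process mean, standard deviation, and specification limits below are invented; Cpk is Zmin/3, the lesser of the two one-sided Z values divided by three.

```python
# Minimal capability sketch: Z values, Zmax/3, and Cpk = Zmin/3, per the
# glossary entries above. Process figures and spec limits are invented.
mean = 10.25
sigma = 0.1
lsl, usl = 9.9, 10.5  # lower and upper specification limits

z_upper = (usl - mean) / sigma  # distance to the upper spec, in sigmas
z_lower = (mean - lsl) / sigma  # distance to the lower spec, in sigmas

z_min, z_max = min(z_upper, z_lower), max(z_upper, z_lower)
cpk = z_min / 3                 # Zmin/3, the capability index
print(f"Z_upper = {z_upper:.2f}, Z_lower = {z_lower:.2f}")
print(f"Cpk = Zmin/3 = {cpk:.2f}; Zmax/3 = {z_max / 3:.2f}")
```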
Selected Bibliography
Agresti, A. (2000). An introduction to categorical data analysis. John Wiley & Sons. New York.
Agresti, A. (1999). Categorical data analysis. John Wiley & Sons. New York.
Ainsworth, M. and J. T. Oden. (2000). A posteriori error estimation in finite element analysis. John Wiley & Sons. New York.
Banner, J. M. and H. C. Cannon. (2000). The elements of teaching. Yale University Press. New Haven.
Banner, J. M. and H. C. Cannon. (2000). The elements of learning. Yale University Press. New Haven.
Blischke, W. R. and P. Murphy. (2000). Reliability: modeling, prediction, and optimization. John Wiley & Sons. New York.
Breyfogle, F. (1998). Implementing Six Sigma: smarter solutions using statistical methods. John Wiley & Sons. New York.
Chatterjee, S., A. S. Hadi, and B. Price. (2000). Regression analysis by example. John Wiley & Sons. New York.
Chen, Z. (2001). Data mining and uncertain reasoning. John Wiley & Sons. New York.
Chong, E. and S. Zak. (2001). An introduction to optimization. John Wiley & Sons. New York.
Congdon, P. (2001). Bayesian statistical modeling. John Wiley & Sons. New York.
Conover, W. J. (2000). Practical nonparametric statistics. John Wiley & Sons. New York.
Cook, R. D. and S. Weisberg. (2000). Applied regression including computing and graphics. John Wiley & Sons. New York.
Cressie, N. A. (2000). Statistics for spatial data. Rev. ed. John Wiley & Sons. New York.
Draman, R. H. and S. S. Chakravorty. (2000). An evaluation of quality improvement project selection alternatives. Quality Management Journal. Volume 7. Issue 1. pp. 58–73.
Draper, N. R. and H. Smith. (1999). Applied regression analysis. John Wiley & Sons. New York.
Dusharme, D. (November 2001). Six Sigma survey: breaking through the Six Sigma hype. Quality Digest. pp. 27–32.
Fletcher, R. (2000). Practical methods of optimization. 2nd ed. John Wiley & Sons. New York.
Franko, V. R. (June 2001). Adopting Six Sigma. Quality Digest. pp. 28–32.
Freund, R. J. and R. C. Littell. (2000). SAS system for regression. 3rd ed. John Wiley & Sons. New York.
Gustafsson, A., F. Ekdahl, K. Falk, and M. Johnson. (2000). Linking customer satisfaction to product design: a key to success for Volvo. Quality Management Journal. Volume 7. Issue 1. pp. 27–38.
Haimes, Y. Y. (2000). Risk modeling, assessment, and management. John Wiley & Sons. New York.
Harrington, H. J. (June 2001). Does Six Sigma implementation really yield near perfect results? Quality Digest. p. 16.
Hauser, J. R. and D. Clausing. (May 1, 1988). The house of quality. Harvard Business Review. Product 88307.
Hoaglin, D. C., F. Mosteller, and J. W. Tukey. (2000). Understanding robust and exploratory data analysis. John Wiley & Sons. New York.
Hollander, M. and D. A. Wolfe. (2000). Nonparametric statistical methods. John Wiley & Sons. New York.
Hosmer, D. and S. Lemeshow. (1998). Applied logistic regression. John Wiley & Sons. New York.
Johnson, R. A. and K. Tsui. (2000). Statistical reasoning and methods. John Wiley & Sons. New York.
Kendall, D. G., D. Barden, and T. K. Carne. (2000). Shape and shape theory. John Wiley & Sons. New York.
Kezsbom, D. S. and K. A. Edward. (2001). The new dynamic project management. John Wiley & Sons. New York.
Kirby, M. (2000). Geometric data analysis. John Wiley & Sons. New York.
Levy, P. and S. Lemeshow. (2000). Sampling of populations: methods and applications. John Wiley & Sons. New York.
Lucas, J. M. (January 2002). The essential Six Sigma. Quality Progress. pp. 27–32.
Mardia, K. and P. Jupp. (2000). Directional statistics. John Wiley & Sons. New York.
McCulloch, C. E. and S. R. Searle. (1999). Generalized, linear, and mixed models. John Wiley & Sons. New York.
McLachlan, G. and D. Peel. (2000). Finite mixture models. John Wiley & Sons. New York.
Meeker, W. (1999). Statistical methods for reliability data. John Wiley & Sons. New York.
Miller, R. E. (2000). Optimization: foundations and applications. John Wiley & Sons. New York.
Montgomery, D., E. A. Peck, and G. G. Vining. (2001). Introduction to linear regression analysis. 3rd ed. John Wiley & Sons. New York.
Montgomery, D. C. (1999). Design and analysis of experiments. 5th ed. John Wiley & Sons. New York.
Myers, R. H., D. C. Montgomery, and G. G. Vining. (2001). Generalized linear models. John Wiley & Sons. New York.
Pearson, T. A. (February 2001). Measure for Six Sigma success. Quality Progress. pp. 35–42.
Ponniah, P. (2001). Data warehousing fundamentals. John Wiley & Sons. New York.
Pourahmadi, M. (2001). Foundations of time series analysis and prediction theory. John Wiley & Sons. New York.
Qualsoft. (2000). Quality Function Deployment – software. Qualsoft, LLC. Birmingham, MI.
Rencher, A. C. (2000). Linear models in statistics. John Wiley & Sons. New York.
Rigdon, S. E. and A. P. Basu. (2000). Statistical methods for the reliability of repairable systems. John Wiley & Sons. New York.
Robinson, G. K. (2000). Practical strategies for experimenting. John Wiley & Sons. New York.
Ryan, T. P. (2000). Statistical methods for quality improvement. 2nd ed. John Wiley & Sons. New York.
Ryan, T. P. (2000). Modern regression methods. John Wiley & Sons. New York.
Saltelli, A., K. Chan, and E. M. Scott. (2000). Sensitivity analysis. John Wiley & Sons. New York.
Schimek, M. G. (2000). Smoothing and regression. John Wiley & Sons. New York.
Shepard, L. A. (October 2000). The role of assessment in a learning culture. Educational Researcher. pp. 4–14.
Thompson, J. R. (2000). Simulation: a modeler's approach. John Wiley & Sons. New York.
Treichler, D., R. Carmichael, A. Kushmanoff, J. Lewis, and G. Berthiez. (January 2002). Design for Six Sigma: 15 lessons learned. Quality Progress. pp. 33–43.
Valliant, R., A. Dorfman, and R. Royall. (2000). Finite population sampling and inference: a prediction approach. John Wiley & Sons. New York.
Wild, C. and G. F. Seber. (2000). Chance encounters: a first course in data analysis and inference. John Wiley & Sons. New York.
Wolkenhauer, O. (2001). Data engineering: fuzzy mathematics in systems theory and data analysis. John Wiley & Sons. New York.
Wu, C. F. J. and M. Hamada. (1998). Experiments: planning, analysis and parameter design optimization. John Wiley & Sons. New York.
Index for Volume VII
A
Accuracy, 249 Accuracy-related terms, 249, 293 Adaptive sequential experiments, 443–444 Analysis phase, 219–221, 242–243, 286–287 Analytical robustness, 441–442 ANOVA, one-way, 257–258 Attribute measurement system, 248, 292 Attribute R&R, 248–249, 292
B
Bias average, 249 Black belt prerequisites for, 240, 284 roles of, 239–240 Black belt execution strategy, 217–218, 239, 240–241, 283–284 Black belt training, 199 business metrics in, 200 continuous decision tools in, 208–209 continuous statistical process control tools in, 214 CTX tree definition in, 202 customer focus in, 199 data collection in, 204 defect opportunities definition in, 202 DFSS in, 213, 421–453 diagnostic tools in, 207 discrete decision tools in, 209–210 discrete statistical process control tools in, 214 dynamic statistics in, 205 experimental design tools in, 210–212 manufacturing week 1, 281–296 week 2, 296–299 week 3, 299–303 week 4, 303–321 measure scales in, 204 measurement error in, 204 outline of, 214 precontrol tools in, 213 process baseline determination in, 203 process mapping in, 203 process metrics analysis in, 206–207 risk analysis tools in, 213 robust design tools in, 212 simulation tools in, 207 Six Sigma deployment in, 203 Six Sigma fundamentals in, 200–202 Six Sigma projects in, 203 Six Sigma statistics analysis in, 206 static statistics in, 205 statistical distributions in, 204 statistical hypotheses in, 207–208 technical week 1, 237–252 week 2, 252–257 week 3, 257–263 week 4, 263–281 tolerance tools in, 212 transactional week 1, 215–226 week 2, 226–228 week 3, 228–233 week 4, 233–237 variable definition in, 202
C Calibration steps, 251 Capability, poor gauge, 250–251, 294–295 Capability analysis, 242, 248, 286, 291–292 Cascading, 132, 145, 429–430 Central limit theorem, 256, 298 Champion, roles of, 240, 284 Champion training, 117 curriculum objectives in, 118–128 DFSS, 416–421 manufacturing changes that last, 168–169 commitment mobilization, 167–168 content format, 153, 155–165 managing change, 166 need for change, 166–167 project presentations, week 1–4, 174–177 project reporting, 169–174 project review questions, 177–179 values exploration, 153–154 vision shaping, 167 technical, content format, 140–141
analysis, 152–153 basic graphing, 152 basic statistical analysis, 151–152 cause and effect understanding, 151 control, 153 customer focus, 148 customer identification, 145–146 DMAIC process, 146–148 DPM conversion to Z equivalent, 152 goals, 141–142 high-level problem statement development, 146 improvement, 153 model explanation/organizational values, 148 performance metrics, 148 process baseline determination, 149–150 process mapping, 150 Six Sigma focus, 149 Six Sigma overview, 142–145 variable definition, 148–149 transactional, content format, 128–129 analysis, 140 basic statistical concepts, 138–139 cause and effect understanding, 138 control, 140 customer focus, 135 customer identification, 132–133 DMAIC process, 133–135 DPM conversion to Z equivalent, 139 goals, 129 graphing basics, 139 high-level problem statement, 133 improvement, 140 model explanation/organizational values, 135 performance metrics reporting, 135 process baseline determination, 136–137 process mapping, 137–138 Six Sigma focus, 136 Six Sigma overview, 129–132 variable definition, 136 Change management process, 228 Characterize phase, 431–438 Compatibility, 5 Complexity, 5, 222, 245, 289 Concept generation, 435–437 Confidence intervals, 256, 298 Connections rules, 223, 245, 289 Content outline, 42 Contract training design of instruction in, 83–84 evaluation in, 88 front end analysis in, 81–82 instructional materials delivery in, 88–90
instructional materials development in, 85–88 job aid design in, 84–85 on-the-job, 90 pilot testing in, 88 post instructional evaluation of, 90–91 task analysis in, 82–83 Control chart systems, 271–273, 312–313 Control methods, 269–270, 310 Control phase, 222–226, 243, 287 Control plans, 235, 270, 277–281, 310, 318–321 Correlation, 250 and simple linear regression, 255, 297–298 Correlation limitations, 266–267, 307 Cost of poor quality, 216 CTQ identification, 435 CTSs, formulating, 431 Customer focus, 429
D Data, 224–226 Data collection, 27, 242 Data set, 247, 291 DCOV model characterize phase of, 431–438 define phase of, 429–431 optimize phase in, 438–447 verify phase of, 447–453 Decision cycle, organizational, 22 Defects, 216 Define phase, 218–219, 241, 429–431 Deliverables checklist, 446 Design of experiment, 221–222, 228–231, 243, 258–263, 300–303 definition of, 258 Design of instruction, 41 in contract training, 83–84 delivery systems in, 16–17 front end analysis in, 25–39 preparation for, 41 principles of, 14 stages/phases of, 14–20, 41–48 Design verification plan, and record, 452–453 Deterministic analysis, 441 DFSS methodology, 411–412 DFSS training, 407 for black belt, 421–453 for champions, 416–421 for executives, 408–415 levels of, 407–408 for project member, 421–453 Diffusion, 3–4 Discrimination, 250 DMAIC model, 341
E Essential task information, 36 Evaluation in contract training, 88, 90–91 formative, 21, 48 of learning outcomes, 21 post-instructional, 73–79, see also Postinstructional evaluation Evolutionary operations, 265, 305–306 Executive training, 95 DFSS, 408–415 objectives in, 95–102 one day content outline in, 102–103 Six Sigma leadership, 103–105 two day content outline in, 105–116 analysis phase, 112–113 control phase, 114–115 customer satisfaction, 106–107 customer–supplier relationship, 106 defect models, 106 defect reduction, 115–116 define phase, 110–111 improvement phase, 113–114 measure phase, 111–112 measurement, 105 organizational profitability, 106–107 process characterization, 106 roles and responsibilities, 107–109 Six Sigma breakthrough, 109–110 yield, 106 Executives, roles of, 240, 284 Experimental design, 228–231 Experimentation, analytical and physical, 441 Experiments, adaptive sequential, 443–444
541
F F-test, 257 Factorial experiments, 230–231, 302–303 Failure modes, and effects analysis, 221, 242–243 Failure probability, quantifying, 445 Failure resolution plan, 451–452 5S workplace organization, 274–277, 314–316 Formative evaluation, 21 Formative evaluation checklist, 48 Fractional factorials, 263–277, 304–311 Front end analysis, 25–31 in contract training, 81–82 formative evaluation checklist for, 32, 38 reporting results of, 31 task analysis in, 31–39 Function modeling, 432–433
G Graphical methods, 228 Green belt, roles of, 240, 284 Green belt training, 323 business metrics in, 323–324 continuous decision tools in, 332–333 CTX tree in, 326 customer focus in, 323 data collection in, 328 defect opportunities definition in, 326 diagnostic tools in, 331 discrete decision tools in, 333–334 dynamic statistics in, 329 experiment design tools in, 334–335 manufacturing, 368–391 measure scales in, 327–328 measurement error in, 328 precontrol tools in, 336 process baselines in, 327 process mapping in, 326–327 process metrics in, 330–331 simulation tools in, 331 Six Sigma deployment in, 327 Six Sigma fundamentals in, 324–326 Six Sigma projects in, 327 Six Sigma statistics in, 329–330 static statistics in, 328–329 statistical distributions in, 328 statistical hypotheses in, 331–332 statistical process monitoring in, 336–337 technical, 362–368 transactional, 337–341 DMAIC model in, 341–362 variables definition in, 326
H Hidden factory, 245, 289 Hypotheses formulating, 257 testing, 227–228, 256, 299
I Ideal function, 433–434 Improvement capability, 216–217 Improvement fundamentals, 246, 290 Improvement phase, 221–222, 243, 287 Innovations, characteristics of, 4–5 Inputs, 253–254 attributive, 296 Instruction, see also Training for black belts, 199–321 for champions, 117–179 design of, 14–20, 41–48 DFSS, 407–453 elements of, 45, 47 evaluation of, 73–79 events in, and learning skills, 12–13, see also Learning for executives, 95–116 follow-up of training and, 72–73 formative evaluation checklist for, 71 general orientation, 393–403 for green belts, 323–391 for master black belts, 181–197 materials development for, 51–73, see also Material(s), instructional on-the-job application of, 69–72 plan for, 43–44 Instructors, motivating, 19–20 Interval scale, 224–225
J Job aid design, in contract training, 84–85
K Kaizen, 275–277, 316–317 Kirkpatrick’s Hierarchy of Evaluation, 21–23 examples of, 24
L Latin hypercube sampling, 442 Lean manufacturing, 273–277, 314–316
Learning active role in, 9 adult, understanding, 10–14 capabilities of, 19–20 verbs describing, 19 conditions of, 15–20 outcomes of, 21 desirable sequence characteristics associated with, 21 types of, 20–24 preparation for, 8–9 principles of, 46–47 skills used in, and instructional events, 12–13 successful, 7–8 Limit state, 444 Linear regression, uses of, 267, 307–308 Linearity, 250
M Maintenance, 236 Management style, evolution of, 268–269, 309 Master black belt, roles of, 107, 240, 284 Master black belt training, 181 additional requirements in, 196–197 business metrics in, 181–182 continuous decision tools in, 190–191 continuous statistical process control tools in, 196 CTX tree definition in, 184 customer focus in, 181 data collection in, 186 defect opportunities definition in, 184 DFSS principles in, 195 diagnostic tools in, 189 discrete decision tools in, 191–192 discrete statistical process control tools in, 196 dynamic statistics in, 187 experimental design tools in, 192–194 measure scales in, 186 measurement error in, 186 precontrol tools in, 195 process baseline determination in, 185 process mapping in, 184–185 process metrics in, 188–189 risk analysis tools in, 195 robust design tools in, 194 simulation tools in, 189 Six Sigma deployment in, 185 Six Sigma fundamentals in, 182–184 Six Sigma projects in, 185 Six Sigma statistics analysis in, 188 static statistics in, 187 statistical distributions in, 186
543
O Observability, 5 Opportunity count, 222 Optimization planning, 437–438 Optimize, definition of, 438–439 Optimize phase, 438–447 Ordinal scale, 224 Organization, Six Sigma diffusion in, 5–6 Orientation, general, 393–403 Outputs, 253–254
P P-diagram, 433–434 Parameters vs. statistics, 256–257 Parametric CIs, 256 Participants, expectations for, 7 Payoff, organizational, 23 Per cent contribution, 444 Pilot testing, see Evaluation Plan, of instruction, 43–44 Plant experimentation, 263–265, 304–305 Poka Yoke, 277, 317 Population parameters vs. sample statistics, 247, 291 Population vs. sample, 256 Post-instructional data, collection tools, 75–76 Post-instructional evaluation, 73–79 in contract training, 90–91 formative evaluation checklist for, 79 Precision, 249 Precision-related terms, 249, 293–294 Precision-to-tolerance ratio, 242, 250, 294 Precision-to-total-variation ratio, 242, 250, 294 Precontrol charts, 272, 313 Primary metric, 219, 241 Problem-solving front end analysis, 26–31 Problems, contributing factors to, 28 Process data, 442–444 Process mapping, 224 Producibility, design for, 446 Product evaluation, 83–84 Project benefits, 219, 241
N National Assessment of Educational Progress (NAEP), 458 Noise, 440 Nominal scale, 224 Nonvalue-add rules, 223, 245, 289 Normal distribution, 247, 291 Normalizing yield, 137, 150
Q QFD matrix, 434 QFD planning tool, 430 Quality system, 267–268, 270, 308–311 Quality system overview, 235–237 Quantile–quantile relationships, 444
R Random sampling, simple, 297 Ratio scale, 225 Realistic tolerancing, 236–237 Red tag technique, 275 Regression, 266, 307 Relative advantage, 4–5 Reliability, 440, see also Robustness Reliability and robustness assessment, 444 Reliability checklist, 445–446 Repeatability, 249 controlling, 251, 295 Reproducibility, 249–250 Research design action plan, 76 Response surface methodology, 265–266, 306–307 Robustness, 440, see also Reliability analytical, 441–442 Robustness checklist, 445–446
S Sampling methods, 255 Sampling plan, 255, 297 Sanity check rule, 223, 245, 289 Scale, categories of, 224–225 Scorecard, 412 Screening experiments, 230 Self-evaluation measurement tool, 76 Shogun, see Master black belt Six Sigma, 4 for black belts, 199–321 certification, 455–457, 459–463 need for, 457–459 for champions, 117–179 characterization of, 239 deployment of, 216–217 diffusion of, in organization, 5–6 for executives, 95–116 front end analysis in, 25–31 for general orientation, 393–403 for green belts, 323–391 implementation of, 6 instruction in, 7–24 instructor’s role in, 240, 284 for master black belts, 181–197 metrics in, 216–217 task analysis in, 31–39 training in, 7–24 SKA model, 11 Software packages, 245–246, 289–290
Specifications, 81–82 Stability, 246, 250, 290 Standard error of mean, 227 Standardized work, 275–276, 316–318 Statistical distributions, 247–248, 290–291 Statistical process control, 270–271, 311–318 Statistical techniques, 225–226, 246–252 Statistical tolerancing, 442–443 Summative evaluation, 21 Supplied components rules, 223, 245, 289
T t-distribution, 298 Task(s), essential, 36 Task analysis, 31–39 in contract training, 82–83 formative evaluation checklist for, 32, 38 Test(s) analyzing, 450 conducting, 449 and failure resolution, 450–452 plan details of, 447–449 Third International Mathematics and Science Study (TIMSS), 458 Tolerancing, statistical, 442–443 Training, 69–72 black belt, 199–321 champion, 117–179 contract, 81–92 DFSS, 407–453 elements of, 45, 47 evaluation of, 73–79 events in, and learning skills, 12–13, see also Learning executive, 95–116 follow-up of, 72–73 formative evaluation checklist for, 71 green belt, 323–391 instructional materials in, 51–60, 63–73 design of, 14–20, 41–48 master black belt, 181–197 plan for, 43–44 Transfer function, 412–413 Trialability, 5 True value, 249
V
Variability, 246, 290, 440
Variable gauge R&R, 292
Verify phase, 447–453
W
Waste elimination, 273–274, 314
Y
Y-function matrix, 434
Yield
normalizing, 137, 150
true, 137, 150
Index for Volume I
10% rule, 80 1.5 sigma shift, 95–96, 100 3.4 ppm failures, 98, 100 4P&2M, 119 5/3 status system, 83 5M&P, 119 5S approach, 266–267 80/20 rule, 351 8D problem-solving model, 223
A ABB (Asea Brown Boveri), 104 Acceptable quality level (AQL), definition, 343 Acceptance sampling, definition, 343 Acceptance sampling plan, definition, 343 Accountability and gender differences, 193–194 Accountability gaps, 35 Accuracy, definition, 343 Action plan, 225 definition, 343 Active listening, 276 definition, 343 skills, 276–277 Activity-based management, definition, 343 Activity network diagram (AND), definition, 343 Act phase of TQM, 46–50 Adult learning principles, definition, 343 Affinity chart (diagram), 303 definition, 343 Aggressive behavior, 190, 191 Aid Association for Lutherans, 200 Aiming low, 78–79 A.isl, 323 Alexander the Great, 140 Alignment between organizations, 167 in organizations, 169 of team purpose and goals, 287 Alpha risk, 325–326 Analysis, 6 behaviors, 335 of details, 10 levels, 71–72 tools, 15
Analyze (DMAIC phase), 124–125, 314, 323–324 definition, 343 AND (Activity network diagram), definition, 343 ANOVA table, 327, 329 Antecedent conditions, 183 and change, 144–145 Anti-team behaviors, 226 A. O. Smith, 200 APQP, 101 AQL (Acceptable quality level), 343 Architecture of teams, 201 Armstrong,Lance, 10 “As is” flowchart, 37, 38 Assertive behavior, 190, 191 rights, 178 Attitude, 10, 89, see also Management, attitude adjustment, 236 organization, 5 Attribution, 291–292 A.usl, 323 Authority in teams, 246, 247 Automotive markets, American vs. Japanese, 137 Autonomous work teams, 66
B “Bad luck”, 46 Balanced scorecard, definition, 344 Balance of work and personal needs, 89 Baldridge awards, 50–51 Band-Aid solutions, see Short-term solutions Baseline information, 37 Baseline measures, definition, 344 Bathtub curve, definition, 344 Beginning-growth-fading out, 66 Behavior in displaying power, 190 problems in teams, 204–205 styles in teams, 269 Behavioral change, measurement, 65 Benchmarking, 58, 87, 304 definition, 344 matrix, 53–54 Benefit-cost analysis, definition, 344
Beta risk, 325–326 Bias, definition, 344 Big Q, Little q, definition, 344 Binomial distribution, 327 Binomial distributions, 322 Black belt, see also Master black belt, Shogun six sigma master candidate qualifications and training, 105–106 as coaches, 85 compensation, 93 definition, 344 place in organization, 93 in six sigma deployment, 321 training, 73–74 Blame, 90 assigning, 23, 26 Blemish, definition, 344 Block diagram, definition, 344–345 Body language factors in communication, 188–189 Boeing, 200 “Boss” approach, 294 Bottom line result focus, 100 Boundaryless organization, definition, 345 Box plots, 124 Brainstorming, definition, 345 Breakthrough, definition, 345 Bullseye model, 236–237 Bureaucratic layers, 140 Business acumen in leadership, 151 Business case definition, 352 proof of value of six sigma concept, 200 Business metrics, 317 Business processes, definition, 345
C Canon, 167 Capability, 304 Caterpillar, 200 Cause-and-effect analysis, 37, 38, 126 diagram, 39, 40 Cause-and-effect diagram, 303 definition, 345 Causes common, 45, 46, 121 changes to address, 47–48 choosing to change, 46 definition, 345–346 trial periods for changes, 48 incorrect identification of, 46 root causes and problems, 91
special, 45, 46, 121 choosing to change, 46 stabilizing a process, 47 c-Chart, 303 Champion definition, 345, 354 training, 73, 106, 107 Champion International, 200 Change antecedent conditions, 144–145 consequent conditions, 144–145 defined in outcomes, 36 difficulty of cultural, 235 evaluating effects of, 48–50 factors for, 266–267 fear of, 113 individual resistance to, 147 levels of, 143–144 managed, 235 not forced, 235 permanent behavior alteration, 143 personal, 235–236 process for, 234–235 reactions to, 22–23, 80 recognition of, 113 resistance, sources of, 147 social influence, 143 stages of individual, 144–145 strategies for, 266–267 strategy, 148 understanding of for team approaches, 233–234 Change agent, definition, 345 Changeover, definition, 345 Characteristics, definition, 345 Chartering of teams, 284–287 definition, 345 meeting to, 286 Charts used in problem solving, 303, see also Specific chart types Check phase of TQM, 43–46 Check sheet, 303 definition, 345, 355 Chi-square statistic, 327 Chronic waste, reduction of, 11 Churchill, Winston, 90 Coaching, 85–86, 293 definition, 345 men coaching women, 192 by mid-level leaders, 22 skills, 86–88 stagnation as a result of poor, 88–89 Collective decision, 271–272 Collective thought, 26 Commando team leadership, 293
Commitment definition, 279 as ownership of process, 265 Commitment, by top management, definition, 356 Common cause, see Causes, common Common causes, definition, 345–346 Communication, 283–284 attentiveness, 179 blockers, 180–181 body language, 188–189 communicating significance of issues, 183 and cultural diversity, 195 digression, 289–290 discounts, 289 effective, 178–179 emotional factors, 181 and employee morale, 177 environment, 190 eye contact, 189 gender themes, 190–194 importance of, 24 individual styles, 281 by leaders, 149 listening skills, 179 nonverbal language, 187–190 messages, 186 open, 24, 141–142 personal space, 188 plops, 289 between quality personnel and operators, 184–186 receiver factors, 179 relationship with sender, 178 response types, 180–181 sender factors, 177–178 relationship with receiver, 178 tangents, 289–290 in teams, 165, 204 training, 182–184 vocal dimensions, 189 Communications, in problem-solving, 339–340 Complaint resolution, 82 Compliance, 146, 182 with influence, 143, 144 Concern analysis, 304–305 Concurrent design, 29 Concurrent engineering, 58, 66 Conflict resolution definition, 346 in teams, 262 Conformance, definition, 346 Confucius, 141
Consensus decision making, 271–272 definition, 346 reaching, 273 recognizing, 273 Consequent conditions, 183 and change, 144–145 Constancy of purpose, definition, 346 Constraint, definition, 346 Constraint management, definition, 346 Construct, definition, 346 Consultative decision making, definition, 346 Consumer market customers, definition, 346 Consumer six sigma, 95 Consumer’s risk, definition, 346 Continual improvement, 9, 10, 20, 49, 209–210 commitment to, 79–81 use of disciplined methodology, 55–56 Continuous data, definition, 346 Continuous learning, 77 Continuous decision tools, fundamentals checklist, 326–327 Control, 337–338 of project, 128–129 Control charts, 303 cause identification using, 46 definition, 346 in process analysis, 44–45 Control (DMAIC phase), 315 definition, 346 fundamentals checklist, 331 Control of process, definition, 346 Control plan, definition, 347 COPQ (Cost of poor quality), 114 definition, 347 Core competency, definition, 347 Corporate culture, definition, 347 Correction vs. prevention, 11 Corrective action, definition, 347 Correlation chart, 303 Correlation coefficient, definition, 347 Correlation, definition, 347 Cost-benefit analysis, definition, 344 perspective and six sigma methodology, 114 Cost of poor quality (COPQ), 114 definition, 347 Cost of quality, 65–66, 304 Cost-reduction initiative, 92 Courage of leaders, 149 Cp, 95, 98, 324 Cpk, 95, 98, 324 CPM (Critical path method), definition, 347 Crawford slip method, definition, 347
Creativity in jobs, 140 in teams, 165–166 in upper management, 140 Criteria matrix, definition, 347 Criterion, definition, 347 Critical incident, definition, 347 Critical path definition, 347 method (CPM), definition, 347 Critical to process characteristic (CTP), 320 Critical to quality characteristic (CTQ), 320 Critical to quality (CTQ), 96 requirements, 96 Critical to satisfaction characteristic (CTS), 320 Cross-functional team, see also Teams definition, 347 Cross-functional thinking, 79 CSR (Customer service representative), definition, 347 CTP (Critical to process characteristic), 320 CTQ (critical to quality), 96, 324, 331 characteristics measurement, 121 CTQ (Critical to quality characteristic), 320 CTQ (Critical to quality) requirements, 96 CTS (Critical to satisfaction characteristic), 320 CTX (process) tree, 96–97, 320 CTY (product) tree, 96–97 Cultural change, 51, 52, 104 difficulty of, 235 to support teams, 234 Cultural diversity and communication, 195 Cultural variations in training approaches, 183, 187 Culture, definition, 347 Cummins Engine, 200 Cumulative sum control chart, definition, 347 Current reality tree, definition, 347 Customer alignment of expectations and requirements, 11 defines quality, 116 definition, 348 delighting, 91 focus, 6 loyalty, 201 requirements, 11 satisfaction, 6 and front-line workers, 22 satisfaction study, 13–14 voice of, see Voice of the customer (VOC) Customer driven, 81 Customer focus, 116–117, 317 Customer loyalty, 69–70
Customer needs, 117 project conformance to, 61, 62 Customer requirements, 58 definition, 348 determining, 117–118 Customers external, 116 internal, 116 types, 116 Customer service representative (CSR), definition, 347 Cycle time, definition, 348
D Data analysis, 124–125 fake, 14, 16–17, 42 types, 123 Data collection, 122–123, 321 baseline information, 41–42 on causes, 42 process, 122–123 simplicity in, 49 Data entry, 15 Data mining, 14–16, 42 Decentralized organization, 19 Decision collective, 271–272 optimum, 6 Decision making, 81–82 consensus, 271 styles, 273 without consensus, 274–275 Decision procedures in teams, 205 Decision tools, fundamentals checklist, 327–328 Defect, definition, 348 Defective, definition, 348 Defect opportunities, 320 Defect opportunity, definition, 348 Defects per million opportunities (DPMO), see DPMO (Defects per million opportunities) Defects per opportunity (DPO), 96 definition, 348 Defects per unit (DPU) goal, 324 Define (DMAIC phase), 114–120 definition, 348 fundamentals checklist, 319–321 outline, 313 Define-measure-analyze-improve-control (DMAIC) model, see DMAIC model Degrees of freedom, 327
Delegation, 337–338 Delighting the customer, 91 Deployment time-line, 74, 95 Design, as customer loyalty factor, 69 Design for six sigma (DFSS), definition, 348 Design issues, addressed by six sigma, 102 Design of experiments, 304 Designs, robust, 11, 12 Design tools, 330 DFSS (Design for six sigma), definition, 348 DFSS principles, fundamentals checklist, 331 Dialogue conditions for effective, 25 conducting, 26 definition, 23–24 environment for, 24 importance of, 25 process, 26–27 purpose of, 25 rules for, 26 use of facilitator, 26 Dialogue session, conducting, 26 Differences, valuing, 24 Digital Equipment, 200 Digressions, 289–290 Dimensions and terms, 333–334 Diogenes, 140 Discipline effective application to individuals, 88 signs of lack of in organization, 56 Disciplined methodology, 55–56 Discounts in communication, 289 Discrete data, definition, 348 Discrete decision tools, 327–328 Distribution diagram, 303 Diversity, 194–195 and communication issues, 187 DMAIC (Define-measure-analyze-improve-control), 97–98, see also Specific phases definition, 348 model, 71–73, 120–121 factors in, 72–73 outline, 313–315 variation of PDSA model, 72 Documentation, 128–129 of process improvements, 49 DOE/ANOVA method, 332 Dominating team members, 291–292 Do phase of TQM, 37–43 Downstream, definition, 348 DPMO (Defects per million opportunities), 96 definition, 348 DPMO report, 114 DPO (Defects per opportunity), definition, 348
DPU (Defects per unit) goal, 324 Driver, definition, 6 Dynamic statistics, 323
E Effectiveness definition, 348 measures, 121 Efficiency definition, 349 measures, 121 Eight sigma, 102 Empirical modeling tools, fundamentals checklist, 330 Employee development, 146–147, 149 Employee involvement, in teams, 255 Empowerment, 17–18, 54–55, 266 ingredients, 266 skills for, 246 of team members, 246 Engineering quality, 32 reliability, 32 Enthusiasm, 89–90 Environmental effects in communication, 190–194 Error propagation, 331 ESC (Executive steering committee), 47 Evolution of teams, 251–252 EWMA (Exponentially weighted moving average) chart, 332 Executive overview training, 107 Executive piracy, 136 Executive steering committee (ESC), 47 Expectations, 78–79 Experimental design, 328 parameter design, 66 tolerance design, 66 Experimentation, in teams, 165–166 Exponentially weighted moving average (EWMA) chart, 332 External failure, definition, 349 Eye contact, 189
F Facilitator for dialogue, 26 duties, 242–243 in meetings, 260–261 process facilitation skill, 238–239
on quality team, 263 questioning skills, 270–271, 272 role, 28 skills, 239, 240 in team development, 240 team, effective, 270 in teams, 293 use of, 24–25 Factorial experiment, 328, 329 Fads, see Short-term solutions Failure mode and effect analysis (FMEA), 304 Failure, when taking risk, 84 Fake data, 14, 16–17, 42 “Fallen stars”, 86 F distribution, 322 Feedback, 299 definition, 279 managing, 279–280 TIPS test, 87 Feuding between team members, 290 First-time yield (Y.ft), 323–324 Five sigma wall, 319 Flattening the organization, 233 Flexibility, 83–84 Floundering, 287–288 Flowchart, 303 definition, 352 Flow-chart method, 35 “as is”, 37, 38 Flow diagram, 61, 62 FMEA (Failure mode and effect analysis), 304 Focus, 10, 78 on action and results, 78 on bottom line results, 100 Force field analysis, definition, 349 Forces in the manager, 153–154 in the situation, 153, 155 in the subordinate, 153, 154 Ford Motor Company, 167, 200 “Forming” stage of team building, 209 Fractional-factorial experiment, 328, 329 Frame error, 14 Frequency plot, definition, 349 Friendship between genders, 193 Front-line workers, and customer satisfaction, 22 Full-factorial experiment, 328, 329 F value, 327
G Gainsharing, 66 Gaps, 120
Gauge R and R (Gauge reproducibility and repeatability), 304 Gender factors accountability, 193–194 in communication, 190–194 intimacy and sexuality, 192–193 power, 190–191 support, 191–192 friendship between genders, 193 reactions to power, 191 General Electric, 200 General Mills, 200 General Motors, 200 Goals, 10 (addressed by) TQM, 33–35 communicating, 20 driven by customer requirements, 60 for improvement, 59–60 measurable, 34 of project effort, 115 realistic, 59–60 relevant, 34 Goal statement, definition, 349 Gordian knot approach, 136, 141–142 Graphical presentations, 124–125 Green belt as coach, 85–86 definition, 349 “Griping and grasping” stage of team development, 207–208 “Grouping” stage of team development, 208 Groups, see also Teams development into teams, 158, 163, 170–171, 206–208 transition to teams, 174 Group-think, avoiding, 295–296 GROW, 85 GRPI model, 236–237 Guidance team improving process, 216 obligations of, 216 responsibilities during process improvement, 257–258 Gurus, 99
H Haig, Douglas, 137 Handoff, definition, 349 Heschel, Abraham, 9 Heteroscedasticity, 330 definition, 349 Hewlett Packard, 167 Hidden factory, 96 Hindsight analysis, 181
Histogram, 303 definition, 349 in process analysis, 44 Human resources, 6 Hypothesis statement, definition, 349
I Identification, 182 Identification with influencer, 143, 144 Image, professional, 256 Imagineering, 35 Implementation planning, 127–128 Improve (DMAIC phase), 314, 328–329 definition, 349 Improvement continual, 9, 20 evaluating, 64–66 goals, 59–60 level of change, 126 measurement in evaluating, 64 opportunities, 63 ranking, 63 plan, 203 problem indicators, 203 projects, 60–61 Improvement cycle, continuous, 49 Improvement methodologies, projects using, 61–64 Individual, relationship to organization, 218 Inference, Ladder of, 28 Influence, acceptance of, 144–145 Influencer, sources of power, 183 Information analysis, 6 baseline, 37 outcome, 36 output, 36 Initiative, in problem-solving, 340 Innovation encouraging, 17 importance of, 10 in leadership, 149 in problem-solving, 340 Input, definition, 349 Input measures, definition, 349 Inspections overuse, 35 by quality control, 29–30 Institutionalization, 128–129 definition, 349 “In-Sync-Erator”, 82 Integrity, 89–90 of leaders, 149
Interdependencies of teams, 158 Interfaces, managing, 29 Internal auditing, 31 Internalization, 144, 146, 182 Interpersonal communication in teams, 165 Interpersonal skills, 184 Intimacy and sexuality and communication issues, 192–193 Involvement levels, 279 Involvement of people, 54–55 ISO-9000, definition, 349–350
J Joint leadership, 167, 168 Joint performance, 56–57 Judgment behaviors, 335–336 Judgment sampling, definition, 350
K Kaizen approach, 255–256 Kaizen blitz, definition, 359 Kano model, 11, 116
L Ladder of Inference, 28 “Large rocks first” anecdote, 107–108 Leader/servant, 150, 153–154 Leadership, see also Forces, Management attributes, 149–151 “boss” approach, 294 business acumen, 151 characteristics, 20–21 communication by, 20, 150 continuum, 293–294 core values, 18–19 effectiveness, 148 elements, 148 employee development, 150 executive, 6 forces in the manager, 153 innovation, 151 joint, 167, 168 leader/servant, 150, 153–154 maintenance aspect, 151 mid-level, 22 outputs, 155–157 persistence, 150 personality traits, 148 principles, 19 in problem-solving, 338–339
as process, 148 quality methods understanding, 151 responsibilities, 154 senior, 21 service orientation, 150 situational leadership, 173–174 styles, 20, 145–146, 148, 151 effectiveness of, 157 continuum, 151–153 subordinates’ role in, 151 system thinking, 150 task aspect overemphasis, 151 teamwork, 150 values, 18–19 Learning continuous, 10, 77 organizational, 23–24 Levels of involvement, 279 Likert scale, 321 Lincoln, Abraham, 77 Listening active, 276 definition, 275–276 skills, 179 improving, 277–278 Loss function, 65–66, 100 Loss to society, 65–66 “Low fruit”, 78–79 savings, 74 LTV Steel, 200 Luck, bad, 46
M Machiavellian power, 136 Main-effect plot, 329 Malcolm Baldrige, 50–51 Management, 43–46, 50, 227 acceptance of cost and prolonged process, 50–51 in Act phase of TQM, 46–50 attitude, 29, 136, 141 as impediment to process improvement, 49 bean counters, 138 of change, 235 in Check phase of TQM, 43–46 commitment to team success, 228–229 creativity needed in upper levels, 138 duties for teams, 245 effectiveness, and communication, 177 by fact, 20 layers, 140 morale, 140
objectives shifting, 311 patience to see process through, 92 in Plan phase of TQM, 43–46 quality system responsibilities, 50–51 responsibilities, 140 role in team success, 226–228 service to subordinates, 137 short-term profits attitude, 136 style, 20 styles in teams, 293–294 as team champion, 227 team facilitation tools, 303 tools for facilitating teams, 303 Management-by-fact, definition, 350 Management, systems approach to, definition, 355 Market share, increasing, 102–103 Maslow's hierarchy of needs, 171, 172 Master black belt as coach, 85 definition, 350 training, 106 Matrix chart, 303 Matrix data analysis, 303 Matrix management, 253 Maturity stages, 146 Maximization model, 139–140 Mean time between failures (MTBF), 318 Measure (DMAIC phase), 313 definition, 350 fundamentals checklist, 321–323 Measurement, 13–14 consistency, 123 of process, 120–121 of progress, 20 for project implementation, 61, 62 scales, 321 Measurement error, 317, 322 Meeting process check, 278–279 Mental models, 28 Mentoring, see Coaching Methodology, structured, 29 Metrology, 31 “Middle stars”, 86 Mission of organization components, 58–59 Mission statement for teams, 284 unique to each team, 265 Modeling tools, fundamentals checklist, 330 Models, mental, 28 Moment of truth analysis, 125 definition, 350 Monitor, watchful, 18 Monte Carlo methodology, 330 Morale and communication, 177
Motorola, 95–96, 98–99, 104 Moving range chart, 303 MTBF (Mean time between failures), 318 Multiculturalism, 194–195 Multivoting, definition, 350 Myers-Briggs categories, 280, 282–283
N NGT (Nominal group technique), see Nominal group technique (NGT) Nominal group technique (NGT), 62, 63 Nonconformity, 138–139 Nonresponse error, 14 Nonvalue-adding activities, definition, 350 Nonverbal language, 187–190 messages, 186 Normalized yield, 96 Norming, 171 Null distribution, 326 Null hypothesis, 326
O Open door policy, 82 Operational definition, definition, 350 Opinions as facts, 288 Opportunities criteria for selecting, 224–225 quantifying, 125 Opportunities for defects, 320 Opportunity statement, 225 definition, 351 Optimization model, 139–140 Organization component's missions, 58–59 decentralized, 19 flattening, 233 relationship to individual, 218 Organizational cultures, 100 Organizational learning, 23–24 Organizational sensitivity, 336–337 Organizational values, 90 Organizations, traditional, definition, 357 Outcomes, evaluating effects of changes on, 48 Output definition, 350 measures, 121 Output measures, definition, 350 Overbearing team members, 290–291
P Paradigm change, 5 Paradigms, 233 Paralanguage, 189 Paraphrasing and active listening, 276 definition, 278 Pareto analysis, 124 of baseline data, 42–43 Pareto chart, 43, 303 definition, 350 Pareto principle, definition, 351 Participation, 24, 55 levels of, 293 in teams, 203–204 Participation philosophy, 199 Participative management, 135–136, 295 Parts-per-million tolerances, 12 Patience importance, 17 vs. rush to “do something”, 288–289 by senior management, 92 PAT (Process action team), 216 in act phase of TQM, 46–50 in check phase of TQM, 43–46 data collection function, 38–43 definition, 37, 217 role in improvement projects and plans, 61 Pay considerations, for SDWTs, 268 p-Chart, 303, 332 PDPC (Process decision program chart), 303 PDSA, 66 PDSA (Plan-Do-Study-Act) model, see Plan-Do-Study-Act PDSA (Plan-Do-Study (Check)-Act), definition, 351 People involvement, 54–55 People problems, 298–299 Percent defective, 12 Performance improvement cycle, chart, 61 Performance evaluations, 140 expectations, 87 Performance improvement, of teams, 256 Persistence, 10, 149 Personal change, 235–236 Personal needs and work needs, 89 Personal space factors in communication, 188 PERT (Program evaluation and review technique), 303 Pilot, definition, 351 Pilot program steps, 127
PIM (Project improvement methods), see Project improvement methods (PIM) Plan-Do-Study (Check)-Act (PDSA), definition, 351 Plan-Do-Study-(Check) and Act methodology, TQM-based, 33 Plan phase of TQM, issues addressed, 33–36 Plan-train-apply-review (PTAR), 321 Plops, 289 PM (Project management), see Project management (PM) PMS (Process management structure), 253 in work environment, 252 Pogo, 69, 141 Poisson distribution, 322, 332 Power base principle, 264 Power, displaying, 190 Pp, 324–325 Ppk, 95, 98, 324 Precision, definition, 351 Precontrol tools, 331 fundamentals checklist, 331 Preliminary plan, definition, 351 Presentation skills, 256 Prevention vs. correction, 11 Preventive maintenance, 137 Principle of complementarity (duality), 141 Prioritizing, 10 Problem defining, 300 forces in, 155 identification, 297–298 types, 297–298 Problem definition, 302–303 Problem identification, 302–303 Problem indicators, 202–203 Problems people-caused, 298–299 team approach to easy and major, 254–255 Problem solving, definition, 297 Problem-solving methodology, 29 analysis behaviors, 335 communications, 339–340 control, 337–338 decisiveness, 336 delegation, 337–338 effectiveness, 82–83 initiative, 340 innovativeness, 340 judgment behaviors, 335–336 leadership in, 338–339 and organizational sensitivity, 336–337 organizing, 340–341 planning, 340–341 planning by teams, 299
principles, 299–300 steps in, 301–302 team member development, 338 tools, 303 work standards for, 337 Problem-solving process, use of, 301–302 Problem statement, 115, 301 definition, 351 Process, 36, see also Causes analysis, 125 baseline information collecting, 41–42 capability, 123–124 consultants, 46 defining, 61, 62 departmentalizing, 35 facilitation, 238–239 flow-chart method, 35 focus, narrow, 35 improvement, 29 improvement sequence, 64 mapping, 35 measurement, 65 measures, identifying, 40–41 monitoring, 49, 129 not centered, 123 output baseline development, 37–38 stabilizing, 47 streamlining, 35 variables, 36, 39 categories of, 38–39 variation, 121 voice of, 123–124 Process action team (PAT), see PAT (Process action team) Process awareness by teams, 205 Process baselines, 320 Process capability, definition, 351 Process characterization, definition, 351 Process decision program chart (PDPC), 303 Process definition, 119 Process design, definition, 351 Process drift, 331 Process facilitation, definition, 240 Process flow, 34–35 Process flowchart, definition, 351 Process improvement definition, 351 documenting, 49 Process improvements, standardizing, 49 Process management definition, 351 evolutionary process, 251 Process management structure (PMS), in work environment, see PMS (Process management structure)
Process map, definition, 351 Process mapping, 118, 119–120, 320 Process measures, definition, 351 Process metrics, fundamentals checklist, 324–325 Process optimization, definition, 351 Process quality, improving, 64 Process redesign, definition, 351 Process shift, 331 Process team management, definition, 251 Procter & Gamble, 200 Producer's risk, definition, 352 Program evaluation and review technique (PERT), 303 Progress measures, 6 Project failure warning signs, 88–89 implementation plan example, 73–74 leader's duties, 241–242 lead person's responsibilities, 90 measurement, 65 scope, 115 Project champion, training, 73 Project improvement methods (PIM), 36 Project leader duties, 241–242 on quality team, 263–264 Project management (PM), areas of concern, 127 Project rationale, definition, 352 Projects, 320 control, 128–129 facilitator duties, 241–242 factors in, 109 failure to adhere to process, 312 generating solutions, 126 implementing, 61–64 management duties, 245 problem signs, 140 recorder's duties, 243–244 selecting solutions, 127 selection, 107–109 selection criteria, 110 signs of problems, 140 steps in, 110–111 timekeeper's duties, 244 Project selection, business case, 115 Proportion defective, definition, 352 PTAR (Plan-train-apply-review), 321
Q QA (Quality assurance), definition, 352 QC (Quality circle), definition, 217
QFD (Quality function deployment), 116 in analysis of problem, 304–306 applying to problem solving, 306–307 scoping of project using, 304–305 Quality as added task, 141 causes of, 37 components, 10–11 customer defined, 11, 116 definition, 9, 352 function of parameters, 69 measures, 12 misconceptions, 32–33 supplier, 32 Quality assurance, 31–32 Quality assurance (QA), definition, 352 Quality board, role in improvement projects and plans, 60 Quality boards, 56–57 Quality circle (QC), definition, 217 Quality circles, 66 Quality control, functions, 29–31 Quality council, 262 definition, 352 Quality department, functions, 32–33 Quality engineering, 32 Quality function deployment (QFD), 304, see QFD (Quality function deployment) Quality improvement cycle, 257 Quality improvement process, definition, 256–257 Quality initiatives, bogus, 9 Quality loss function, 65–66 Quality methods and leadership, 151 Quality officer, 262–263 Quality of work life (QWL), see QWL (Quality of work life) Quality personnel communication with operators, 184–186 submissive behavior undermining message, 185–187 Quality planning, strategic, 6 Quality professional opposition to use of, 23 role, 23, 24, 26 Quality system establishing, 50–58 steps in starting, 57 Quality team environment, focal participants, 262–263 Questioning skills, 270–271, 272
Quick fixes, see Short-term solutions QWL (Quality of work life), 136, 138
R Random sampling, definition, 352 R chart, 303, 332 Rebels, 139 Recognition, 84 phase of six sigma implementation, 113–114 Recognition generating repetition, 81 Recorder's duties, 243–244 Reengineering, 35, 304 definition, 352 Reliability engineering, 32 Reluctant team members, 292 Repeatability, definition, 352 Reproducibility, definition, 352 Resource allocation for teams, 286 Responsibilities, 90 Revision plan, definition, 352 Rework loop, definition, 352 Right the first time, definition, 352 Risk, alpha, 325–326 Risk analysis tools, fundamentals checklist, 331 Risk assessment, definition, 353 Risk, beta, 325–326 Risk equation, 264 Risk management, definition, 353 Risk, producer's, definition, 352 Risk taking, 84 encouraging, 17 Roadblocks, 254 Robust designs, 11, 12 Robust design tools, fundamentals checklist, 330 Robustness, definition, 353 Role definition in teams, 203–204 Role-playing, definition, 353 Rolled throughput-yield (Y.rt), 96, 323–324 definition, 353 Root cause, 297, 299 analysis, 125 Root cause analysis, definition, 353 Root sums of squares (RSS), 331 RSS (Root sums of squares), 331 Run chart, 48, 124–125, 303 definition, 353 in process analysis, 44, 45 Run, definition, 353
S Sampling, 13, 123 definition, 353 error, 14 formula, 14 guidelines, 124 sample size testing, 16–17 Sampling bias, definition, 353 Sampling, random, definition, 352 Sampling, stratified, definition, 354 Scales of measure, 321 Scatter diagrams, 125 in process analysis, 44, 45 Scatter plot, 303 definition, 353 Scientific approach used by teams, 206 Scope, definition, 353 SDWT (Self-directed work team), 217–218, see also Team building, Teams environment for, 229 implementation model, 232 pay considerations, 268 payoffs, 267–268 preparations for, 230–232 requirements for success, 229–230 SDWT (Self-directed work team), stages of, 232, 233 Self-development, 77 Self-directed work team (SDWT), see SDWT (Self-directed work team) Self-fulfilling prophecy, 78 Self-managed teams, see SDWT (Self-directed work team) Service industry as-is flowchart, 37, 38 baseline information, 38 contingency guidance, 119 Service quality, keys to, 103 Serving as a leader, 149 Seven sigma, 102 Seven-step sequence model, 50 Sexual harassment, 193 Sexuality and intimacy, in communication issues, 192–193 Shenandoah Life, 200 Shogun six sigma master, 106 Short-term profits attitude, 136, 137 Short-term solutions, 46, 56 Should-be process mapping, definition, 353 Sigma, and standard deviation, 95 Sigma scale of measure, 3 Sigma (σ), definition, 353–354 Significance testing, 16
Simulation tools, 325 Simultaneous engineering, 29 SIPOC definition, 354 model, 97, 118, 119–120 Situational leadership, 173–174 Six sigma definition, 354 goal, 3, 4 Six sigma methodology, see also Black belt, Green belt, Master black belt, Shogun six sigma master areas of application, 94 business metrics, 317 case proof of value, 200 and consumer six sigma, 95 core competencies outline, 317–323 cost/benefit perspective, 114 costs, 114 customer focus, 317 definition, 3 deployment, 321 deployment time-line, 74 elements, 75, 76–90 executive overview training, 107 as fad, 91 failures of application, 100 focus, 71 fundamentals checklist, 317–319 implementation plan example, 104 innovative implementation, 5 loyalty as focus, 71 measurement error, 322 model, 75, 76 for nonproduction functions, 94 and older methodologies, 99–100, 101–102 origin, 98–99 and other programs, 94 paradigm change, 5 as paradigm shift, 187 (possible) resource addition for, 93 speed of results, 92 static statistics, 322–323 statistical distributions, 322 team orientation, 83 timing of implementation, 113–114 and TQM, 94 Social influence, 143 Socialization, 191, 192 Solution-Plus-One Rule, 82 Solution statement, definition, 354 SPC (Statistical process control), 13, 66 definition, 354 in project control, 129 SPC chart, 332
SPC tools, fundamentals checklist, 332 Special causes, see Causes, special definition, 354 Specifications, accuracy, 11 Sponsor, definition, 354 Sponsor of quality team, 263 SSBB (Six sigma black belt), 321 SSC (Six sigma champion), 321 SSMBB (Six sigma master black belt), 321 Stacking of tolerances, 12 Stages of SDWT (Self-directed work team), 232, 233 Stagnation, 88–89 Standard deviation, and sigma, 95 Standardization, 182 Star catcher, 84 Static statistics, 322–323 Statistical distributions, 322 Statistical hypotheses, 325–326 Statistical process control (SPC), see SPC (Statistical process control) Statistician consultants, 46 consulting, 16 Statistics tools, 12 Steering group, role in improvement projects and plans, 60 Stereotypical characteristics, 280, 283 Storming, 174 “Storming” stage of team building, 209 Storyboard, definition, 354 Storyboarding, 308–309 definition, 354 Strategic architecture of teams, 201 Strategic fit review, definition, 354 Strategic planning, definition, 354 Strategic quality planning, 6 Stratification, definition, 354 Stratified sampling, definition, 354 Streamlining, 35 Structured methodology, 29 Style, 20 Submissive behavior by quality personnel, 185–187 Suboptimization, 35 “Super stars”, 86 Supervisors, role in teams, 246–247 Supplier, definition, 355 Supplier-organization-customer feedback loops, 59 Supplier quality, 32 Support, 191, 192 behaviors, 191 systems, 56–57
Survey instruments, construction and application, 307–308 Symptom, definition, 355 Systematic sampling, definition, 355 System, definition, 6, 355 Systemic thinking, 149 Systems approach to management, definition, 355 Systems, organizational, 5 Systems thinking, 79–80
T T-account approach, 28 Tacit knowledge, definition, 355 Tactical plans, definition, 355 Tactics, definition, 355 Taguchi loss function, definition, 355 Taguchi orthogonal arrays, 329 Takt time, definition, 355 Tally sheet, definition, 355 Tampering, definition, 355 Tangents, 289–290 Task leader, on quality team, 264 t distribution, 322 TDPU (Total defects per unit), 324 Teaching, see Coaching Team definition, 355 effectiveness, 155 orientation, 83 recognition, 84 Team-based structure, definition, 355 Team building, 158–161, 172–173, see also Teams concerns during different phases of, 210–212 conditions for, 209 definition, 355 development sequence, 209–210 facilitator use, 240 goals, 158 goals defined after team formation, 236–237 group development, 206–208 grouping stage, 208 GRPI model, 236–237 initial, 286–287 and leadership style, 172–173 norming stage, 171 obstructions, 159 phases of, 210–212 steps in, 160 storming stage, 209 Team concept, proof of value, 200 Team development, see Team building definition, 355
Team dynamics, definition, 355 Team environment 5s approach, 266–267 people handling, 280, 281 Team facilitation, definition, 356 Team implementation, 221–222, see also Teams criteria for opportunity selection, 224–225 eight-step model, 222–223 schedule, typical, 247–249 Team improvement, definition, 251 Team leader addressing floundering, 287–288 discounts, handling, 289 handling of unfocussed communication, 289–290 handling problem individuals, 291, 292 plops, handling, 289 styles, 292–293, 294–295 Team meetings facilitator role, 260 operating procedures, 284 planning, 259–260 rules for, 260 Team members attribution by, 291–292 characteristics, 261 development of, 338 dominating, 291–292 duties, 244–245 effective, 269 effectiveness factors, 265 feuding, 290 overbearing, 290–291 problem individuals, 290–292 reluctant, 292 role in team leadership, 264–265 selection of, 286 Team operating procedures, mechanics, 280–281, 283–284 Team performance evaluation, definition, 356 Team players, 168 Team recognition, definition, 356 Team rewards, definition, 356 Teams, 163, 173, see also Groups, Other team topics alignment of purpose and goals, 287 authority within, 246, 247 behavior problems, 204–205, 226 behaviors in successful, 215–216 behavior styles in, 269 beneficial behaviors, 204–205 benefits of, 267 building trust in, building, 166–168 bullseye model, 236 championed by management, 227
characteristics of best and worst, 212, 214–215 chartering process, 284–287 commitment to, 265 communication issues, 165, 204 conflict management, 170–171 conflict resolution, 164, 170–171, 262 consensus failure and floundering, 287–288 creativity in, 165–166 criteria for creating, 268 critique of, 166 cultural changes in support of, 234 decision making in, 165, 205, 271, 274–275 definition, 158, 199 development process stages, 170 distrust of, 139 dynamics, 226 dysfunctional activity, 287 effectiveness, 164, 166, 253 reduction of, 254 critique, 163–166, 169 measurement, 258 questionnaire, 169–170 eight-step model, 222–223 emotional reactions, 5 employee involvement, 255 empowerment of team, 266 of members, 246 evaluation of, 166 evolution of teams, 251–252 expectations of, 212, 213 experimentation in, 165–166 failure of, 5 fear of, 135 feuding in, 290 floundering, 287–288 forming stage, 209 functioning of, 162–163 goal-setting process model, 237–238 ground rules, 205–206 group-think, avoiding, 295–296 guidance team responsibilities, 257–258 implementation facilitation, 237–238 implementation schedule, typical, 247–249 influence in, 207–208 as innovative approach, 5 interdependencies of, 158 interpersonal communication in, 165 kaizen approach, 255–256 leader's duties, 241–242 maintenance functions in, 164 management commitment, 227–228, 228–229 management role in success, 226–228 management's duties, 245
for managing interfaces, 29 and Maslow's hierarchy of needs, 172 member's duties, 244–245 member selection, 286 mission statement, 265, 284 morale, 173 Myers-Briggs categories of members, 282–283 need for, 199–200 opinions as facts, 288 organizational expectations, 135 orientation, 207 ownership of process, 265 participation, 203–204 participation levels, 293 patience vs. rush to “do something”, 288–289 performance improvement, 253–254, 256 phases of, 210–212 post-objective motivation, 269 power in, 207–208 problems, 245, 261 addressing “easy”, 254 indicators of, 202–203 major, 254–255 problem-solving in, 165 process awareness, 206 productive, 161 reasons for, 201–202, 221–222 recognition, 209 recorder's duties, 243–244 resource allocation, 286 rewards, 209 role definition, 203 problem indicators, 203–204 roles in, 115, 242 shared goals, 163–164 single purpose, 168–169 size of, 209 strategic architecture, 200–201 successful, 202–203 and supervisors, 246–247 termination, 174 tools, 304 transition from groups, 174 trust within, 164 unproductive, 161–162 unsuccessful, causes of, 311 Team success, built on individual achievement, 311 Teamwork and leaders, 149 Technical knowledge, 77 Technical training plans and six sigma, 92 Tektronix, 200 Terms and dimensions, 333–334 Texas Instruments, 104
TGR (Things gone right) evaluation, 83 TGW (Things gone wrong) evaluation, 83 Theory of constraints (TOC), definition, 356 Theory of knowledge, definition, 356 Theory X, definition, 356 Theory-X manager, 145–146 Theory Y, definition, 356 Theory-Y manager, 145–146 Theory Z, definition, 356 Thing gone right (TGR) evaluation, 83 Things gone wrong (TGW) evaluation, 83 Throughput time, definition, 356 Throughput yield (Y.tp), 323–324 Timekeeper's duties, 244 Time line, deployment, 74 Time plot, definition, 353 Time pressure, 155 Time schedules, unrealistic, 311 TIPS test, 87 TOC (Theory of constraints), definition, 356 Tolerance design (Taguchi), definition, 356 Tolerances definition, 356 stacking, 12 Tolerance tools, fundamentals checklist, 330 Tolerancing, 330 Tools for teams, 304 Tools, statistical, 330 Top-management commitment, definition, 356 Total defects per unit (TDPU), 324 Total productive maintenance (TPM), definition, 357 Total quality management, see TQM (Total quality management) Total quality management (TQM), definition, 357 TPM (Total productive maintenance), definition, 357 TQM (Total quality management), 33 definition, 357 Traceability, definition, 357 Trade-off experiments, 71 Traditional organizations, definition, 357 Training, 57–58, see also Employee development but not allowing application, 141 cultural variations, 183, 187 definition, 357 importance of proper, 137–138 inadequate, 312 in interpersonal skills, 182–184 role of trainer, 172–173 Training evaluation, definition, 357 Training needs assessment, definition, 357 Transactional leadership, definition, 358 Transformational leadership, definition, 358 Transition tree, definition, 358
Tree diagram, definition, 358 Trend analysis, definition, 358 Trend, definition, 358 Trial periods, determination of, 47 Trust, importance, 17 t test, 327 definition, 355 Type I error, definition, 358 Type II error, definition, 358 Type two errors, 16
U UAW, 167 Unit sizes, 139–140 Upstream, definition, 358
V Value-added analysis, 125 Value, as loyalty driver, 70–71 Value-enabling activities, definition, 358 Value to customer, 102–103 Variability reduction programs, 66 Variables, 319 Variation definition, 358 eliminating, 3 identifying sources of, 45–46 measurement, 121 reduction, 12 reduction methods, 12–13 sources, 12 types, 121–122 Vertical functionality, 253 Vocal dimension effects in communication, 189 Voice of the customer (VOC), 117 analysis, 117–118 definition, 358 and QFD, 306 Voice of the process (VOP), 123–124
W “Walk the talk”, 3, 18, 79 Waste items categorized, 12 reduction, 11 Watchful monitor, 18 Weber, Max, 140 WIP (Work in progress), 318 Worker obsolescence, 202 Work in progress (WIP), 318
Work process, see Process Work safari, 78 Work standards in problem-solving, 337
X X-bar chart, 303, 332 Xerox, 200 X (variable), definition, 358
Y Y.ft (First-time yield), 323–324 Yield, definition, 359
Y.rt (Rolled-throughput yield), 323–324 definition, 353 Y.tp (Throughput yield), 323–324 Y (variable), definition, 358–359
Z Zero defects, definition, 359 Zero investment improvement, definition, 359 Z.gap, 331 Z.lt, 95, 324 Z.shift (dynamic and static), 324 Z.st, 95, 324 Z transform, 322 Z value, 322, 323, 326
Index for Volume II
A Addition decimals, 201–204 fractions, 161–174 whole numbers, 123–128 Affinity diagrams, 59 Application criteria, for global problem solving (GPS), 74–75, 78 Area metric–English conversions, 317–320 S.I. measurements, 290–292 Arrow diagrams, 59 Assessment (problem description), 333–340
B Band-Aid™ fixes, 91 Behaviorism, 8
C Capability indices, 18–19 Cause-and-effect, correlation versus, 18 Cause-and-effect (fishbone) diagrams, 15, 57 Cause selection, 15 Celsius temperature scale, 296–297 Centigrade temperature scale, 296–297 Centimeter–inch conversions, 307–310 Chain of causality, 69–71 Champions, 79, 86, 100 Change situations, 75 Charts control, 16–17, 57 flow, 55–56
Gantt, 111 matrix, 59 matrix data analysis, 59 Pareto, 18, 57 process decision program, 59 Check sheets, 17, 55–56 Chronic versus sporadic problems, 34–35 Common tasks, for global problem solving (GPS), 75, 82, 85, 90, 93, 99, 103 Concern analysis report, 67–69 Continual improvement concept, 33–34 Control charts, 16–17, 57 Coping nonproductive, 3 problem-solving, 3 Core capabilities, 56–57 Correlation versus cause/effect, 18 Cover sheet, for global problem solving (GPS) process, 327 Cross-functional participation, 96 Cube numbers, 218–220 Cube roots, 220–226 Cubic meter (volume), 292–293 Customer identification, 48 Customer requirements, 49
D DaimlerChrysler 7-step cycle, 44 Data gathering, 14–25, 83 potential-cause selection, 15 preventing recurrence, 15–22 protocol, 14–15 stimulus passage, 14 team approach, 22–25 verbalization or “thinking aloud,” 14
Decimal point placement, 240–247 Decimals addition, 201–204, 259–264 applied problems, 212 division, 206–212, 270–271 multiplication, 204–206, 268–270 numbers greater than one, 253–256 numbers less than one, 257–259 ordering, rounding, and changing, 191–199 subtraction, 201–202, 265–268 Deming, W. Edwards, 38 Density function, Weibull, 22 Department improvement teams (DITs), 42 Department of Defense (DoD) 6-step cycle, 44 Design for six sigma (DFSS) approach, 107–115, see also Six sigma (DFSS) approach Design of experiments (DOE), 19–20, 71–72 Diagrams affinity, 59 arrow, 59 cause-and-effect (fishbone), 15, 57 flow, 16 Pareto, 18, 57 process flow, 16 relationship, 59 scatter, 18, 57 tree, 59 Distribution, Weibull, 22 Division decimals, 270–271 fractions, 179–185 whole numbers, 131–140 DMAIC model, 110–112, 113 DOE (design of experiments) process, 19–20, 71–72
E 8D methodology, 4, see also Global Problem Solving (GPS) Emergency response actions (ERAs), 329–330 Employee involvement, 22–25, 41–53, see also Team approach English–metric conversions, 305–324, see also Metric–English conversions Environment (problem-solving), 12–13 Escape point identification, 88–92 Experiments, design of (DOE), 19–20, 71–72
F Facilitator, 80–81 Factoring, 245–247
Fahrenheit temperature scale, 296, 297–298 Failure mode and effect analysis (FMEA), 20–21 Fear, 38 Fishbone (cause-and-effect) diagrams, 15, 57 5W2H approach, 22–23, 25, 107 Floor (root cause) level, 4, see also Root cause Flow charts, 55–56 Flow diagram, 16 FMEA (failure mode and effect analysis), 20–21 Ford Motor Company, 4, 44 global (8D) problem solving process, 61–104, see also Global problem solving (GPS) process Fractions addition and subtraction, 161–174 multiplication and division, 179–185 parts and types of, 145–148 simplest form and common denominators, 149–159 Full time equivalent (FTE) resources, 110–112 Functional analysis/allocation (FA), 108
G Gage studies, 18 Gantt chart, 111 General Motors (GM) 4-step cycle, 43 Gestalt approach, 6–8 Global problem solving (GPS) process, 61–104 application criteria, 74–75 assessment (problem description), 333–340 change and never-been-there situations, 75 common tasks, 75 concern analysis report, 67–69 cover sheet for, 327 do’s and do not’s, 65–67 emergency response actions (ERAs), 329–330 general overview, 61–65 implementation/validation assessment questions, 353–354 individual and team recognition assessment questions, 357–358 interim containment action (ICA) assessment questions, 341–342 permanent corrective actions (PCAs) assessment questions, 351–352 recurrence prevention assessment questions, 355–356 root cause and escape point assessment questions, 343–349 root cause issues, 69–71 steps, 75–104 1: establish team/process flow, 78–81
2: describe problem, 82–84 3: develop interim containment actions (ICAs), 85–87 4: define and verify root cause and escape point, 88–92 5: choose and verify permanent corrective actions (PCAs), 93–95 6: implement and validate permanent corrective actions (PCAs), 95–98 7: prevent recurrence, 98–102 8: recognize team and individual contributions, 102–104 team/process flow assessment, 331–332 verification, 71–74 Gram/kilogram, 294–296 Graphs, 17–18
H Head-hunting, 38 Histograms, 17, 57
I Illumination step, 5 Implementation, of permanent corrective actions (PCAs), 95–98 Implementation/validation assessment questions, 353–354 Importance, ignorance of, 37 Improvement, continual, 33–34 Inch–centimeter conversions, 307–310 Incubation step, 5 Indices, capability, 18–19 Initial project charter (IPC), 109–110 Insight, 6 Interim containment actions (ICAs), 85–87, 96 assessment questions, 341–342 International system of units (S.I. units), 289–304 cubic meter, 292–293 kelvin, 296–300 kilogram, 294–296 meter, 290–292 technical (derived) units, 302–304 Is/is not analysis, 20
K Kelvin (temperature), 296–300 Kilogram/gram (weight), 294–296
L Length (linear) measurements English–metric conversions, 307–310, 316–317 metric, 280–281 S.I., 290–292 Liquid, metric measurements, 285–286 Liter–quart/pint conversions, 312–313
M Management in creating problem-solving climate, 92 demonstration of interest by, 92 fear and self-protection in, 38 head-hunting by, 38 quarterly earnings emphasis of, 38 in setting priorities, 92 Management systems, 16 Mass (weight) measurements metric, 281–283 S.I., 294–296 Mathematics, see also Measurements and individual subtopics decimals addition, 201–204, 259–264 applied problems, 212 division, 206–212, 270–271 multiplication, 204–206, 268–270 numbers greater than one, 253–256 numbers less than one, 257–259 ordering, rounding, and changing, 191–199 subtraction, 201–204, 265–268 fractions addition and subtraction, 161–174 multiplication and division, 179–185 parts and types of, 145–148 simplest form and common denominators, 149–159 proportion, 240–241 scientific notation and powers of 10, 245–251 decimal point placement, 240–247 factoring, 245–247 numbers greater than one, 245–247 numbers less than one, 240–251 square and cube numbers, 218–220 square and cube roots, 220–226 square root applications, 231–234 square root calculation, 225–229 whole numbers addition and subtraction, 123–128 multiplication and division, 131–140 value of, 119–121
Matrix charts, 59 Matrix data analysis charts, 59 Measurement cubic meter (volume), 292–293 definitions and principles, 289–290 English–metric conversions, 305–324, see also Metric–English conversions area units, 317–320 cumulative exercises, 323–324 inches to centimeters, 307–310 length units, 316–317 quarts and pints to liters, 312–316 review test, 305–307 volume units, 320–323 yards to meters, 310–312 international system of (S.I.), 289–304 cubic meter, 292–293 kelvin, 296–300 kilogram, 294–296 meter, 290–292 technical (derived) units, 302–304 kelvin (temperature), 296–300 kilogram/gram (weight), 294–296 meter (length and area), 290–292 metric system, 273–288, see also Metric system process, 49–50 technical (derived) units, 302–304 Meter, see also Metric system cubic, 292–293 as S.I. unit, 290–292 Meter–yard conversions, 310–312 Metric–English conversions, 305–324 area units, 317–320 inches to centimeters, 307–310 length units, 316–317 quarts and pints to liters, 312–316 review test, 305–307 volume units, 320–323 yards to meters, 310–312 Metric system, 273–288 common linear measures, 280–281 common weight (mass) measures, 281–283 conversion of measures within, 275–280 Multiplication decimals, 204–206, 268–270 fractions, 179–185 whole numbers, 131–140
N Never-been-there situations, 75 Nonproductive coping, 3
O Ordering, rounding, and changing, 191–199 Output identification, 47–48 Ownership, lack of, 37
P Pareto charts, 57 Pareto diagram, 18 Permanent corrective actions (PCAs) assessment questions, 351–352 choosing and verifying, 93–95 implementation and validation, 95–98 Plots, 17–18 stem and leaf, 17 Powers of ten, 245–251 Preparation step, 5 Problem (task) definition, 11, 83 Problem description, 82–84, 333–340 Problems chronic versus sporadic, 34–35 six key ingredients for correction of, 39 three typical responses to, 35–37 Problem situation, 11, 12 Problem solving basic model, 5–9 as compared with process improvement, 52 data gathering for, 14–25, see also Data gathering definition of, 10–11 design for six sigma (DFSS) approach, 107–115, see also Six sigma (DFSS) approach generalized stages of, 7–8 global (Ford Motor Company, 8D) approach, 61–104, see also Global problem solving (GPS) process key elements in, 33–39 nine common roadblocks to effective, 37–38 quality tools, 55–59 sample for, 25–26 steps of, 5 strategies for, 3 terminology of, 9–25, see also Data gathering; Terminology theoretical aspects, 5–26, see also Theoretical aspects Problem-solving behavior (operation, strategy), defined, 13–14 Problem-solving process, 13 Problem-solving subject, 11
Problem statement, 44 Process capability, 50 Process decision program charts, 59 Process flow diagram, 16 Process guidelines, 81, 83–84, 87, 91–92, 94–95, 100–102 Process improvement, as compared with problem solving, 52 Process-improvement cycle, 46–52, see also Team approach Process improvement teams (PITs), 42 Process measurements, 49–50 Process output, 49 Product (solution) defined, 13 Product specifications, 49 Proportion, 240–241 Protocol, 14–15 Purpose statements, 84, 90, 93, 95, 99
Q Quality tools, 55–59 application of, 57–58 inventory of, 55 management, 59 seven basic, 55–57 Quarterly earnings emphasis, 38 Quart/pint–liter conversions, 312–313
R Réaumur temperature scale, 296 Recognition, 65, 102–104 assessment questions, 357–358 lack of, 37 Recorder, 80 Recurrence prevention, 15–22, 98–102 assessment questions, 355–356 Recycling of process, 51–52 Relationship diagrams, 59 Reliability at 85% confidence, 72–73 Repeatability and reproducibility (R&R), 111–112 Requirement analysis (RA), 108 Response surface methodology (RSM), 114 Risk assessment, 108 Risk handling, 108 Risk monitoring, 108 Risk planning, 107–108 Root cause and escape point, 4, 69–71 assessment questions, 343–349 identification, 88–92
S Sample, importance of, 25–26 Sample size, 72–73 Scatter diagrams, 18, 57 Scientific notation and powers of ten, 245–251 decimal point placement, 240–247 factoring, 245–247 numbers greater than one, 245–247 numbers less than one, 240–251 Self-protective approaches, 38 Sensory input, 5–6 Six sigma (DFSS) approach, 107–115 design process, 113–115 overview, 107–109 week 1: structuring: goals, objective, and scope, 109–110 week 2: structuring: product-based estimating, 110–112 week 3: controlling the project, 112–113 Snapshot verification, 72–73 Solution (product), 13 SPC verification, 73–74 Sporadic versus chronic problems, 34–35 Square and cube numbers, 218–220 Square and cube roots, 220–226 Square root applications, 231–234 Square root calculation, 225–229 Stem and leaf plots, 17 Stimulus passage, 14 Strategies nonproductive coping, 3 problem solving, 3 reference to others, 4 Subject (problem solver), 12 Subtraction decimals, 201–204 fractions, 161–174 whole numbers, 123–128 Success factors, 92 Synthesis, 108 System analysis and control (SA), 108 Système Internationale (S.I.) units, 289–304, see also International system of units (S.I.)
T Task (problem) defined, 11 Team(s) local (department-improvement) and cross-functional (process improvement), 42 required attributes of, 43 TEAM acronym, 42
Team approach, 22–25, 41–53 general guidelines for, 43 importance of, 41–42 problem-solving models for, 43–46 Team leader, 80 Team members, 80 Team/process flow, 78–81, 331–332 Technical (derived) S.I. units, 302–304 Temperature scales of measurement, 296 S.I. measurements, 296–300 Terminology, 9–14 behavior, operation, or strategy, 13–14 of data gathering, 14–25, see also Data gathering environment, 12–13 problem or task, 11–12 problem situation, 12 problem solving, 10–11 process, 13 product or solution, 13 subject, 12 Theoretical aspects, 5–26 Things gone wrong/right (TGW/TGR), 112 “Thinking aloud” (verbalization), 14 Time, lack of, 37 Tree diagrams, 59
V Validation, of permanent corrective actions (PCAs), 95–98 Verbalization (“thinking aloud”), 14 Verification, 5, 71–74, 108 elements of, 71–72 snapshot, 72–73 SPC chart subgroup and sample size, 73–74 Volume metric–English conversions, 320–323 metric measurements, 285–286 S.I. measurements, 292–293
W Weibull analysis, 21–22 Weight (mass) measurements metric, 281–283 S.I., 294–296 5W2H approach, 22–23, 25, 107 Whole numbers addition and subtraction, 123–128 multiplication and division, 131–140 value of, 119–121 Work process identification, 49 Work process improvement, 50–51
X Xerox (6-step) system, 43
Y Yard–meter conversions, 310–312
Index for Volume III
A
Absolute fit, 160 Adjoint of a matrix, 294 Adjusted goodness-of-fit index, 163 Akaike information criterion, 165 Alternative hypothesis, 57 Analysis ANOVA, see Analysis of variance (ANOVA) classification, 109, 155–156, 266–267 cluster, 109, 155–156, 266–267 confirmatory factor, 157 conjoint, 153–154 covariance structure, 157 discriminant, 105–106, 129–136, 152–153 factor, 107–109, 143–144, 155 latent variable, 157–158 logit, 129–131 MANOVA, see Multivariate analysis of variance (MANOVA) MDA, 129, 132, 136 multiple regression, see Multiple regression analysis multivariate, 104, 137–138 primary, 10 secondary, 10 Analysis of variance (ANOVA) assumptions for, 68 between-groups variability, 69, 150 commands, in software, 70–72 definition of, 67, 150 F ratio, 69, 150–151 heteroscedasticity impact on, 141 MANOVA, see Multivariate analysis of variance (MANOVA) one-way, 67, 70 for regression, 97 vs. SSCP, 132 within-groups variability, 68, 150 And set, 197, 201–202 ANOVA, see Analysis of variance (ANOVA) Arithmetic mean, see Mean Autocorrelation, 170–172, 179 Average, 172–173, 189–190; see also Mean
B
Bar chart, 22, 27–28 Basic variables, 299 Bayes’ rule, 209, 213 Bell curve, 41–50, 253–262 Bernoulli trials, 303–307, 309–310 Beta coefficients, 146 Between-groups variability, 69, 132–133, 150, 155 Bimodal distribution, 30 Binomial distribution, 267–272 in Bernoulli trials, 305 binomial expansion, 232–234 complementary events, 213–214 vs. hypergeometric distribution, 274–275 normal approximation of, 262–265 vs. Poisson distribution, 280–282 Binomial test, 111–112 Bivariate correlations, 84 Blind experiments, 12–13 Boxplot, 39, 141 Box’s M test, 142
C Canonical correlation, 154, 158 Cases concordant, 77–78 definition of, 18 discordant, 78 tied, 78–79, 120 valid, 22 Causal models, 176–180 CDF, see Cumulative distribution function (CDF) Cells, 34, 236–237 Central Limit Theorem (CLT), 45, 47, 266–267 Central tendency, 30, 185–186 Centroid, 130 CFI, 164 Characteristic root, 134 Characteristic vector, 134 Charts, see Plots
Chi-square fit measure, 112–114, 160–161 in hypothesis-testing process, 65 likelihood-ratio, 160–161 for measures of association, 73–78, 81 in Monte Carlo simulation, 327 noncentrality measure, 161–162 normed, 165 sample size, 161 Classification analysis, 109, 155–156, 266–267 CLT, 45, 47, 266–267 Cluster analysis, 109, 155–156, 266–267 Cochran Q test, 115 Coding schemes, 16–24 Coefficients beta, 146 contingency, 75 correlation, see Correlation eta, 80 normalized, 75 Pearson’s r, 79–80, 84–88, 126–127 phi, 75 Spearman rank, 127–128 uncertainty, 80 Coincident indicator, 178–179 Combinations, 226, 230 Comparative fit index (CFI), 164 Complementary events, 213–214 Complementary set, 198 Concordant cases, 77–78 Conditional probability, 209–210 Confidence interval as cumulative probability, 249 definition of, 48–49 in regression, 96 size of, 51 Confirmatory factor analysis, 157 Conformability, 290 Conjoint analysis, 153–154 Constant-elasticity multiplicative model, 178 Contingency coefficient, 75 Contingency table, 113 Continuity correction, 264–265 Continuous distribution, 247 Continuous probability, 193 Continuous random variables, 245–247 Control group, 13 Control variable, 83 Corner point, 299–300 Correlation assumptions for, 87 bivariate, 84 canonical, 154, 158 vs. chi-square test, 114 vs. covariance, 83–84
cross-validation index for fit, 163 definition of, 84–87 example of, 292–293 Galton’s rank order, 127 for linear dependence, 293 for measurement error check, 98 of multiple coefficients, 88–89 one-tailed tests, 87 Pearson’s r, 79–80, 84–88, 126–127 in regression, 91–97 RMSR for fit, 162 significance level, 87 spurious, 180 techniques for, 125–126 two-tailed tests, 87 Correlogram, 172 Counting rules, 225–226 Covariance Box’s M test, 142 cross-validation index for fit, 163 definition of, 83–84 RMSR for fit, 162 Covariance structure analysis, 157 Cramer’s V, 75 Cross-classification table, 34–35, 73–74 Cross-tabulation table, 34–35, 73–74 Cross-validation index, 162–163 Cumulative distribution function (CDF) vs. cumulative frequency function, 190–191 definition of, 191–192 discrete, 240–243 in Kolmogorov-Smirnov test, 116 for normal distribution, 254–259 of random variables, 245–251 Cumulative frequency function, 190–191
D Data analysis of, 3 coding of, 16–24 collection of, 17–18 definition of, 3 distribution of, 28–29 entry of, 18 examination of, 137 interval, 24, 27, 125 missing, 19 nominal, 23, 25–26 plots of, see Plots quantification of, 17, 23 ratio, 24, 27 seasonality of, 173–174
De Morgan’s laws of complements, 198 Degrees of freedom (df) of cross-tabulation, 65 for McNemar test, 115 in structural models, 160 for T distribution, 58 in variance calculation, 33 Dependent variable, 35, 92–93 Determinant, 291–293 Deviation of means, 193 Diagonal matrix, 286–287 Difference set, 199 Differential equation, 256 Dimensional scaling, 156–157 Discordant cases, 78 Discrete cumulative distribution, 240–243 Discrete probability distribution, 238–239, 267–274; see also Probability density function (PDF) Discriminant analysis, 105–106, 129–136, 152–153 Disjointed set, 199, 203 Dispersion, 193 Distribution bimodal, 30 binomial, see Binomial distribution CDF, see Cumulative distribution function (CDF) continuous, 247 of correlations, 87 of data, 28–30 discrete cumulative, 240–243 discrete probability, 238–239, 267–274 F statistic, 137, 152 hypergeometric, 272–276 of means, 45–50, 55, 62 negative skew, 29, 44, 139 normal, 41–50, 253–262 Poisson, 267–268, 279–282 positive skew, 29, 44, 139 of responses, 37 sampling, 38, 94 SND, 257–262, 266–267 T, 57–58, 137 uniform, 250–253 Double blind studies, 13 Duncan’s multiple range test, 72 Durbin-Watson statistic, 179
E Econometric models, 176–180 ECVI, 162–163 Effect size, 64 Eigenvalues, 133–134 Elements, 195 Error measurement, 98, 159 PRE, 75–79, 81 RMSEA, 162 standard, see Standard error type 1, see Type 1 error type 2, 58 variance of, 150 Eta coefficients, 80 Event based dependence, plot of, 71 complementary, 213–214 independent, 209–212 mutually exclusive, 207 simple or elementary, 200 Expected cross-validation index (ECVI), 162–163 Expected frequencies, 65 Expected value, 244–245, 318 Experiments, 10, 63 Exponential smoothing, 173–174 Exponential trend, 169–170 Extrapolation methods, 169–179 Extreme outliers, 39
F F statistic test for ANOVA, 71 definition of, 69, 150–151 distribution requirement, 137 for MANOVA, 152 for regression, 97 Scheffe’s test, 125 vs. T test, 58, 97 using Wilks’ lambda, 135 Factor, 153 Factor analysis, 107–109, 143–144, 155 Feasibility region, 297–301 Fisher exact probability test, 113 Fit, 96–98, 112–114, 159–165 Fixed format, 18 Forecasting, 169–180 Forms, 15–18 Formulas, 183–194 Frequency table, 21, 34 Friedman test, 120–121
G Galton’s rank order correlation, 127
Game strategies, 315–318 Goodman and Kruskal’s Gamma, 78–79 Goodness-of-fit, 96–98, 112–114, 159–165
H Heteroscedasticity, 71, 141–142 Histogram, 27–30, 71, 138 Holt’s method, 173–175 Homoscedasticity, 140–142 Hotelling’s T2, 151–152 Hypergeometric distribution, 272–276 Hypotheses alternative vs. null, 57 ANOVA, 72, 151 definition of, 52 MANOVA, 151 measures of association, 80 null, 55, 57 regression lines, 95 testing of, 53–55, 61–63, 65
I Identification number, 17 Identity matrix, 287 IFI, 164 Increment, 27 Incremental fit index (IFI), 164 Independent event, 209–212 Independent variable, 35 Index adjusted goodness-of-fit, 163 comparative fit, 164 cross-validation, 163 expected cross-validation, 162–163 of goodness-of-fit, 162–163 incremental fit, 164 nonnormed fit, 163 normed fit, 163–164 parsimonious goodness-of-fit, 165 parsimonious normed fit, 164–165 relative fit, 164 Tucker-Lewis, 163 Indicators, 178–179 Interaction, 70 Intercept, 92–96 Interdependence, 157 Interquartile range (IQR), 39 Interval data, 24, 27, 125 Intervening variable, 57 IQR, 39
J Joint probability, 209–212 Judgment sample, 8
K Kendall’s Tau-b, 79 Kolmogorov-Smirnov test, 115–116, 139 Kruskal-Wallis test, 119–120 Kurtosis, 44, 138–139
L Lagrange multipliers, 319–323 Lambda in chi-square test, see Chi-square Lagrange multiplier, 319–323 Wilks, 134–136, 293 Latent variable analysis, 157–158 Leading indicator, 178–179 Leading tail interval, 248–249 Learning effect on experiments, 63 Least significant difference, 72 Least squares, 91–92, 179 Level, 153, 173 Levene test, 141–142 Likelihood-ratio chi-square statistic, 160–161 Limiting transition matrix, 313–314 Linear combination, 105 Linear dependence, 293 Linearity, 81–83, 103, 142 LISREL analysis, 157–158 Log-linear models, 106–107 Logistic regression, 129–131 Logit analysis, 129–131
M Manifest variables, 158 Mann-Whitney U test, 116–117 Mapping, 156–157 Marginal totals, 34 Markov chains, 309–314 Matrices adjoint of, 294 algebra for, 285–296 diagonal, 286–287 identity, 287 limiting transition, 313–314 nonsingular, 293 singular, 293, 295
step transition, 310–314 symmetric, 286 Matrix algebra, 285–296 Maximum specification, 27 McNemar test, 114–115 MDA, 129, 132, 136 MDS, 156–157 Mean central tendency measure, 30–31 definition of, 30, 35 distribution of means, 45–50 formulas for, 189–190 of frequency grouped data, 188 location, in boxplot, 39 of means, 266 in normal distribution, 41 of PDF, 188–190 of a population, 38 in random variable range, 244–245 of residuals, 100 standard error of, 46–48 statistical formulas for, 184–185 Mean square, 97 Measurement error, 98, 159 Measures of association, 74–81, 134–135 Median, 26, 29–31, 185 Mild outliers, 39 Minimum specification, 27 Mode, 26, 29–31, 185 Modeling causal models, 176–180 constant-elasticity multiplicative model, 178 curvilinear, 145 econometric, 176–180 log-linear, 106–107 null, 163 SEM, 157–158 structural, 160 Modified least significant difference, 72 Monte Carlo simulation, 325–328 Moving averages, 172–173 Multicollinearity, 146–148 Multidimensional scaling (MDS), 156–157 Multiple comparison procedures, 70 Multiple discriminant analysis (MDA), 129, 132, 136; see also Discriminant analysis Multiple linear regression, 104–105 Multiple regression analysis vs. canonical correlation, 154 commands, in software, 98–99 definition of, 144–148 vs. discriminant analysis, 130, 136 Multivariate analysis, 104, 137–138 Multivariate analysis of variance (MANOVA) vs. ANOVA, 151
assumption testing, 137 definition of, 109, 148 vs. discriminant analysis, 130–131, 152–153 heteroscedasticity impact on, 141 Hotelling’s T2, 150–151 vs. SEM, 158 Wilks’ lambda test, 136 Mutually exclusive events, 207
N NCP, 161–162 Negative relationship of variables, 77–78, 83–85 Negative skew distribution, 29, 44, 139 NFI, 163–164 NNFI, 163 Nominal data, 23, 25–26 Noncentrality parameter (NCP), 161–162 Nonlinearity, plot of, 71 Nonnormed fit index (NNFI), 163 Nonsingular matrix, 293 Normality, 137–140 normal distribution, 41–50, 253–262 plots of, 29, 71, 138 of residuals, 102–103 SND, 257–262, 266–267 Normalized coefficients, 75 Normed fit index (NFI), 163–164 Null hypothesis, 55, 57, 151 Null model, 163 Null plot, 71 Null set, 195 Numerical taxonomy, see Cluster analysis
O Objective dimensions, 157 Observations, 103–104 Observed frequencies, 65 Observed significance level, 55 One-tailed test, 53, 60–61, 87 One-way analysis of variance, 67, 70 Open-ended questions, 16 Optimization, 315–316 Or set, 197, 202 Ordinal data, 23, 26, 77, 81 Origin, 96, 299 Outliers, 31, 39, 100–102
P Paired experimental designs, 63
Pairwise deletion, 88 Parallel system, 197, 217–219 Parameter, 38 Parsimonious goodness-of-fit index (PGFI), 165 Parsimonious normed fit index (PNFI), 164–165 Pascal’s triangle, 233 PDF, see Probability density function (PDF) Pearson’s r coefficient, 79–80, 84–88, 126–127 Perceived dimensions, 157 Percentages, 21–22, 38, 43, 52 Percentiles, 30 Perceptual mapping, 156–157 Permutations, 226–229 PGFI, 165 Phi coefficient, 75 Placebo, 12, 60 Plots, 186–187 bar charts, 22, 27–28 boxplot, 39, 141 correlogram, 172 of event-based dependence, 71 histogram, 27–30, 71, 138 for measures of association, 81–83 nonlinearity, 71 null, 71 probability, 138–139 of residuals, 71 scatterplot, 83–84, 126 Time-based dependence, 71 PNFI, 164–165 Poisson distribution, 267–268, 279–282 Pooled within-groups, see Within-groups Population, 6, 62–63, 94 Positive relationship of variables, 77–78, 83–85 Positive skew distribution, 29, 44, 139 Power of a test, 64 PRE, 75–79, 81 Primary analysis, 10 Principal components, 158 Principal diagonal, 286 Probability concepts of, 203–223, 276–277 conditional, 209–210 continuous, 193 cumulative, 249 discrete distribution, 238–239, 267–274 of exceeding threshold, 192 Fisher exact probability test, 113 joint, 209–212 PDF, see Probability density function (PDF) plot of, 138–139 total, 206–207 transition, 310–314
Probability density function (PDF), 187; see also Confidence interval discrete probability distribution, 238–239 expected values and, 194 mean of, 188–190 for normal distribution, 254–257 with random variables, 242–253 Probability plot, 138–139 Product rule for series, 214 Proportional reduction in error (PRE), 75–79, 81
Q Q analysis, see Cluster analysis Q test, 115 Questionnaire, 15–18
R R test, 79–80, 84–88, 126–127 Rack and stack, 185 Random sample, 8, 10–12 Random variable, 235–282 Randomized strategies, 317–318 Range, 32, 39 Ratio data, 24, 27 Regression, 91–109 vs. chi-square test, 114 coefficient composition, 159 curvilinear modeling of, 145 vs. discriminant analysis, 130–131 estimation of, by SEM, 158 exponential trend, 169–170 logistic, 129–131 multiple variables, see Multiple regression plots of, 82 Relationships, 81–83 Relative fit index (RFI), 164 Reliability, predictor, 159, 216, 219–223 Reports, statistical, 329–330 Residuals autocorrection of, 179 definition of, 71, 99 in regression, 97, 99–104 standardization of, 100–101 RFI, 164 Rho test, see Spearman rank coefficient RMSEA, 162 RMSR, 162 Root mean square error of approximation (RMSEA), 162 Root mean square residual (RMSR), 162
Run, 18 Running average, 189–190
S Sample definition of, 8 judgment, 8 random, 8, 10–12, 236 size of, 64–65, 123–124 Sample space, 199–200 Sampling, 226 Sampling distribution, 38, 94 Scalar, 288 Scaled noncentrality parameter (SNCP), 161 Scatterplot, 83–84, 126 Scheffe’s test, 72, 125 Seasonality, 173–174 Secondary analysis, 10 SEM, 157–158 Sequence tree diagram, 223 Series system, 197, 214–217 Set theory, 195–223 Sets and, 197, 201–202 complementary, 198 difference, 199 disjointed, 199, 203 null set, 195 or, 197, 202 subsets, 196 universal, 195–196 Shapiro-Wilks test, 139 Sign test, 117–118 Significance level, see Observed significance level Simplex method, 297–301 Simulated survey, 37 Single blind studies, 13 Singular matrix, 293, 295 Singularity, 147 Six sigma, 44 Skewness, 139 Slack variables, 299 Slope, 92–96 SNCP, 161 SND, 257–262, 266–267 Somers’ d, 79 Spatial map, 156 Spearman rank coefficient, 127–128 SPSS software, 44, 77 Spurious correlation, 180 SSCP, 132–133, 290–291 Standard deviation, 33, 194
Standard error of autocorrelation, 172 of the difference, 52–54, 149 of the mean, 46–48 in regression, 95–98 Standard score, 43–44 Standardized normal distribution (SND), 257–262, 266–267 Standardized values definition of, 52 of random variables, 248, 256–257 for regression coefficients, 146 Statistical Process Control, 44–45 Statistical reports, 329–330 Statistics, 38 Step transition matrix, 310–314 Stimulus, 153 Stirling’s approximation to n!, 225 Strategies, 315–318 Structural equation modeling (SEM), 157–158 Student-Newman-Keul’s test, 72 Studies, 9 Subjective dimensions, 157 Subsets, 196 Sums of squares of column vector, 288 definition of, 97 SSCP, 132–133, 290–291 for univariate analysis, 134–135 Sums of squares and cross products (SSCP), 132–133, 290–291 Survey, 9 Survey, simulated, 37 Symmetric matrix, 286 System-missing value, 19
T T distribution, 57–58 T test definition of, 121–125, 148–150 distribution requirement, 137 vs. F test, 97 interpretation of, 58–59 vs. Mann-Whitney U test, 117 vs. MANOVA, 151–152 vs. multiple comparison procedures, 70 in regression, 96 variance estimate, 58 Tables contingency, 113 cross-classification, 34–35, 73–74 cross-tabulation, 34–35, 73–74 frequency, 21, 34
Tau-b, 79 Tau-c, 79 Tests binomial, 111–112 Box’s M test, 142 Duncan’s multiple range, 72 F statistic, 152 Fisher exact probability, 113 Friedman, 120–121 Kolmogorov-Smirnov, 115–116, 139 Kruskal-Wallis, 119–120 Levene, 141–142 Mann-Whitney U, 116–117 McNemar, 114–115 measures of association, see Measures of association one-tailed, 53, 87 Pearson’s r, 79–80, 84–88, 126–127 power of, 64 of proportion, 111–112 Q, 115 rho, 127–128 Scheffe, 72, 125 Shapiro-Wilks, 139 sign, 117–118 Student-Newman-Keul’s, 72 T, see T test two-tailed, see Two-tailed test Wilcoxon signed-ranks, 118–119 Threshold, 190–192 Tied cases, 78–79, 120 Time-based dependence, plot of, 71 Tolerance, 148 Total probability, 206–207 Trailing tail interval, 249 Transformations, 102–103, 144–146, 267 Transition probability, 310–314 Transpose, 286 Treatment, 149, 153 Tucker-Lewis index, 163 Tukey, 72 Two-tailed test chi-square, 112 Cochran Q test, 115 for correlation coefficients, 87 definition of, 53 McNemar test, 115 vs. one-tailed test, 60–61 Type 1 error, 58, 64, 149–150, 152 Type 2 error, 58 Typology construction, see Cluster analysis
U Uncertainty coefficient, 80 Uniform distribution, 250–253 Universal set, 195–196 Unreliability, 216, 219–223 User-missing data, 19 Utility, 153
V Valid cases, 22 Variables basic, 299 continuous random, 245–247 control, 83 definition of, 23–24 independent, 35 interval, 24, 27, 125 intervening, 57 latent, 157–158 manifest, 158 negative relationship of, 77–78, 83–85 nominal, 23, 25–26 ordinal, 23, 26, 77, 81 positive relationship of, 77–78, 83–85 random, 235–282 ratio, 24, 27 relationship of, 81 selection methods, 105 slack, 299 in statistical formulas, 183–194, 235–282 transformation of, 102–103 Variance analysis of, see Analysis of variance (ANOVA) definition of, 32–33 inflation factor, 148 multivariate analysis of, see Multivariate analysis of variance (MANOVA) statistical formulas for, 184, 193 Vectors, 134, 286, 288 Volunteers, 9
W Wilcoxon signed-ranks test, 118–119 Wilks’ lambda, 134–136, 293 Winter’s method, 173, 176 Within-groups, 68, 132–133, 150, 155
Index for Volume IV
A Accuracy of measurement systems, 326–328 Activities prioritizing of, 32 streamlining of, 33 Advanced product quality planning, 33 Affinity diagrams, 22–24 Analysis of variance definition of, 40 measurement error analysis, 355–356 output table, 41 problem solving applications of, 40 purpose of, 40 R chart vs., 211 sum of squares, 41 tolerance design using, 40–43 xbar chart vs., 211 Anderson-Darling test statistic description of, 92 normality testing, 92–94 Appraisers, 335 Area under curve, for normal distribution, 74, 79, 426–429 Attribute data charts for, 191–196, 276, 459 description of, 117, 155 sample frequency for, 277 short-run control charts for, 289–291, 294–295 Attribute gage study, 342 Attribute-type control charts, 125–126
B Bell curve, 54 Benchmarking, 37–40 Bias of measurement systems, 328 Bimodal curve, 371 Box plots, 59–60 Brainstorming affinity diagrams for, 22–24 description of, 18–20 example of, 23–24
C Capability machine analysis of forms for, 244–248 results, 249–253 setup verification, 250 definition of, 235 Minitab testing batch files, 392–393 check, 390 description of, 388 setting up program, 389 SQC loading, 391–392 typical session, 393–395 short-term, 235, 326 process analysis of data collection sheet, 460 exponential distribution for, 305–307 non-normal distribution for, 300–305 sheet for, 461 control charts and specifications, 235 definition of, 231, 235 description of, 104–105, 107, 128–129 determining of, 234–235 illustration of, 318 indices, 253, 257–261 machine acceptance process, 377–382 normal distribution, 462 ongoing, 253, 257 overview of, 231–234 preliminary, 253, 257 process control and, 231–234 short-run statistical process control, 297 6-sigma, 263–264 skewed distribution, 463 statistical control and, 234–235 studies of control limit method, 236–243 purpose of, 231 results, 249–253 trending processes and, 309–315 Categorical distributions, 50 Cause and effect diagrams, 20–22
c chart constructing of, 166–171 control limits, 167–168, 171 process center line, 166–167 process control, 168–171 sampling plan for, 166 uses of, 166 Centerline, 106 Central limit theorem, 80, 113 Charts, see Control charts; Variable charts Chi-square distribution, 431–432 Chi-square goodness-of-fit test applications of, 85–86 Cochran’s procedure for, 87–91 description of, 85 exponentiality testing, 90–91 normality testing, 88–90 standard type of, 86–87 Class boundaries, 53 Class frequencies, 53 Class intervals, 53 Class limits, 53 Class marks, 53 Coding, 269 Common variation control charts for detecting cause of, 106–107 definition of, 100–102 description of, 214–215 trending processes, 308 Constant errors, 330 Contingency tables, 210–211 Continual improvement ongoing nature of, 10 reason for, 11 statistical process control for, 5–6 Continuous probability distribution, 411 Control charts attribute-type, 125–126, 458 c chart constructing of, 166–171 control limits, 167–168, 171 process center line, 166–167 process control, 168–171 sampling plan for, 166 uses of, 166 centerline of, 106 components of, 105–106 construction of, 420 contingency tables vs., 210–211 control limits, 105–106 cumulative summation, 150–153 data attribute-type, 127 collection of, 126–127 control limits calculated from, 127
description of, 117 summarization of, 51 variable-type, 126 definition of, 99, 129 description of, 4–5, 103 development of data, 125–127 description of, 123–124 normalization of process, 124 overview of, 124 process capability, 128–129 process improvement, 129 process variation interpreted for stability and control, 127–128 forms, 129–131 funnel experiment description of, 108 purpose of, 108 rules for, 108–111 geometric moving average, 205–207 goals of, 106–107 history of, 213 hypothesis testing and, 222–223 level of, 106 log sheet, 459 measurement stability and, 344–345 moving-average, 204 multivariate, 210 natural process limits, 106 normal distribution and, relationship between, 113–114 NP chart construction of, 162–165 control limits, 163–165 description of, 162 overview of, 117 P charts advantages and disadvantages of, 155 constructing of control limits, 158–159, 162 plotting of proportions, 158 process center line, 156–157, 159 process control analysis, 159–161 proportion of nonconforming parts, 156 sample plan, 156 standard deviation calculations, 157–158 variation, 162 vertical scale, 158 process control, 159–161 PRE-control, 207–210 preliminary issues regarding, 122–123 process capability assessed using, 104–105, 107, 235
process improvements designed from, 107 purpose of description of, 51, 99, 104 stating of, 124 quality characteristics, 210 red bead experiment, 111–113 sampling considerations for, 118–119 frequency of sample, 119–121 plan for, 118 rational samples, 118–119 rational subsamples, 119 short-run, 269, 294–295 sloping center lines, 148, 150 statistical alternatives, 210–211 tool wear, 201 trending processes, 307 types of, 125 u chart constructing of, 172–175 uses of, 172 variable, 125–126, 420, 457 variation causes detected using common, 106–107 description of, 104 mathematical theorems used, 113–114 weighted average, 204–205 Control limits c chart, 167–168, 171 description of, 105–106 NP chart, 163–165 points beyond, 215–217 process capability studies using, 236–243 process data for calculating, 127 R chart, 134–135 recalculating of, 295 u chart, 174 Xbar chart, 134–135 Cost of quality, 30 Could cost, 34 c plot point, 289–290 Cumulative frequency distribution, 56 Cumulative summation control charts, 150–153 Customer "insight" of, 6 requirements of, 6 satisfaction of, 9
D Data attribute charts for, 191–196, 276, 459 description of, 117, 155
sample frequency for, 277 short-run control charts for, 289–291, 294–295 characterization of, 4 collection of, 50–51 non-normal, 299 organizing of, 50–51 plotting on normal probability paper, 372 reorganizing of, 49 summarization techniques for categorical distributions, 50 characteristics of, 51 frequency distribution, 49, 52 histogram, 52–54 numerical distributions, 49, 52 overview of, 49–50 qualitative distributions, 50 quantitative distributions, 49 variable control charts for, 420 description of, 117, 155, 276 SQC session, 395–397 % Defective, 440–453 Defective products, 232 Defects chart, 175–177 Deming’s management principles, 423–424 Descriptive statistics measures of central tendency mean, 65–66 median, 66–67 mode, 67 measures of dispersion description of, 68 mean interpretation using, 68 range, 68–69 standard deviation, 69–72 overview of, 65 Design of experiments, 46–47 Difference control charts description of, 196–197 lot plot method, 197–198 Distance tests, 91–94 Distribution bimodal, 371 categorical, 50 chi-square, 431–432 continuous probability, 411 cumulative frequency, 56 density function of, 411 discrete probability, 411 f, 433–434 frequency cumulative, 56 definition of, 49, 52 example of, 53
gamma, 413–414 gamma1, 436 gamma2, 437 mode of, 302 non-normal, 300–305 normal area under normal curve, 74, 79, 426–429 characteristics of, 73–75 control charts and, relationship between, 113–114 curvature of, 73–74 description of, 73 mean of, 73 moment tests for, 94–97 overview of, 73 standard deviation, 69 standardized values, 75–78 normal probability function, 370 numerical, 49, 52 Poisson, 412 probability, 78 qualitative, 50 quantitative, 49 skewed capability sheet for, 463 description of, 300–301, 303 symmetric, 302–303 t, 430 Dot plots, 58 DPMO, 440–453
E Edlin software, 399–400 Evolutionary operation, 211 Experiments design of, 46–47 funnel description of, 108 purpose of, 108 rules for, 108–111 Exponential distribution analyzing of, 305–307 process capability analysis using, 305–307 values, 435 Exponentiality testing chi-square goodness-of-fit test for, 90–91 Shapiro-Wilk W test for, 83–85 tests, 421 Exponentially weighted moving average chart, 205–207 Extreme outliers, 60
F False alarms, 222–223 f distribution, 433–434 Five whys, 35–36 Force field analysis, 35, 37 Frequency distribution cumulative, 56 definition of, 49, 52 example of, 53 Frequency polygon, 57 Funnel experiment description of, 108 purpose of, 108 rules for, 108–111
G Gamma distribution, 413–414 Gauge capability, 319–320, 331 Gauge repeatability and reproducibility study, 333 Geometric moving average control chart, 205–207 Global problem solving, 43–44 Goodness-of-fit tests chi-square applications of, 85–86 Cochran’s procedure for, 87–91 description of, 85 exponentiality testing, 90–91 normality testing, 88–90 standard type of, 86–87 Cochran’s procedure for, 87–91 linearity assumptions, 328 types of, 81 Graphical presentations box plots, 59–60 description of, 56 dot plots, 58 frequency polygon, 57 histogram construction of, 54–57 definition of, 52 elements of, 52–53 illustration of, 54 reasons for using, 54 shape of, 54 Ogive curve, 57 scatter diagrams construction of, 60–61 description of, 60 reading of, 62–64 shapes of, 62–64 stem and leaf displays, 58–59
H Histogram construction of, 54–57, 301 definition of, 52, 73 elements of, 52–53 illustration of, 54 reasons for using, 54 shape of, 54 Hotelling statistic, 210 Hypothesis testing, 222–223
I In-control process, 99–102 Input–output analysis, 27–29 Inspections, 31
J Just-in-time inventory systems, 265
K Kolmogorov-Smirnov test, 92 Kurtosis index, 95–96
L Leptokurtic, 95 Linearity, 328–330 Lot plot method, for difference control charts, 197–198 Lower control limit, 103 Lower specification limit, 304, 306
M Machine acceptance process description of, 363 Edlin software, 399–400 machine warmup, 373–377 Minitab testing of capability batch files, 392–393 check, 390 description of, 388 output file storing and naming, 400–402 setting up program, 389 SQC loading, 391–392 typical session, 393–395 output file storing and naming, 400–402
overview of, 363–365 phase I, 373–377 phase II, 377–382 phase III, 382–387 process potential study design characterizations for, 365–366 distributions, 368, 370 normal probability paper, 366–372 process stability and capability, 377–382 SQC software data, 397–399 loading of, 391–392 output file storing and naming, 400–402 variable data, 395–397 supplier grade sheet, 363 suppliers’ previous studies, 373–377 variation, 365 yes/no for acceptance, 382–387 Machine capability analysis of forms for, 244–248 results, 249–253 setup verification, 250 definition of, 235 Minitab testing batch files, 392–393 check, 390 description of, 388 setting up program, 389 SQC loading, 391–392 typical session, 393–395 short-term, 235, 326 Management commitment to statistical process control, 12 role of, 111 MaxSPA lines, 310–311 Mean calculating of, 66, 71–72, 337 definition of, 65–66 measures of dispersion for interpreting, 68 Measurement errors analysis of analysis of variance, 355–356 attribute gage study, 342 computer software for, 357–360 data collection, 333–341, 346 description of, 333 findings, 342–344, 356–357 short-method R&R study, 341–342 statistical methods and, 345–357 causes of, 318 concepts regarding, 319–321 constant, 330 equipment-related, 318 gauge capability, 319–320, 331
indicators of, 321 part size and, 352 standard deviation of, 331 terminology associated with, 321–332 variable, 330 Measurement systems accuracy, 326–328 assessment of, 320–321 bias, 328 capability in, 321, 348 components of, 318 control in, 321 description of, 317 discrimination of, 322 environment, 319 equipment for, 318–319 errors, see Measurement errors linearity of, 328–330 overview of, 317–319 precision of, 322–323, 327 process nature of, 320 P/T ratio, 331–332, 354 repeatability, 323–324 reproducibility, 323–325 sensitivity of, 322 stability control charts and, 344–345 description of, 326 studies of, 321–322 true value of, 322 uniformity of, 322 variation description of, 317, 319 special, 320–321 Measures of central tendency mean, 65–66 median, 66–67 mode, 67 Measures of dispersion description of, 68 mean interpretation using, 68 range, 68–69 standard deviation, 69–72 Median, 66–67 Median and range chart advantages and disadvantages of, 179–180 attribute data, 191–196 constructing of control limits, 181–182, 191 median calculations, 180, 182 process center line, 180–182 process control analysis, 182–190 range calculations, 180, 182 recording of measurements, 180 sampling frequency and size, 180
variation, 191 vertical scale, 182 Mesokurtic, 95 Minitab testing, for machine acceptance process batch files, 392–393 check, 390 description of, 388 setting up program, 389 SQC loading, 391–392 typical session, 393–395 MinSPA lines, 310–311 Mirror image technique, 301–305 Mode, 67 Modified controlled charts, 198–200 Moment tests, 94–97 Motorola’s 6 Sigma capability, 263–264 description of, 261–263 Moving average and moving range chart description of, 204 nominal, 287–288 short-run, 289 target, 288 Moving range chart, 145–147 Multivariate control charts, 210
N Narrow-limit gaging, 209 Natural process limits, 106 Nominal group technique, 29 Nonconformance, 31 Nonconforming units, 276, 295 Nonmanufacturing metrics, 406 overview of, 405 process improvement, 410 reaction vs. project planning, 407 statistical process control criteria for success, 405–407 implementation strategy, 408–409 statistical inventory control, 410–417 tools, 409 Normal distribution area under normal curve, 74, 79, 426–429 capability sheet, 462 characteristics of, 73–75 control charts and, relationship between, 113–114 curvature of, 73–74 description of, 73 mean of, 73 moment tests for, 94–97 overview of, 73
standard deviation, 69 standardized values, 75–78 Normality Anderson-Darling test for, 92–94 chi-square goodness-of-fit test for, 86–90 Shapiro-Wilk W test for, 83 testing for, 421 Normal probability paper, 366–372 NP chart construction of, 162–165 control limits, 163–165 description of, 162 np plot point, 289 Numerical distributions, 49, 52
O Ogive curve, 57 Out-of-control conditions distribution for, 299–300 not capable process and, 231–232 short-run charts for, 291–292 special variation cycle of points, 219 definition of, 215 description of, 215, 224–225 points beyond control limits, 215–217 run of seven points, 217 trend of seven points, 217–219 unusual variation, 219–221 unknown capable process and, 232, 234
P Pareto diagram, 24–25, 51 Part number control, 292 p charts advantages and disadvantages of, 155 constructing of control limits, 158–159, 162 plotting of proportions, 158 process center line, 156–157, 159 process control analysis, 159–161 proportion of nonconforming parts, 156 sample plan, 156 standard deviation calculations, 157–158 variation, 162 vertical scale, 158 process control, 159–161 Plan–do–study–act model, 7 Platykurtic, 95 Poisson distribution, 412 Pooling, 42
p plot point, 289 Precision error, 330 Precision of measurement systems, 322–323, 327 PRE-control, 207–210 Probability distributions description of, 78 types of, 78 Probability plotting, 82 Problem solving assumptions for, 44–45 barriers to, 47–48 continual nature of, 44 global, 43–44 overview of, 17 steps involved in, 45–46 teamwork approach concept of, 17–18 discipline necessary for, 17 employee role and responsibilities, 18 tools and techniques for activity streamlining, 33 affinity diagrams, 22–24 analysis of variance, 40 benchmarking, 37–40 brainstorming, 18–20 cause and effect diagrams, 20–22 cost of quality, 30 could cost, 34 design of experiments, 46–47 five whys, 35–36 force field analysis, 35, 37 input–output analysis, 27–29 nominal group technique, 29 Pareto diagram, 24–25 product nonconformance reduction, 30–31 quality function deployment, 29–30 regression analysis, 46 reliability and maintainability, 34–35 time management, 31–32 value engineering, 34 work flow analysis, 25–27 Process in-control, 99–103, 128 out-of-control, 102–104 questions for analyzing, 123 stability of, 299 statistical control of, 99–101 trending of, 307–315 variation of, 127–128 Process capability analysis of data collection sheet, 460 exponential distribution for, 305–307 non-normal distribution for, 300–305
sheet for, 461 control charts and specifications, 235 definition of, 231, 235 description of, 104–105, 107, 128–129 determining of, 234–235 illustration of, 318 indices, 253, 257–261 machine acceptance process, 377–382 normal distribution, 462 ongoing, 253, 257 overview of, 231–234 preliminary, 253, 257 process control and, 231–234 short-run statistical process control, 297 6-sigma, 263–264 skewed distribution, 463 statistical control and, 234–235 studies of control limit method, 236–243 purpose of, 231 results, 249–253 trending processes and, 309–315 Process control capability and, 231–234 control chart application to, 104–105 ideal, 16 ownership role and responsibilities, 122 product control vs., 13 quality characteristics used, 210–211 requirements for, 101 short-run charts interpreted for, 292 statistical, see Statistical process control Xbar chart, 136–137, 141 Process potential study, for machine acceptance process design characterizations for, 365–366 distributions, 368, 370 normal probability paper, 366–372 Process stream effect, 371 Process teams, 14 Product control, 3, 13 Productivity, quality vs., 12 Product nonconformance reduction, 30–31 P/T ratio, 331–332, 354
Q Qualitative distributions, 50 Quality characteristics of, 210 cost of quality program, 30 defining of, 13 improvement ongoing nature of, 10
productivity vs., 12 statistical process control for, 5–6 Quality function deployment, 29–30 Quality loss function, 40 Quantitative distributions, 49
R Randomness, 439 R and Xbar chart advantages and disadvantages of, 133 causes that affect, 216, 218–220 construction of, 133–145 control limits, 134–135 illustration of, 237 moving range chart vs., 145–146 nominal, 278–279 process control, 136–137 sampling frequency and size, 133–134 short-run, 281–282, 293 short-run statistical process control, 278–279 target, 280–281 vertical scale, 136 Range, 68–69, 337 Rational samples, 118–119 Rational subsamples, 119 Red bead experiment, 111–113 Regression analysis, 46 Regression tests Shapiro-Wilk W test for exponentiality, 83–85 for normality, 83 underlying distribution detection using, 82–83 Reliability and maintainability description of, 34–35 problem solving using, 34–35 Repeatability errors definition of, 323–324 spread caused by, 339 standard deviation calculations, 351, 354 tolerance caused by, 339–340 variation caused by, 338, 352 Reproducibility errors description of, 323–325 spread caused by, 339 tolerance caused by, 340 variation caused by, 338–339 Risk reduction, 33 Robust, 6 Runs statistical analysis of, 225–227 test of, 221–222
S Sample size guidelines for, 438 median and range chart, 180 Xbar and R chart, 133–134 Scatter diagrams construction of, 60–61 description of, 60, 274 reading of, 62–64 shapes of, 62–64 s chart, 148 Sensitivity, 322 Shapiro-Wilk W test for exponentiality, 83–85 for normality, 83 Shewhart charts, 265 Short-run statistical process control attempts at, 266–267 capability measurements, 297 charts attribute data, 289–291 description of, 277 interpretation of, 291–295 nominal MA and MR chart, 287–288 nominal X and MR chart, 284–285 nominal Xbar and R, 278–279 out-of-control conditions, 291–292 short-run MA and MR chart, 289 short-run X and MR chart, 286–287 short-run Xbar and R, 281–282, 294 short-run Xbar and s chart, 282–283 standardized Xbar and s chart, 283–284 target MA and MR chart, 288 target X and MR chart, 285–286 target Xbar and R, 280–281 coded data, 268–272 control limits, 295 data collection sheet, 268 definition of, 266 overview of, 265–266 prior control charts for, 272 sampling for, 276–277 statistical values, 273 target values, 295–297 target values for, 272–276 traditional statistical process control applied to, 265–266 6 Sigma capability, 263–264 description of, 261–263 service organizations, 417 Skewed distribution, 300–301 Skewness calculation of, 96
definition of, 95 Special variation description of, 102–103 in measurement system, 320 signals that indicate cycle of points, 219 description of, 215, 224–225 points beyond control limits, 215–217 run of seven points, 217 runs test for, 221–222 trend of seven points, 217–219 unusual variation, 219–221 trending processes, 308–309 S-shaped plot, 371 Stability of measurement systems, 326 Standard deviation calculating of, 70–72 definition of, 69 estimating of, 236, 238 measurement errors, 331 repeatability errors, 351, 354 Standardized ratio, 306 Standardized values, 75–78 Statistical data analysis, 376–377 Statistical inventory control, 410–417 Statistical process control applications of, 1 characteristics of, 7–8 charts used in, 2 commitment required for, 10 control in, 213 definition of, 1, 44 description of, 417 effectiveness of, 12 elements of, 2–3 emphasis of, 9 factors that affect, 10–11 failure of, 10–11 flow process control form, 456 implementation of, 12–16, 44–46 individual responsibilities for, 3–4 managerial commitment, 12 mathematical theorems used in, 113–114 model of, 7–8, 13–15 ongoing nature of, 1–2 overview of, 7–11 pilot phase of, 14–15 problems associated with, 15–16 process teams for, 14 purpose of, 2, 17, 44 quality and improvement benefits, 5–6 requirements for, 16 short-run, see Short-run statistical process control steering committee for, 13–14
Steering committee, 13–14 Stem and leaf displays, 58–59 Summary statistics, 348–351 Sum of squares, 41 Supplier grade sheet, 363 Supply chain management, 34 Symmetric distribution, 302–303 Synergy, 18 System errors causes of, 9 description of, 8–9
T Tampering description of, 108 funnel experiment to demonstrate description of, 108 purpose of, 108 rules for, 108–111 t distribution, 430 Teamwork brainstorming, 18–20 concept of, 17–18 discipline necessary for, 17 employee role and responsibilities, 18 Templates, 33 3-Sigma, 258 Time management of, 31–32 prioritizing of activities, 32 wasting of, 32 Tolerance repeatability errors, 339–340 reproducibility errors, 340 Tolerance design analysis of variance for, 40 steps for conducting, 40–43 Tool wear analysis, 200–204 Trending processes common variation, 308 description of, 307–308 process capability and, 309–315 special variation, 308–309 Trend of seven points analysis of, 200–204, 227–228 description of, 217–219 True value, 322
U U chart constructing of, 172–175
uses of, 172 Underlying distributions analysis of, 78–81 methods of testing for chi-square goodness-of-fit test, 85–91 distance tests, 91–94 goodness of fit tests, 81 probability plotting, 82 regression tests, 82–85 Shapiro-Wilk W tests, 83–85 Uniformity, 322 u plot point, 290–291 Upper control limit, 103 Upper specification limit, 304, 366–367
V Value engineering, 34 Variable charts control chart, 457 description of, 125–126 long run use of, 268 moving range chart, 145–147 overview of, 213 R chart advantages and disadvantages of, 133 construction of, 133–145 control limits, 134–135 moving range chart vs., 145–146 process control, 136–137 sampling frequency and size, 133–134 vertical scale, 136 short-run description of, 277 nominal MA and MR chart, 287–288 nominal X and MR chart, 284–285 nominal Xbar and R, 278–279 short-run MA and MR chart, 289 short-run X and MR chart, 286–287 short-run Xbar and R, 281–282 short-run Xbar and s chart, 282–283 standardized Xbar and s chart, 283–284 target MA and MR chart, 288 target X and MR chart, 285–286 target Xbar and R, 280–281 specification limits, 275–276 Xbar and s chart, 148, 150 Xbar chart advantages and disadvantages of, 133 construction of, 133–145 control limits, 134–135 description of, 65–66 process control, 136–137, 141 sampling frequency and size, 133–134
vertical scale, 136 X chart vs., 145–146 Variable data control charts for, 420 description of, 117, 155, 276 SQC session, 395–397 Variable errors, 330 Variable sampling plan, 197–198 Variation abnormal nature of, 11 assessment of, 11 common control charts for detecting cause of, 106–107 definition of, 100–102 description of, 214–215 trending processes, 308 control charts for detecting causes of common, 106–107 description of, 104 mathematical theorems used, 113–114 measurement system, 317, 319 natural, 99 overview of, 4, 7–11, 99–100 reduction of, 9 repeatability, 323–324, 338 reproducibility, 323–325, 338–339 sources of, 320 special description of, 102–103 in measurement system, 320 signals that indicate cycle of points, 219 description of, 215, 224–225 points beyond control limits, 215–217 run of seven points, 217 runs test for, 221–222 trend of seven points, 217–219 unusual variation, 219–221 trending processes, 308–309 understanding of, 13 unusual, 219–221
W Waste costs of, 6 minimization of, 6 Weighted number of defects chart, 175–177 Work flow analysis, 25–27
X X and MR chart description of, 145–147 nominal, 284–285 short-run, 286–287 short-run statistical process chart, 284–286 target, 285–286 Xbar chart advantages and disadvantages of, 133 construction of, 133–145 control limits, 134–135 description of, 65–66 process control, 136–137, 141 sampling frequency and size, 133–134 vertical scale, 136 X chart vs., 145–146 Xbar and R chart analysis of variance vs., 211 causes that affect, 216, 218–220 description of, 267 illustration of, 237 modified, 198 nominal, 278–279 short-run, 281–282, 294 short-run statistical process control, 278–279 target, 280–281 trending center line, 308–309, 312–313 Xbar and s chart description of, 148, 150 short run, 282–283 standardized, 283–284
Y Yield, 440–453
Z Zero defects, 74 Z scores, 75–78, 440–453
Index for Volume V
A Acceptance/rejection dilemma, 13 Accumulation analysis, 308 Active type, 360 Ad hoc approach, 117 Adjustment, 358, 360, 394–395, see also Dynamic characteristics Alias patterns, 149, 252, 253 Alternative hypothesis, 87, see also Hypothesis, testing American engineers, 283 Analysis clutch plate rust inhibition, 413–419 data and tolerance design, 392 die-casting experiment design, 404–408 fractional factorial designs, 201–212 3^k designs, 247–249 Analysis of covariance, 15 Analysis of means (ANOM) estimation error and confidence intervals, 104–112 hypothesis testing, 87 independent samples, 112–114 other tests, 104 sample size considerations, 93–97 sources of variation analysis, 100–103 statistical hypothesis/null hypothesis, 87–93 technique, 98–100 Analysis of variance (ANOVA) annotated minitab, 558 assumptions, 119 background conditions complete randomization, 122 Latin-square design, 128 randomized-block design, 127 clutch plate rust inhibition, 416–417 common designs for experiments, 120–121 data analysis and steps in research process, 299 degrees of freedom and experimental design, 17 die-casting experiment design, 406–407 Duncan’s test, 132–133 Dunnett’s test, 131–132 factorial designs advantages, 147–148
nature, 146–147 statistical software packages, 255–256 homogeneity, 135–137 means effects, 130–131 Newman–Keuls test, 134–135 one-way, 122–125 other designs, 128–129 problems for experimenter, 69 recommendations, 137–142 Taguchi approach decomposition of total sum of squares, 385–388 role of, 381 terms, notations, and development, 381–385 tolerance design, 388–397 3^k designs, 248 Tukey’s HSD test, 133–134 two-way, 125–127 types, 129 ANOM, see Analysis of means ANOVA, see Analysis of variance Area, 469 Arithmetic, annotated minitab, 554 Arrays, 334, see also Inner array; Orthogonal array; Outer array Assumptions, factorial design, 146 Attribute analysis, 349–352 Average, 43–44 Average distribution, 166
B b Regression weights, 80–82, see also Regression analysis Background conditions, 117, 122, 127, 128 Backward method, 83 Bagging machine, 98 Barlett’s test, 135–136, 137 Benefits, 408–409, 419, see also Clutch plate rust inhibition β Regression weights, 80–82 Bias, 31, 71, 339 Bigger-the-better characteristic, 346 Binary response, 6 Binomial distribution, 448, 449–450
Blocking, 59, 251 Blocking factors, 6–7 Box’s M test, 136, 137 Box-whisker plot, 452–453
C Calculation matrix, 169, 172 Calibration, 359, 360 Cascading effect, 389 Case studies clutch plate rust inhibition, 409–421 die-casting process, 399–409 Catalog, fractional factorial designs, 199–200, 201, 252 Categorical scale, 288 Cause-and-effect diagram, 33, 36 CCD, see Central composite design Cell, 402 Center point, 164, 251–252 Center-point replication, 177 Central composite design (CCD), 250–251, 270, 271 Central Limit Theorem, 53 Central tendency, 444–447 measures, 44–46 Chi-square cumulative distribution function, 468–469 Chi-square distribution, 468–481 Chi-square random variable, 468, 470–474 Chi-square test for goodness-of-fit, 448–450, see also Sampling theory Cluster seed points, 547 Clutch plate rust inhibition additional analysis, 419, 421 analysis of results, 413–419 background and description, 409 experiment design, 411 experiment planning, 409–411 key observations and expected benefits, 419, 420 results of experiments, 411–413 Coded data, 126, see also Analysis of variance Coding, 18–20 Collapsed design, 11 Column/row operations, 554–555, see also Minitab Combined designs, 216, 217–222 Common factor analysis, 537 Communication systems, 358, see also Signal-to-noise ratio (S/N) Competitive economy, 287, see also Taguchi approach
Completely randomized designs (CRD), 10 Complexity, 3^2 and 3^3 designs, 245–246 Components, quality, 390, 391, see also Tolerance design Compounded noise, 343–344, see also Noise factors Comprehensive experimentation, 216–218, 223 Computers, 83–84, 128, 274, 275 Concomitant variable, 259 Conditional statements, 148 Conditions, 294, 295, see also Research process Confidence interval analysis of means, 104–112 analysis of variance definition and Taguchi approach, 381, 384–385 clutch plate rust inhibition, 417, 419 curvature effect in generalized interactive models, 194 die-casting experiment design, 406, 407, 408 location effects and analysis of factorial designs, 176, 177 mean student t-distribution, 486–490 standardized t-random variable, 481–486 Confidence levels, 499–500 Confirmatory factor analysis, 548–549 Confirmatory studies, 89, 90 Confounding, factorial designs blocking in software packages, 251 fractional designs, 148, 149–150 fractional experiments, 195, 196–199, 212, 215 revealing, 228–233 3^2 and 3^3 designs, 246 Conjoint analysis, 540–544 Constraints, 12 Consumer risk, 87 Consumer tolerance, 312, 313 Contour plot, 270, 338, 339 Contrast, 323 Control charts, 88 Control commands, annotated LISREL VIII and structural equation modeling, 548–552 minitab, 552–563 SAS for examining data, 544–547 SPSS, 533–544 Control factors behavior of signal-to-noise ratio, 346, 347 clutch plate rust inhibition, 410 definition, 353 die-casting experiment design, 406 monitoring/recording of defects, 400 parameter design, 341 research process, 293–295, 299–300
separation from noise factors and parameter design, 342, 343 three-level orthogonal arrays, 334 Control variables, 219, 224 Cook’s distance, 255 Correction factor, 382 Correlation coefficient, 530–533, see also Least squares method Cost/benefit analysis, 393 Coupled effects, 339 Covariance analysis, 259 CRD, see Completely randomized designs Criteria of evaluation, 399, 400, see also Die-casting process Criterion measure, 292 Cube plots, 256 Cumulative distribution, 454–467 Cumulative probability distribution, 451 Curvature effect, 187, 190, 193–194 Customer complaints, 409 Customer satisfaction, 390 Cycles, 261, 263, 264, see also Evolutionary operation method
D Daniel plot analysis of fractional factorial experiments, 240, 242, 243 statistical software packages for factorial designs, 255 tolerance design, 394, 395, 396 Data advantages of orthogonal arrays, 308 annotated SAS, 544–545 annotated SPSS, 533, 534–535 annotated minitab, 553–554 analysis of existing calculation of squared multiple correlation coefficient, 83 computer programs, 83–84 multiple regression, 79–82 simple regression, 71–76 test for significance, 76–79 variance/covariance, 69–70 attribute analysis, 276–277 collection sample size determination and analysis of means, 95–97 tolerance design, 392 curve fitting and method of least squares, 515 dispersion determination, 447–448 evaluation of sampled, 511 experimental design and analysis, 18–20, 37
grouping in cells and measure of central tendency, 445–446 research process, 298–299 Decision error probabilities, 502–504 Decision limits, 98 Defects, 399, 404, 409 Degrees of freedom analysis of variance common experimental designs, 120 definition and Taguchi approach, 382 one-way, 123, 124, 125 two-way, 126, 127 confidence intervals of means, 110, 111, 114 experimental design, 17–18 Latin-square and fractional factorial designs, 152 multilevel arrangements of two-level orthogonal arrays, 332 special case of chi-square cumulative distribution function, 469 test for significance and analysis of existing data, 76 3^k factorial experiments, 245 three-level orthogonal arrays, 328 Degrees of freedom of error, 382–383 Density, 451 Dependent samples, 113–114 Design 1–5, 59 6, 59–61, 63 7–9, 64 10, 64–65 11–13, 64 14, 64–65 15–16, 65 characterization, 36, 37 common and analysis of variance, 120–121 criteria, 10 format interpretation, 58–65 generators and fractional factorial experiments, 233 response-surface methodology, 267–268, 270 space and three-factor situation, 160, 162 types considerations of experimental designs, 56–65 variation and experimental design, 10–12 Design of experiments (DOE) characterization, 3–4 special topics analysis of attribute data, 276–277 covariance analysis, 259 evolutionary operation, 259–265 randomized incomplete blocks–restriction on experimentation, 277
response-surface experimentation, 265–267 RSM procedure, 267–274 sequential on-line optimization, 274–276 statistical process control relationship, 285 Design-test-redesign, 309 Dichotomous variables, 19–20 Die-casting process analysis of results, 404–407 experiment design, 399–403 key observations and expected benefits, 408–409 project description, 399 running experiment and collecting results, 404 Differences of proportions, 181–182, see also Factorial designs Direct product design, 342, 344 Discrete random variables, 443–444 Discriminant analysis, 538–539 Dispersion, measures, 41, 44, 45 Distributions/random data, 561 DOE, see Design of experiments Dummy coding, 18, 19 Duncan’s test, 132–133, 134 Dunnett’s test, 131 Dynamic characteristics, 357–364
E Economic loss, 288 Effect coding, 19 Effect estimate, 195, 196 Effect plot fractional factorial experiments, 233, 239–240 statistical software packages for factorial designs, 256–258 three-level orthogonal arrays, 335 Efficiency, 217 Eight-run designs, 199, 200, see also Plackett–Burman designs EMS, see Expected mean square Energy, 359, 364 Environmental variables, 4, 6–7 Erasing, factorial designs, 256 Error, see also Individual entries analysis of existing data, 73, 75 analysis of variance and Taguchi approach, 382, 387 estimation using evolutionary operation, 260 Error state, 353 Estimation error, 104–112 Estimation theory, 491–492 Evolutionary operation (EVOP) sequential on-line optimization, 274
special topics in design of experiments, 259–265 EVOP, see Evolutionary operation Ex post facto experimentation, 54–55 Expectation of sample means, 442 Expected mean square (EMS), 21, 23, 24–26 Expected values, 15–17 Experiment clutch plate rust inhibition, 409–413 conduct of, 160–167 steps in research process, 298 Experimental design anatomy of an experiment, 4–7 choice, 36–37 coding and data analysis, 18–20 degrees of freedom, 17–18 design types, 56–65 EMS rules, 24–26 experimental error, 14–17 fixed, random, and mixed models, 20–24 fundamental concepts, 3–4 interaction, 20 measures of central tendency, 44–46 measures of location, 43–44 principles of conduct, 7–8 shape of distribution, 46–53 statistical fundamentals, 41–43 structure and form, 54–55 validity of experimentation, 55–56 variation, 8–14 Experimental error center points in statistical software packages for factorial designs, 251–252 common experimental designs and analysis of variance, 120–121 estimation and analysis of factorial designs, 174–180 experimental design, 4, 7, 8, 14–17 factorial design, 146 replications in conduct of experiments, 164 single-factor models, 185 Experimental layouts, streamlining, 331 Experimentation strategies, 216–225, see also Fractional factorial experiments Exploratory data analysis, 560–561 Exploratory studies, 89, 90 Exponential distribution plots, 460–462
F F test, 13, 19, 21, 76, see also Analysis of variance Factor average effects clutch plate rust inhibition, 414, 417
die-casting experiment design, 404, 405, 406 experimental design, 5, 31, 33–35 factorial design, 145, see also Variables fractional, 149 grouping and steps in research process, 293–294 parameter design, 347 Factorial analysis of variance, 19 Factorial chi square, 277 Factorial designs analysis of variance advantages of, 147–148 4 X 4, 120 nature of, 146–147 assumptions, 146 experiment model, 145–146 fractional, 148–153 key items in software packages, 251–258 two-/three-dimensional and experimental design, 10–11 Failure mode analysis (FMA), 310 Failure mode and effect analysis (FMEA), 310, 355 Fixed models, 20–24 Floor management, 298 Flow chart, 37, 38 FMA, see Failure mode analysis FMEA, see Failure mode and effect analysis Folded-over design, 216 Foldover process, 199 Forms design/analysis 2^2 factorial, 565 2^3 factorial, 566 2^4 factorial, 567–568 2^5 factorial, 569–570 considerations for experimental designs, 54–55 normal probability paper, 579 Plackett–Burman design/analysis 8-run, 571 12-run, 572–573 16-run, 574–575 20-run, 576–577 Yates’ algorithm for 2^2/2^3 factorials, 578 Forward method, 83 Fractional factorial designs, 252, 253 Fractional factorial experiments analysis, 201–212 catalog, 199–200, 201 combining designs, 216 confounding and resolution, 196–199 Daniel plot, 240–243 effects plot, 239–240
eight-run Plackett–Burman designs, 213–216 example using orthogonal array analysis, 239 missing data, 225–228 normal plot, 240 randomization, replication, and repetition, 200 revealing the confounding, 228–233 selected preferred designs, 233–238 strategies of experimentation, 216–225 two-level screening designs, 212–213 F-ratio, see Analysis of variance Frequency curves, 42, 43, 44 Frequency data, 446–449 Frequency distributions, 42, 43 Full factorial design, 148, 149, 252 Full factorial experiments analysis of 2^k factorials, 167–172 conduct of experiments, 160–167 graphical aids for analysis, 174 importance differences of proportions, 181–182 location effects, 174–180 variance effects, 180–181 one-factor situation, 157–158 running, 172–173 two-, three-factor situations, and generalized 2^k designs, 160 two-level, 158–160 Functional variation, 8 Functionality, 8
G Geometric mean, 132 Goal post approach, 9, 288 Goodness-of-fit, 448–450, 474–481 Grand mean effect, 14 Graphical aids, 174 Graphical analysis, 254–255 Graphical assessment, 180 Greco–Latin square design, 128, 152–153 Guidelines, 227–228, see also Fractional factorial experiments
H Half-normal plot, see Daniel plot Harmonic mean, 132 Hartley’s Fmax test, 136, 137 Higher-order interactions, 178, see also Interactions Higher-resolution designs, 233 Histogram/dot diagram, 192 Histograms, 42, 43
Homogeneity, analysis of variance, 135–137 Honestly significant difference (HSD), 133–134 Hot mill reheat furnace, 365–373 HSD, see Honestly significant difference Hypersquares, 153 Hypothesis Duncan’s test and analysis of variance, 133 stating experimental and steps in research process, 293 testing, 499–514 analysis of means, 87–93 curvature effect, 194
I Ideal function dynamic characteristics, 357–364 parameter design, 364–374 robustness, 352–357 II, see Improvement index Improvement index (II), 394 Improvement ratio (IR), 393–394 Incomplete design, 11 Incomplete factorials, see Fractional factorial experiments Independent samples, 112–114 Industrial experimentation, 301–303 Information, annotated minitab, 553 Information-gathering methods, 291–292 Inner array, 297–298, 342 Interaction avoiding and reproducibility of experiments, 359 clutch plate rust inhibition, 414, 417 control/noise factors and parameter design, 341, 343 die-casting experiment design, 399 experimental design, 20 factorial design, 145 two-factor and analysis of fractional, 212 Taguchi approach, 286–287 3^3 designs, 246 three-level orthogonal arrays, 329, 330–331 variables and advantages of factorial analysis of variance, 148 Interaction effects, 168–172 Interaction plots, 256–258 Interactive models generalized, 189–194 two-factor model, 188, 189 Invalidity, sources for designs, 60, 61, 62 IR, see Improvement ratio
J Japanese engineers, 283
K Kano model, 364
L L4 design, 399, 402, 403, 423 L8 design, 296–297, 411, 424 L9 design, 328, 337–338, 339, 435 L12 design, 297, 399, 424 L16 design, 425, 426 L18 design, 339, 435 L27 design, 339, 436, 437, 438 L32 design, 427–430 L36 design, 439–440 L81 design, 431–434 Larger-the-better, 358, see also Dynamic characteristics Latin-square design, 120, 128, 150–152 Least square methods, 71, 73, 515–532 Lettuce, 462 Level of significance, 88, see also Significance Levels factorial design, 145 experimental design, 31, 33–35 tolerance design, 392 variables and response-surface methodology, 268 Levene’s test, 136, 137, 299 Linear combination of effects, 196, 198, 201 Linear equations, 323 Linear graphs, 329–332 Linearity, 360 LISREL VIII, annotated, 548–552 Location effects, 174–180 measures, 41, 43–44 Logic of hypothesis testing, 12–14, see also Hypothesis, testing Logic transformations, 254 Logistic regression analysis, 539 Loss function clutch plate rust inhibition, 421 concept and experimental design, 9–10 orthogonal arrays calculation and advantages, 312–314 quality characteristics and advantages, 316–319 signal-to-noise ratio relationship and parameter design, 342
Taguchi approach, 287, 381 tolerance design, 388, 391 Low-/high-rank characteristics, 388, see also Tolerance design
M
Main effects
analysis of fractional factorial designs, 212
die-casting experiment design, 404, 405, 406
eight-run screening designs, 215
estimation and analysis of 2ᵏ factorial designs, 167–168
folded-over design, 216
generalized interactive models, 189, 193
Main-effects means, 130
Manufacturing, 288, see also Taguchi approach
Margin of error, 494–495
Market share, 287
Matrices, 561–562
Mean analysis, 352
Mean analysis, parameter design, 349
Mean estimate, 494–495
Mean square deviation (MSD)
calculation
advantages of orthogonal arrays, 315, 316
variation, 361–362
Taguchi approach, 287, 381
Mean square
analysis of variance
one-way, 123, 124, 125
Taguchi approach, 384
two-way, 126, 127
Latin-square and fractional factorial designs, 152
Mean square error (MSE), 361–362
Mean, Student t-distribution, 486–490
Means effects, 130–131
Means tests, 104
Measured data, 30
Measurement capability, 30
Measurement error, 324–325, 358–359
Measuring system, calibration, 360
Median, 44, 45
Method of least squares, see Least square methods
Minimum significant factor effect (MSFE), 177, 181, 182
Minitab, annotated, 552–563
Mirror image design, see Reflected design
Missing data, 225–228
Mixed models, 20–24
Model building
generalized interactive models, 189–194
single-factor model, 185–188
two-factor models, 188–189 Model checking, 190–191 MSD, see Mean square deviation MSE, see Mean square error MSFE, see Minimum significant factor effect Multilevel arrangements, 332–333 Multiple correlation coefficient, 83 Multiple regression analysis annotated SAS, 545–546 annotated SPSS, 537–539 characterization, 69 existing data, 79–82 Multiple t test, 132 Multivariate analysis, 546–547, 558
N
Nested design, 11, 12
Nested treatment variable, 11, 12
Newman–Keuls test, 134–135
NID, see Normally and independently distributed
Noise factors
advantages of orthogonal arrays, 309–310
clutch plate rust inhibition, 410–411
definition, 353
die-casting experiment design, 399, 402, 403, 404, 406
defects, 400
management and robustness, 356
parameter design, 341, 342–345
types and steps in research process, 294
Noise matrix, 219–220, 222–224
Nominal-the-best, 346, 358, see also Dynamic characteristics
Nondynamic characteristics, 358, see also Dynamic characteristics
Nonlinear curves, 159
Nonparametrics, 559
Normal curve, 48, 49
Normal distributions
considerations of experimental designs, 46, 47, 48–50, 51, 52
probability plot, 455
relationship to ordinary graph and normal probability paper, 456
Normal plot, 240, 241, 243, 255
Normal probability paper, 579
Normal probability plots, 192
Normally and independently distributed (NID), 21
Notation, annotated minitab, 552
Nuisance factor contamination, 6, 7
Null hypothesis, see also Hypothesis, testing
analysis of means, 87–93
detection of false and experimental error, 15
variation and experimental design, 12–13 Number of repetitions, 383 Number of replications, 164 Number of trials, 383 Numbers, annotated minitab, 553
O
OA, see Orthogonal array
OEC, see Overall evaluation criterion
Off-line characteristics, 289
Off-line experimentation, 261
Omega transformation, 349, 351
One-factor situation, 157–158
One-factor-at-a-time method, 326, 327
One-sided tests, see One-tailed tests
One-tailed tests, 93, 111, 511
One-variable-at-a-time, 307
One-way ANOVA, 122–125, 138, see also Analysis of variance
Online characteristics, 289
Operational definitions, 30–31
Optimum design, 97
Optimum equation, 336
Ordered categorical ratings, 6
Orthogonal array (OA)
advantages, 307–319
analysis and graphical techniques, 239
design of experiment, 234–238
die-casting experiment design, 399, 400
L8 design, 333, 424, 431–434
linear graphs
2³ layout, 321–322
L4 design, 423
L9 design, 435
L12 design, 424
L16 design, 425, 426
L18 design, 435
L27 design, 436, 437, 438
L32 design, 427–430
L36 design, 439–440
three-level, 328–339
orthogonality definition, 322–328
research process, 295, 296–297
Taguchi approach, 285, 381
tolerance design, 392
Orthogonal coding, 18, 19
Orthogonal contrasts, 130
Orthogonal fractional factorial design, 540
Orthogonality, 322–328
Outcome, 4, 292–293
Outer array, 297–298, 342
Out-of-control condition, 88
Output characteristic, 359–360
Overall evaluation criterion (OEC), 411, 412, 413, 417–418, 419
P
Pairwise contrasts, 134, 135, see also Newman–Keuls test
Paper helicopter, 374, 375–378
Parameter design (PD)
behavior of signal-to-noise ratio, 345–352
best conditions and tolerance design, 391
countermeasures to reduce noise factors effects, 310
dynamic example, 374
ideal function, 364–365
dynamic characteristics, 357–364
robustness, 352–357
overview, 341–345
philosophical issues of Taguchi approach, 290
static example, 365–373
Parameter design (PD) phase, 388–389
Parameter determination, 190
Parameters, research process, 301
Pareto diagram concept, 34, 35
Partial factorial experiment, 381, see also Factorial designs
Partial replication, 164
Passive type, 360
Path of steepest ascent (PSA) method, 266, 267–270
PD, see Parameter design
P-diagram, 353–355, 357
Pearson product moment coefficient, 75
Percent contribution, 383
Percent defective, 351
Percentiles, 451–452
Performance
consistency/evaluation and Taguchi approach, 286
clutch plate rust inhibition, 418, 419
die-casting experiment design, 408
signal-to-noise ratio effect and parameter design, 341
Phase, evolutionary operation method, 261, 263, 264
Philosophical issues, Taguchi approach, 287–290
Pie diagram, 406, 407, 418
Pitfalls, response-surface methodology, 271–273
Plackett–Burman designs
8-run design/analysis, 213–216
forms, 571
12-run design/analysis, forms, 572–573
16-run design/analysis, forms, 574–575
20-run design/analysis, forms, 576–577
statistical software packages for factorial designs, 253 Planning/managing experiment process act, 32–33 do, 31 getting started, 33–39 plan, 29–31 study, 32 Plots/histograms, 555–556 Point calibration, 360 Points of data, 95 Pooling, 385 Population, 42, 441–443 Power curve, 514 Precision advantages of factorial analysis of variance, 147–148 confidence intervals concept in analysis of means, 105, 106 estimate and analysis of fractional factorial designs, 212 experiment and steps in research process, 301 repetition in conduct of experiments, 165 Prediction, 336 interval, 300–301 Pre-experimental designs, 56 Principal components analysis, 535, 536 Principles of conduct, 7–8 Probability displays, 451, see also Probability plots Probability plots characteristics, 453 normal distributions, 455 normal, 457–459 probability displays and plots, 451 statistical software packages for factorial designs, 255 Problems, 33, 390 Process control, 288 Process function, 309 Process outputs loop, 260 Product advantages of orthogonal arrays, 308–315 robustness and Taguchi approach, 289 Productivity, 260, see also Evolutionary operation Propellant grains, 118–119 Proportion of the area under the curve, 49, 50, 51 PSA, see Path of steepest ascent method Pure sum of squares, 384, see also Sum of squares
Q QFD, see Quality function deployment QLF, see Quality loss function
QT4, see Qualitek-4 Qualitative response, 5–6 Qualitek-4 (QT4), 401, 404, 408, 412 Quality advantages of orthogonal arrays characteristics and loss function, 316–319 countermeasures to reduce noise factors effects, 309–310 product design, 308–315 issues and Taguchi approach, 286, 287–288 three-level orthogonal arrays, 334 Quality engineering, 287–290, see also Taguchi approach Quality function deployment (QFD), 355, 364 Quality loss, 397 Quality loss function (QLF), 310–312, 315, see also Loss function Quantitative response, 5 Quasi-experimental designs, 57–58, 60–63
R
r², 76
Random models, 20–24, 145
Random samples, 42
Random variable, 449
Randomization
complete for background conditions and analysis of variance, 122
conduct of experiments, 162–163
experimental design, 7–8
fractional factorial experiments, 200
3² and 3³ designs, 246
Randomized-block design, 127
Randomized incomplete blocks, 277
Reflected design, 216, 217
Regression, annotated minitab, 557–558
Regression analysis, 69
Regression coefficient, 77–79
Regression line, 522–530
Regression weights, 80–82
Reinforcement, 147
Repetition
conduct of experiments, 165–167
experimental design, 8
fractional factorial experiments, 200
planning/managing the process of experiment, 30
strategies dealing with noise and parameter design, 344–345
sources of measurement error and dynamic characteristics, 358, 359, 361, 362
Replication
analysis of fractional factorial designs, 212
importance of location effects, 175–177
analysis of variance, 118, 120
conduct of experiments, 163–165
experimental design, 7, 8
fractional factorial experiments, 200
Reproducibility, 358
Research process, steps, 290–301
Research question, 291–292
Residual errors, 191–193, 249, see also Error
Residual plotting, 256
Residuals, 79
Resolution, 195, 196–199
Resolution III designs, 199, 213
Response
choice and planning/managing the process of experiment, 29–30
data and conduct of experiments, 164
experimental design, 4, 5–6, 8
factorial design, 145
ideal function and parameter design, 364
identification and tolerance design, 392
maximum/minimum in 3ᵏ designs, 247, 248
two-level factorial designs, 160
Response surface, 160
experimentation, 265–267
methodology, 267–274
Response variable, 35
Rising-ridge system, 247
Risk, 87, 108
Robust designs, 358
Robust system, 356–357
Robustness
definition, 353
ideal function, 352–357
parameter designs, 283
process and tolerance design, 390
reasons for using orthogonal arrays, 327
ROT, see Rule of thumb
Rule of thumb (ROT), 493–494
Run
analysis of factorial designs, 172–173
die-casting experiment design, 404
experimental design, 4, 7, 8
planning/managing the process of experiment, 31
3ᵏ designs, 246
2ᵏ factorial designs, 195, 196
Rust spots, 409, 410
S S/N ratio, see Signal-to-noise ratio Saddle point, 247, 248 Sample size
analysis of means, 93–97
level of significance in statistical/null hypotheses, 88, 92
calculation for each run and steps in research process, 297
confidence intervals of means with unknown variance, 110
Duncan’s test and analysis of variance, 133
establishing, 509–510
estimation of population mean, 491–492
experimental design, 8
logic of hypothesis testing and experimental design, 13–14
Samples, 42
Sampling theory
chi-squared test for goodness-of-fit, 448–450
dispersion of data, 447–448
inferring statistics of entire population from a smaller sample, 441
mean population, 441–442
measures of central tendency, 444–447
variance and standard deviation, 442–444
SAS, annotated, 544–547
Saturated array, 325, see also Inner array; Orthogonal array; Outer array
Screening, 212–213
Second-order ridges, 272
Second-order surfaces, 272
Sensitivity, 360
analysis and parameter design, 347–348, 349
Sequential experimentation, 216–218, 223
Sequential on-line optimization, 274–276
Severity index, 414, 415
Shape of distribution, 46–53
Shelf life, 462
Signal, parameter design, 364
Signal factor, 353
Signal-to-noise (S/N) ratio
advantages of orthogonal arrays, 308
calculation, 360–362
clutch plate rust inhibition, 419, 420
communication systems, 358
die-casting experiment design/analysis, 409
maximizing and variability reduction, 345
mean analysis comparison, 352
parameter design, 341–342
behavior, 345–352
relationship to requirements of dynamic characteristics, 360
strategies of experimentation, 224–225
Taguchi approach, 287
Significance
Newman–Keuls test and analysis of variance, 134, 135
reasons for using orthogonal arrays, 327
testing and analysis of existing data, 76–79 two-way ANOVA, 127 Simple regression characterization, 69 determination and analysis of existing data, 71–76 Simplex, 274 Simplified method, 226 Single-factor models fixed, random, and mixed models and experimental design, 22, 23 building, 185–188 Skew distributions, 47, 48 Slope calibration, 360, 361 Slurry, 25–26 Small sampling theory, 481–482, see also Sampling theory Smaller-the-better clutch plate rust inhibition, 413, 414 die-casting experiment design, 404 dynamic characteristics, 358 signal-to-noise ratio behavior and parameter design, 345–346 Smoke detectors, decision maker for fire, 504–514 Solomon design, 59 Sony, 286 Sorting, annotated minitab, 561 Sources of variation analysis (SVA), 100–103 SPC, see Statistical process control Split-sample, 536, 538 SPSS, control commands for annotated, 533–544 Stability, 358 Stack-up approach, 390 Standard deviation calculation and analysis of factorial designs, 179 considerations of experimental designs, 46, 49 sampling theory, 442–444 strategies and considerations, 497 Standard error of estimate, 77–78 Standard error of the mean, 53, 105–106, 134–135 Standard errors, 120, 176, 255 Standard form, 151 Standardized normal curve, 50, 52 Standardized normal distribution, 50–51 Standardized t-random variable, 482–486 Static group comparison design, 59 Static systems, 345 Statistical analysis, 17, see also Degrees of freedom Statistical design of experiments, 3 Statistical fundamentals, 41–43 Statistical hypothesis, 87–93 Statistical process control (SPC), 285
Statistical significance, 393, see also Significance; Tolerance design
Statistical test, 13, 500–502
Statistically designed experiments, 290
Statistics, basic and annotated minitab, 556–557
Step sizes, 269
Stepwise method, 83–84
Sticky parts, 409, 410
Stimuli design, 540–544
Stored commands/loops, annotated minitab, 563
Strongest noise, 344, see also Noise factors
Structural equation modeling, 548–552
Structure, experimental designs, 54–55
Student T-distribution, 481–498
Student’s t test, 109
Sturges’ rule, 445
Sum of cross products, 70
Sum of squares
analysis of existing data, 70, 73–75, 76
analysis of variance
definition and Taguchi approach, 383–384
one-way, 123, 124, 125
two-way, 126, 127
decomposition of total, 385–388
fractional factorial designs, 152
statistical software packages for factorial designs, 255
three-level orthogonal arrays, 328
3ᵏ designs, 247, 249
Summer, defects, 410
SVA, see Sources of variation analysis
Synonyms, 4–5
System design, 290, 310, 355
System design phase, 388
T
t Test, 59
Tables, annotated minitab, 559–560
Taguchi approach, see also Analysis of variance (ANOVA), Taguchi’s approach
industrial experimentation
comparison of typical stages, 301–303
robustness, 355
parameter design characterization, 286–290
comparison between typical stages, 301–303
research process, 290–301
Taguchi L8, 213, 214, see also L8 designs
Taguchi L12, 236, see also L12 designs
Taylor series, 311
Team, 392, see also Tolerance design
Team approach, 33, 34
Technology development, 362, see also Dynamic characteristics
Temperature, 160
Test combinations, 247
Test statistic, 91, 92
Tests of means, 130
TGR, see Things gone right
TGW, see Things gone wrong
Things gone right (TGR), 32
Things gone wrong (TGW), 32
3² design, 270, 271
3ᵏ designs, 246–247
Three-factor interaction, 174
Three-factor situation, 160, 162
Three-level designs
central composite design, 250–251
key items in factorial designs, 251–258
response-surface methodology, 269–270
3ᵏ designs, 246–249
3ᵏ factorial experiments, 245–246
Three-level experiment, 392
Three-level orthogonal arrays
discussion, 333–339
L9(3⁴) orthogonal array, 328
linear graphs, 329–332
multilevel arrangements in two-level series orthogonal arrays, 332–333
Three-way ANOVA, 140, see also Analysis of variance
Time order tests, 192
Time series, annotated minitab, 560
Time series experiment, 63
Tolerance
measuring system and quality engineering, 359
multilevel and advantages of orthogonal arrays, 312–315
system design, 283–284
Tolerance adjustment, see Adjustment
Tolerance design
characterization and process, 388–397
countermeasures to reduce noise factors effects on functional quality, 310
philosophical issues of Taguchi approach, 290
Tolerance design phase, 389
Tolerance factors, 392
Tolerancing, 389–390
Top-down architecture, 390
Total degrees of freedom, 383, see also Degrees of freedom
Total of results, 383
Toyota, 286
Transformations, 254, 314
Treatment combinations, 246
Treatment dimensions, 150–151, see also Fractional factorial designs
Treatment effect, 21
Trial conditions, 401, 402, 404
True experimental designs, 56–57
t-Test, 13
Tukey’s HSD test, 133–134
t-value, 255
2² Factorial design/analysis, 565
2³ Factorial design/analysis, 566
2³ Layout, 321–322
2⁴ Factorial design/analysis, 567–568
2⁵ Factorial design/analysis, 569–570
2ᵏ Factorial design
confounding, 229
analysis, 167–172
curvature checking and generalized interactive models, 193–194
generalized, 160, 163
judging importance of location effects, 179
2ᵏ⁻ᵖ Equivalent design, 213, 214
Two-factor model, 23, 24, 145–146, 188–189
Two-level factorial design, 158–160, 161, 261, see also Factorial design
Two-level factors, 399, 400, 415
Two-sided tests, see Two-tailed tests
Two-tailed tests, 93, 111, 513
Two-way ANOVA, 125–127, 139, 141–142, see also Analysis of variance
Type I error, see also Error
decision probabilities, 502
Duncan’s test and analysis of variance, 132–133
logic of hypothesis testing and experimental design, 13
Newman–Keuls test and analysis of variance, 135
statistical/null hypotheses and analysis of means, 87, 88–89
Type I model, 119
Type I problems, 129
Type II error, 13–14, 87, 89, 502, see also Error
Type II model, 119
Type II problems, 129
U Uncertainty, 226 Uniform distribution plots, 459–460 Univariate analysis, 69 Unreplicated experiments, 177–179
V
Validity, 55–56, 297
Variability
calculation, 361–362
quality engineering, 359
requirement of dynamic characteristics, 360
shrinking and parameter design, 345, 346
Variables
central composite designs, 250
choice and response-surface methodology, 267
control and advantages of factorial analysis of variance, 147
dependent identification and steps in research process, 291
experimental designs, 4, 41–42
Variance
clutch plate rust inhibition, 416
confidence intervals in analysis of means
known, 105–108
unknown, 108–112
covariance and analysis of existing data, 69–70
experimental designs, 46, 47
goodness-of-fit test, 479–481
multivariate analysis and annotated SAS, 546–547
research process, 292, 293
sampling theory, 442–444
Variance effects, 180–181
Variance ratio (F), 382
Variation, experimental design, 8–14, 41–42
Variation reduction plot, 408–409, 419, 420
VARIMAX rotation, 535, 536
Verification, 299–300 experiments, 300–301
W w Procedure, see Tukey’s HSD test Weibull distribution probability plot, 462–463, 465, see also Probability plot Weibull linearizing cumulative distribution, 463–464, see also Cumulative distribution Weibull parameter estimation, 464–465 Weibull probability paper, 467 Weibull probability plot, 466, see also Probability plot Weighing problem, 324–325 Weighted mean, 131–132 Written procedure, 298
X Xbar, 52–53
Y Yates’ algorithm, 247, 249, 278 Youden squares, 277
Z Zero-point, 360
Index for Volume VI*

* Note: Italicized numbers refer to illustrations and tables

A
Abstracting and indexing services, 145
Accelerated degradation testing (ADT), 336
Accelerated depreciation, 687
Accelerated life testing (ALT), 336, 362
Accelerated stress test (AST), 310–311
Accelerated testing, 305
ADT (accelerated degradation testing), 336
ALT (accelerated life testing), 336, 362
AST (accelerated stress test), 310–311
constant-stress testing, 305–306
definition of, 362
HALT (highly accelerated life test), 310
HASS (highly accelerated stress screens), 310
methods, 305–306
models, 306–309
PASS (production accelerated stress screen), 311–312
progressive-stress testing, 306
step-stress testing, 306
Acceleration factor (A), 308–309
Acclaro (software), 545–547
Accountants, 663
clean opinions of, 671
reports of, 671–672
Accounting
accrual basis of, 676–677
books of account in, 675–676
in business assessments, 138
cash basis of, 677
and depreciation, 684
earliest evidence of, 672
entries in, 675–676
financial reports in, 664
and financial statement analysis, 688
as measure of quality cost, 492
recording business transactions in, 672–675
roles in business, 664
valuation methods in, 679–681
Accounts
books of, 675–676
contra, 684
types of, 674
Accounts receivable, 681, 691
Accrual accounting, 676–677 Accrued pension liabilities, 667 Accumulated depreciation, 665–666, 684 Achieved availability, 292 Acquisitions, in product design, 196 Action plans, 161–162 based on facts and data, 107 creative planning process in, 162 documenting, 162 in FMEA (failure modes and effects analysis), 253–258 monitoring and controlling, 162–163 prioritizing, 162 Action standards, see standards Activation energy type constant (Ea), 308 Active repair time, 293 Activities in benchmarking after visits to partners, 156 defining, 150 drivers of, 151 flowcharting, 153–154 modeling, 152–153 output, 151 performance measure, 151–152 resource requirements, 151 triggering events, 150 during visits to partners, 155 Activity analysis, 150–152 Activity benchmarking, 123 Activity drivers, 151 Activity performance measure, 151–152 Actual costs, 478–480, 568, 703 Actual operating hours, 525 Actual size, 522 Actual usage, amount of, 525 Administrative process cost of, 570 improving, 490–492 as measure of quality cost, 493 ADT (accelerated degradation testing), 336 Advanced product quality planning, see APQP Advanced quality planning, see AQP Advanced Systems and Designs Inc., 405 Aerospace industry, 226 Aesthetics, 113
Aggressors, in team systems, 25 Aircraft, 196 Airline industry, 702 Allowance, 568 Almanacs, 145 Alpha tests, 266 ALT (accelerated life testing), 336, 362 Alternative costs, 703 Alternative lists, in trade-off studies, 472 Alternative rank, 476 Altman, Edward I., 699 Altshuller, Genrich, 549 Aluminum, 531 Amateur errors, 211 American National Standards Institute (ANSI), 61 Amortization, 669 Analysis of variance, see ANOVA Angular dimensions, 520 Angular measurements, 527 Annual reports, 146, 671–672, 678 ANOVA (analysis of variance), 396, 407–410 for cumulative frequency, 423 for NTB signal-to-noise ratios, 431–432 for raw data, 430, 434, 436 signal-to-noise (S/N) ratio as raw data, 434, 437 for transformed data, 438 typical table setup, 432 ANOVA-TM computer program, 405 ANSI (American National Standards Institute), 61 ANSYS program, 183–185 Antifreeze, 289 Apollo program, 226 Apple Computer, 195 Appraisal costs, 101, 482, 489 APQP (advanced product quality planning), 266 vs. AQP (advanced quality planning), 43 in DFSS (design for six sigma), 45–47 and product reliability, 298 AQP (advanced quality planning), 40–42 vs. APQP (advanced product quality planning), 43 basic requirements for, 42 demonstrating, 42 pitfalls in, 43–44 qualitative methodology in, 44–45 reasons for using, 42 workable plans for, 43 Archiving, 40 Area sensors, 218 ARIZ algorithm, 549 Arrhenius model, 308–309 Assembly lines, 207 simulation of, 170–175 two-station, 173–174
Assembly mistakes, 213
Assembly omissions, 214
Assembly process, 206
Assessment items, 476
Assets, 679
in balance sheet equation, 664, 674
buying, 666
contra, 684
current, 665
current value of, 680
vs. expenses, 680–681
financial, 681
in financial statements, 679
fixed, 665–666
historical cost of, 679
inflation effect on, 679
intrinsic value of, 680
as investments, 680
liquidation value of, 679
noncurrent, 667
physical, 681–682
psychic value of, 680
replacement cost, 680
return on assets (ROA), 694
return on assets managed (ROAM), 695
return on gross assets (ROGA), 695
return on net assets (RONA), 695
selling, 669
slow, 670
types of, 681–682
valuation methods, 679–681
values based on historical costs, 678–679
Assets/equity ratio, 692
Asset turnover, 704–705
AST (accelerated stress test), 310–311
AT&T, benchmarking in, 122–123
Attributes, 116
Attributes tests, 313–314, 423
Auditing, 597
Automation, 682
Automobile industry, 54–55
AQP (advanced quality planning) in, 41
commonly used elements in, 176
product reliability in, 296–297
six sigma philosophy in, 2
Automobile parts industry, 54
Availability, 292, 356
Axiomatic design
applying to cars, 543–544
axioms in, 542
benefits of, 545–547
and change management, 545
changing existing designs with, 544–545
creating new designs with, 544
diagnosing existing designs with, 544
and project workflow, 545
vs. robust design, 543
Axiomatic designs, 541, 715
Axioms, 542
B
Balance sheets, 664–665
accrual accounting in, 678
in annual reports, 671
changes in working capital items in, 670
current assets and liabilities, 665
current liabilities, 665–666
earnings per share, 669
equation, 664, 673–675
fixed assets, 665–666
footnotes in, 670
gross profit, 668
income statements, 667–668
noncurrent assets, 667
noncurrent liabilities, 667
ratio analysis of, 689–690
shareholder's equity, 667
slow assets, 666
sources of funds, 669
statement of changes in, 669
in summary of normal debit/credit balances, 674
use of funds, 670
working capital format in, 666
Bank filings, 147
Bankruptcy, 679
Barriers to market, 129
Barter, 53
Basic functions, 557, 574–575
Basic manufacturing process, 206
Basic needs, 229
Basic quality, 68–69, 70
Bathtub curve, 293
Beams, 176
Beam sensors, 218
Behavioral theory, 663–664
Beliefs, and change management, 127
Benchmarking, 97
alternatives, financial analysis of, 163–164
alternatives, identifying, 129–132
alternatives, prioritizing, 139–142
areas of application of, 97–99
and business strategy development, 99–102
and change management, 126–129
classical approach to, 102–103
common mistakes in, 166
continuum process, 98
and Deming management method, 110–111
and Deming wheel, 111–112
in design FMEA, 267–268
in DFSS (design for six sigma), 717
financial, 157
in FMEA (failure modes and effects analysis), 230
gaining cooperation of partners in, 148
generic, 122
history of, 97
identifying candidates for, 129–134
identifying cause of problems with, 134
in least cost strategy, 100–101
making contacts for, 149
as a management tool, 119–120
managing for performance, 164–166
and national quality award winners, 107–110
operations process, 123
and organizational change, 126–129
organizations for, 123–124
performance and process analysis, 149–158
preparing proposals for, 149
activities before visiting partners, 149
understanding own operations, 149
activity analysis, 150–152
activity modeling, 152–153
flowcharting process, 152–153
activities during visit, 155
understanding partners' activities, 155
identifying success factors, 155–156
activities after visit, 156
activities of partners, 155–156
in process FMEA, 275–276
project evaluations, 165
resistance to, 127
scopes of, 120–121
and Shewhart cycle, 111–112
and six sigma, 105–107
sources, 142–149
and SQM (strategic quality management), 102–105
success factors in, 124–126, 164–166
technical competitive, 78
ten-step process in, 121–122
types of, 122–123
Benefit-cost analysis, 610
Beta tests, 266
Bibliographies, 145
Bilateral tolerance, 523
Binomial distribution
in fixed-sample tests, 315–316
in sequential tests, 317–318
Biographical sources, 145
Black Belts, in dealing with projects, 661
Bladed wheel hopper feeders, 207
Blast-create-define method, 582–584 Block diagrams, 234, 323–325 Blockers, in team systems, 25 Boeing Co., 169, 196 Bolted joints, 181 Boltzmann’s constant (Kb), 308 Bond rating companies, 695–696 Bonds, 695–696 Bonds payable, 667 Bookkeeping, 672 Bookshelf data, 349 Books of account, 675–676 Book value, 666, 687 Boothroyd, Geoffrey, 202–203 Boundaries, in teams, 29 Boundary diagrams, 258–260 Box, George, 404, 429 Brainstorming, 230, 267–268 and concept of functives, 57 in creative phase of job plans, 582 in design FMEA, 267–268 in determining causes of failures, 247 in developing alternatives to functions, 558–559 in planning DOE (design of experiments), 372–373 in process FMEA, 275–276 in value control approach, 582 Branch transmissions, 535 Brand names, 89–94 Breakdowns, 278 Breakeven analysis, 705–706 Breakthrough strategies, 160 Buckling, 176, 179 Budgets, 711–712 calculating, 604 departmental, 711–712 managing, 712 and satisfaction of management, 662 zero-based, 712 zero-growth, 712 Burden, 568–569 Business assessment forms, 135–139 Business assessments, 133 Business assets, see assets BusinessLine, 146 Business meetings, 19 Business reviews, 145 Business strategy, and benchmarking, 99–100 Business transactions, recording, 672, 675 BusinessWire, 146 Buyer/supplier relationship, see customer/supplier relationship Buying groups, 148
C Cadillac, 107–108 Calendar elapsed time, 525 Calibration, 526 Calipers, 527 Capacitive tests, 530 Capital investments, 661 Capital surplus, 670 Carlzon, Jan, 125–126 Case studies, 148 Cash, 681 in annual reports, 671 in business transactions, 675 vs. profits, 678 ratio analysis of, 691–692 recording, 675 sources of, 669, 673 uses of, 673 Cash basis, of accounting, 677 Cash flow, 700 calculating, 701 and change management, 127 and current assets and liabilities, 702 definition of, 700 depreciation as part of, 685 forecasting, 691 as measure in TOC (theory of constraints), 462 in NPV (net present value) analysis, 610 present value of, 162–163, 165 and tax shelter schemes, 702 and working capital, 702 Cashing out, 667 Cash receipts journals, 675 Casting, 204 Catalogs, 582 Categories, 477 Category lists, in trade-off studies, 472 Catholic clerics, 672, 673 Causality, 33 Cause and effect relationships, 33, 134, 376 CDI (customer desirability index), 77 Cendata, 146 Census, 146 Centerboard hopper feeders, 207 Center for Advanced Purchasing Studies, 157 Centralized benchmarking, 124 Centrifugal hopper feeders, 207 Chain rule, 656 Chambers of commerce, 147 Change, psychology of, 126–127 Channel value, 54 Characteristic matrix, 63–64 Characteristics, 254–255 Charting, 133
Check sheets, 484
Chemical measurements, 527
Chi-square test, 334, 626
Classification, 254–255
Classified attribute analysis, 422–426
Classified data, analysis of, 421–430
Classified responses, 422
Classified variable analysis, 426–428
Clean opinions (accounting), 671
Clerical process, as measure of quality cost, 493
Closed systems, 35
CNC lathe, 61–63
Coefficient of expansion, 531
COGS (cost of goods sold), 711
in financial benchmarking, 157
in product design cycle, 194
reducing, 545
Color marking sensors, 218
Column interaction tables, 384, 386
Combination design, 415–418
Combinex method, 589–591
Commerce Business Daily, 147
Commercial cost, 570
Commercial credit ratings, 698
Commodity management organizations, 15
Common stocks, 696–697
Company life cycle, 133
Comparators, 527
Compensation costs, 36
Competition
and DFSS (design for six sigma), 717
and earnings, 693
and product demand, 662
Competitive assessments, 82, 83–84
Competitive best performers, 143
Competitive bidding, 10
Competitive evaluations, 118–119, 131
Competitive quality ratings, 487
Competitive Strategy (book), 99
Competitors, 118
Complaints
handling, 61–62
indices for, 484
processing and resolution of, 484
Complex reliability block diagrams, 323–325
Components, 205
costs of, 571–572
levels of, 442
tolerance levels of, 447–454
Component testing, 266
Component view, 238
Composite credit appraisal, 698
Compound growth rates, 711
Comptrollers, 503
Computer databases, 146
Computer formats, 339
Concept FMEA, 224, 262
Concept phase, 295
Concurrent engineering, 199, 468
Condition, statement of, see balance sheets
Conduction, 179
Conference method, 513–515
Confidence level
around estimation, 409
of demonstration tests, 312
Configuration, probability of, 637–638
Conformance, 29–30, 112
Conjoint analysis, 88
in DFSS (design for six sigma), 715, 718
empirical example of, 90–94
hypothetical example of, 89–90
managerial uses of, 95–95
Constant dollars, 484
Constant rate failure, 619
Constant-stress testing, 305–306
Constraints, 180, 457–458, 463–465
Construction contractors, 147
Consultants, 148
Consumer's risk, 313
Consumer groups, 147
Contamination, 289
Continuous production flow manufacturing, 207–208
Continuous time waveform, 621
Continuous transfer manufacturing, 206
Contra account, 684
Contra asset, 684
Contractors, 358
Contribution margin analysis, 706
Control charts, 484, 621–624
Control factors, 393, 411
in DFSS (design for six sigma), 719
in monitoring team performance, 33
and noise interactions, 337
Controlled radius, 522
Convection, 179
Conventional dimensioning, 518
Conventional tolerancing, 518
Conveyors, 206
Cooperation, see partnering
Coordinate measuring machines, 527
Copper, 531
Copper plating, six sigma in, 6
Corporate culture, 127–128
Corporate general interest buyers, 117
Corporate growth, 663
Correlation matrix, 83
Corrosive materials, 289, 294
Cost analysis, 156, 485
Cost benchmarking, 123
Cost-function worksheets, 566
Cost of goods sold, see COGS
Cost of non-quality, 101
Cost of sales, 703, 711
Costs, 101
actual, 478–480, 568, 703
alternative, 703
analyzing, 156
appraisal, 101, 482, 489
benefits of DFM/DFA (design for manufacturability/assembly) on, 189
comparison reports, 480
of components, 571–572
definition of, 569
design, 569
differential, 703
direct, 36, 703
and earnings, 693
elements of, 571
of engineering changes, 297–298
estimated, 703
external failure, 101, 483, 491
extraordinary, 703
fixed, 569, 703
freight, 570
functional area, 573
and functions, 580
historical, 679, 703
imputed, 703
incremental, 569, 703
indirect, 36, 703
internal failure, 101, 483, 490
joint, 703
manufacturing, 569, 571
monitoring system, 478
noncontrollable, 703
opportunity, 703
out of pocket, 704
per dimension, 572
per functional property, 572–573
period, 704
per period of time, 572
per pound, 572
prepaid, 704
presentation formats for, 485
prevention, 101, 482, 488
prime, 704
of processes, 571–572
product, 704
production, 704
of product unreliability, 294
quantitative, 572–573
reducing, 480
replacement, 680, 704
and revenues, 711 of sales, 711 sources of information, 570 standard, 478, 570, 704 sunk, 704 in theory of firm, 662 vs. throughput, 461 tolerance limit, 480 total, 570 and value, 558 and value control, 556 variable, 704 variance of, 480 visibility of, 564–565, 568 Costs of quality, see quality costs Cost/time analysis, 134 Cost visibility, 568 in cost-function worksheets, 564, 566 techniques, 571–573 Counterpart characteristics, 73 Counters, 219 County courthouses, 147 Coupled matrix, 657 Court records, 147 Covariance, 650–651 Coverage ratios, 692 Crashes, 177 Crash programs, 194–196 Creative phase (job plans), 582–584 Credit appraisal, 698 Credit balance, 664 Credits, 664–665 in business assessments, 136 in business transactions, 672 recording, 675 revolving, 666 using, 673 Critical condition indicators, 219 Critical design review, 466 Critical success factors, 129 Crosby, P., 481 Cross-functional teams, 472, 604 CTP (process characteristics), 510 CTQ (quality characteristics), 510, 719 Cumulative density function, 618 Cumulative distributions, 170–171, 640 Cumulative frequency, 422 Cumulative rate of occurrence, 425–426 Current assets, 665 in annual reports, 671 net changes in, 670 ratio of total liabilities to, 697 Current controls, 282 Current liabilities, 665–666 analysis of, 691
in annual reports, 671
and cash flow, 702
net changes in, 670
Current ratio, 691
Current value, 680
Customer attributes, 201–202
Customer axis, 77–78
Customer desirability index (CDI), 77
Customer duty cycles, 291
Customer requirement planning matrix, 72
Customer requirements, 85–88
Customers
customer axis, 77–78
in evaluation of competitive products, 203
fast response to, 107
overarching, 54
perception of performance vs. importance, 131
perception of quality, 117–119
in process FMEA, 269
processing and resolution of complaints, 484
roles in customer/supplier relationship, 13
satisfaction of, see customer satisfaction
service hot lines for, 118
surveying, 118–119
types of, 229
view on quality, 114
voice of, 73, 83
wants and needs of, 53–54, 228–230
Customer satisfaction, 74
and benchmarking, 104
vs. customer service, 49–51
in expanded partnering, 25
levels of, 49
vs. loyalty, 49
in partnering, 23, 25
and product design, 113–114
and product performance, 288
and product reliability, 292–293
scorecard, 718
Y relationship to, 718
Customer service, see services
Customer/supplier relationship, 11–14
checklists of, 21–23
improving, 20–21
interface meetings in, 16
major issues with, 19–20
Customs and traditions, 582
D DAA (dimensional assembly analysis), 199 DaimlerChrysler, 203, 296–297 Dana Corp., 54 Databases, 146 Data failure distribution, 633
Data processing, 6, 493
Data recording, 526
DDB (double declining balance) method, 686–687
Death spiral symptom, 461
Debit balance, 664
Debits, 664–665
in business transactions, 672
recording, 675
using, 673
Debt
in annual reports, 671
and equity, 692, 697
long-term, 667
net reduction in, 670
in theory of firm, 661
Debt/assets ratio, 692
Decay time, 618
Decentralized benchmarking, 124
Decimal dimensions, 520
Decision analysis, 610
Decline and Fall (book), 661
Decline period, of product life cycle, 699–700
Decoupled designs, 542
Decoupled matrix, 657
Defective parts, 278
Defect matrices, 485
Defects, 209–210
detecting, 216
examples of, 213
matrices for, 484
as measure in TOC (theory of constraints), 462
mistakes as sources of, 212–213
preventing, 216
quality defects, 291
reliability defects, 291
zero, 483
Defense Technical Information Center, 340
Deferred compensation, 667
Deferred income taxes, 669
Deflection, 654–655
Deformations, 177
Degrees of freedom, 383, 407, 428
Dell Corp., 169
Delphi Automotive Systems, 169
Demand
and earnings, 693
factors affecting, 129
and sales forecasting, 710
Deming, W.E., 110, 480–481
Deming management method, 110–111
Deming wheel, 111–112
Demographic data, 146
Density, 180
Density function, 633
Departmental budgets, 711–712
Department benchmarking, 123
Department of Defense, 555
Depreciation, 684
accelerated, 687
accumulated, 665–666, 684
in cash flow analysis, 701
as expenses, 665, 669, 684
as part of cash flow, 685
of physical assets, 681
replacement cost, 687
straight line, 685–686
sum-of-the-years' digits (SYD), 686
as tax strategy, 684–685
as valuation reserve, 684
Derating, 359, 362
Descriptive feedback, 31, 33
Design controls, 249, 265–266
Design cost, 569
Design customers, 229
Design engineering, as measure of quality cost, 493–494
Design engineers, 269
Design FMEA, 224–225, 262
see also FMEA (failure modes and effects analysis)
calculating RPN (risk priority number) in, 267
describing anticipated failure modes in, 264
describing causes of failure in, 264–265
describing effect of failure in, 264
describing functions of design/product in, 264
detection table, 252
in DFSS (design for six sigma), 721
estimating failure detection in, 266–267
estimating frequency of occurrence of failure in, 265
estimating severity of failures in, 265
failure modes, 240–244
forming teams for, 263
functions of, 264
identifying system and design controls in, 265
linkages to process FMEA and control plans, 258–260
objectives of, 263
occurrence rating, 249
purpose of, 265
in QFD (quality function deployment), 725
recommending corrective actions in, 267–268
requirements for, 263
severity rating, 246
special characteristics for, 257
timing, 263
Design for manufacturability/assembly, see DFM/DFA
Design for six sigma, see DFSS
Design margins, 359–360
Design of experiments, see DOE
Design optimization, 178, 182–185
Design parameters, 336–337, 542, 656
Design phase, 5
Design reliability, 313
Design requirements, 87–88
Design reviews, 464
checklists of, 468
definition of, 362
FMEA (failure modes and effects analysis) in, 467
objectives of, 466
in R&M (reliability and maintainability), 352
sequential phases of, 465–467
in system/component level testing, 266
Designs, 183
axiomatic, 541–544
existing, 544
extensions and changes to, 544
new, 544
Design sets, 184
Design synthesis, 38–39
Design variables, 183–185
Destructive testing, 529
Detect delivery chutes, 219
Detection ratings, 250
and lowering risks, 267–268, 276
vs. occurrence ratings, 253
in surrogate machinery FMEAs, 282
Detectors, 216–219
Development risk, 195
Deviations, 91–93
Dewhurst, Peter, 202–203
DFM/DFA (design for manufacturability/assembly), 187–189
approach alternatives to, 198–199
business expectations from, 189–190
charters, 193
and cost of quality, 509–510
effects on product design, 204
elements of success in, 192–194
fundamental design guidance for, 204–205
instruction manuals for, 199, 200–203
mechanics, 199
objectives of, 187–189
in product design, 195, 204
product design in, 194
product plans in, 194
and product reliability, 298
sequential approach to, 191
simultaneous approach to, 191
tools and methods for, 198–199
use of human body in, 199–200
DFSS (design for six sigma), 9
Design of experiments, see DOE Design optimization, 178, 182–185 Design parameters, 336–337, 542, 656 Design phase, 5 Design reliability, 313 Design requirements, 87–88 Design reviews, 464 checklists of, 468 definition of, 362 FMEA (failure modes and effects analysis) in, 467 objectives of, 466 in R&M (reliability and maintainability), 352 sequential phases of, 465–467 in system/component level testing, 266 Designs, 183 axiomatic, 541–544 existing, 544 extensions and changes to, 544 new, 544 Design sets, 184 Design synthesis, 38–39 Design variables, 183–185 Destructive testing, 529 Detect delivery chutes, 219 Detection ratings, 250 and lowering risks, 267–268, 276 vs. occurrence ratings, 253 in surrogate machinery FMEAs, 282 Detectors, 216–219 Development risk, 195 Deviations, 91–93 Dewhurst. Peter, 202–203 DFM/DFA (design for manufacturability/assembly), 187–189 approach alternatives to, 198–199 business expectations from, 189–190 charters, 193 and cost of quality, 509–510 effects on product design, 204 elements of success in, 192–194 fundamental design guidance for, 204–205 instruction manuals for, 199, 200–203 mechanics, 199 objectives of, 187–189 in product design, 195, 204 product design in, 194 product plans in, 194 and product reliability, 298 sequential approach to, 191 simultaneous approach to, 191 tools and methods for, 198–199 use of human body in, 199–200 DFSS (design for six sigma), 9
SL316X_frame_INDEX_V. VI Page 613 Monday, September 30, 2002 8:17 PM
Index for Volume VI and APQP (advanced product quality planning), 45–47 and AQP (advanced quality planning), 40–47 and cost of quality, 510 essential tools in, 715–716 implementing with project management, 605–609 model, 716 partnering in, 9–25 physical and performance tests, 722–723 and project management, 608–609 quality engineering approach in, 25–33 and R&M (reliability and maintainability), 364–365 and reengineering, 516–517 and simulation, 185 stages in, 717–723 systems engineering in, 34–40 and TOC (theory of constraints), 463 transfer function in, 52 transformation functions in, 717–718 Diameter, 522 Difference between two means, 655–656 Differential costs, 703 Differentiation strategy, 101–102, 110 Digital Equipment Corp., 203 Digital signal processing, 622 Digitizing, 178 Dimensional assembly analysis (DAA), 199 Dimensional mistakes, 214 Dimensioning, 518–522 Dimensions, 522 Direct costs, 36, 703 Direct labor, 569 Direct magnitude evaluation (DME), 586 Direct materials, 569 Directories, 145–146 Direct product competitor benchmarks, 122 Discrete time, 621 Discretionary funds, 700 Discriminant analysis, 699 Discriminators, 477 Displacements, 179–180 Displacement sensors, 218 Disposable razor, failure modes in, 66–67 Distribution, 54 Diversity, in team systems, 29–30 Dividends, 670, 700 DMAIC model, 7 DME (direct magnitude evaluation), 586 Documentation, 40, 474 DOE (design of experiments), 367–370 see also experiments analysis, 405–420 ANOVA (analysis of variance), 407–410
613 classified data, 421–430 combination design, 415–418 graphical analysis, 405–407 signal-to-noise (S/N) ratio, 411–415 comparisons using, 369 confirmatory tests in, 418–421 definition of, 362 in DFM/DFA (design for manufacturability/assembly), 199 dynamic situations in, 430–441 group runs using, 369 loss function in, 397–398 parameter design, 441–447 planning, 372–380 in reliability applications, 335–336 setting up experiments, 380–395 signal-to-noise (S/N) ratio in, 403–404 Taguchi approach, 370–371 tolerance design, 447–454 Dominance factors, 273 Donneley demographics, 146 Door intrusion beams, 177 Double declining balance (DDB) method, 686–687 Double-entry bookkeeping, 672, 675 Double feed sensors, 218 Dow Jones, 146 Downtimes, 238, 278 Dry run, 362 Duane model, 361 Dun and Bradstreet, 146, 698 Dupont Connector Systems, 6 DuPont system, modified, 157, 704–705 Durability, 112 Durability life, 289 Dust, 289 Dynamic analysis, 176 Dynamic process, vs. static process, 1–2 Dynamic situations, 430–441
E Earning ratios, 693–695 Earnings, 692–693 and change management, 127 and luck, 693 retained, 670 Earnings before interest and taxes (EBIT), 668 Earnings per share (EPS), 662, 669 EBIT (earnings before interest and taxes), 668 Economic buyers, 117 Economic order quantity (EOQ) model, 707 Economies of scale, 155 Effort goals, 159 Eigenvalues, 177 Eigenvectors, 177
SL316X_frame_INDEX_V. VI Page 614 Monday, September 30, 2002 8:17 PM
614
Six Sigma and Beyond: The Implementation Process
Eight-level factors, 389 Elastic buckling, 176 Elasticity, 177, 179 Elastic of modulus, 654 Electrical design margins, 359 Electrical discharges, 289 Electrical measurements, 527 Electroforming, 204 Electronics industry, 2, 5 Element connectivities, 180 Element data recovery, 181 Element properties, 180 Elevating hopper feeders, 207 Emission standards, 54 Employees, 663 and benchmarking, 104, 107 motivation and earnings of, 693 Enclosures, 358 Encyclopedia of Business Information Services, 145 Encyclopedias, 145 Engineering in business assessments, 136–137 conformance elements in, 502 manufacturing, 494 nonconformance elements in, 503 plant, 494–495 Engineering analysis, 266 Engineering changes, costs of, 297–298 Enhancing functions, 58–59, 264 Environmental controls, 526 Environmental FMEA, 225 Environmental laws, 54 Environmental Protection Agency (EPA), 147 EOQ (economic order quantity) model, 707 EPS (earnings per share), 662, 669 Equal bilateral tolerance, 523 Equipment, 362, 675 Equipment errors, 526 Equity in balance sheet equation, 664, 674 and debt, 692 ratio to total debt, 697 of shareholders, 667 in theory of firm, 661 Equity/debt ratios, 692 Equity earnings, 484 Erlicher, Harry, 555 Errors, 526–527 eliminating, 212 inevitability of, 212 proofing, 208, 274 variables, 336–337 Essential functions, 58–59 Esteem value, 558
Estimated costs, 703 Euclid, 542 Euler buckling analysis, 176, 177 Evaluation phase (job plans), 585–591 Evaluation summary, 587 Evidence books, 475, 477 Excel (software), 182 Exchange value, 558 Excitement needs, 229 Excitement quality, 69–71 Executives, 663 Expanded partnering, 12–14 Expansion, coefficient of, 531 Expected customer life, 289 Expenditures, 680 Expenses, 701 vs. assets, 680–681 depreciation as, 684 and productivity, 702 Experiments, 249 see also DOE (design of experiments) analysis in, 405–410, 415–418 column interaction tables in, 384 confirmatory tests in, 418–421 degrees of freedom in, 383 dynamic situations in, 430–441 factor levels in, 380–382 factors with large numbers of levels in, 392 factors with three levels in two-level arrays in, 391 factors with two levels in three-level arrays in, 390–391 hardware test setups in, 385–386 inner and outer arrays in, 393 linear graphs in, 382–384 nesting of factors in, 392 orthogonal arrays in, 383–384 parameter design, 441–447 planning, 372–380 randomization of tests in, 394 test arrays in, 387–389 tolerance design, 447–454 Exponential distribution, 617, 641 in fixed-sample tests, 318–320 in reliability problems, 618 in sequential tests, 321–323 Exponential function, 619 Extended interior penalty functions, 185 External failure costs, 101, 483, 491 External gate hopper feeders, 207 External manufacturing, 6–7 External variations, 28 Extraordinary costs, 703 Extrusion, 204
SL316X_frame_INDEX_V. VI Page 615 Monday, September 30, 2002 8:17 PM
Index for Volume VI
F F.W. Dodge reports, 145 Fabrication process, 206 Factors, 380 ANOVA (analysis of variance) decomposition of, 410–411 choosing number of levels, 380–382 decomposition of, 410 effects of, 424–425 eight-level, 389 four-level, 389 nesting of, 392 nine-level, 390 test matrix for, 377 three-level, 385, 391, 392 in three-level arrays, 390–391 two-level, 390 types of, 393 Facts, and change management, 127 Fail safe design, 208 Failure, 362 causes of, 246–247, 264–265, 272–273 constant rate, 619 costs of, 483, 490–491 cumulative function, 635 detecting, 267, 274–275 effects of, 264, 271–272 free time, 643 logs of, 282 methods of determining, 247–249 occurrence rating, 249, 265, 273 probability of, 618–620 severity of, 265, 273 user costs, 484 Failure Definition and Scoring Criterion (book), 288 Failure modes, 240 cause and occurrence, 246–247 describing, 264, 270–271 design controls for, 249–250 detection rating, 250 determining, 66, 247–249 effects of, 243–245, 278 examples of, 242–243 in FMEA (failure modes and effects analysis), 242–244 in function diagrams, 66 in machinery FMEA, 277–278 process controls for, 249–250 severity rating, 244–245 Failure modes and effects analysis, see FMEA Failure rate, 633 conversion to MTBF (mean time between failures), 361
615 in failure-truncated tests, 318 as measure of product reliability, 290 and product life, 293–295 in R&M (reliability and maintainability), 355 and system failure, 629 Failure reporting, analysis, and corrective action system (FRACAS), 341, 352, 362 Failure-truncated tests, 318–319 FAST (functional analysis system technique), 567, 577–580, 712 Fatigue, 294 Fault tree analysis, see FTA Fax machines, in customer/supplier communications, 13 FEA (finite element analysis), 175 analysis procedure in, 178–179 common problems in, 182 definition of, 362 input to models in, 180 outputs from, 180–181 procedures in, 178 solution procedure in, 179–180 techniques in, 182 types of, 176–177 Feasibility, 362 Feasible designs, 183–184 Feature of size, 522 Features, 112, 522 Federal Database Finder, 147 Federal depository libraries, 147 Federal Reserve banks, 147 Feedback, 30–31 descriptive, 31, 33 loops, 30–31 negative, 35 positive, 35 systems, 31, 35 Feeders, 207 Feelings, and change management, 127 Fiber optical tests, 530 Fiber sensors, 218 Field performance data, 487 Field service reports, 279 FIFO (first-in first-out) method), 683 Finance, as measure of quality cost, 495 Financial analysis, 704–709 breakeven analysis, 704–705 contribution margin analysis, 706 EOQ (Economic Order Quantity) model, 707 IRR (internal rate of return) method, 709–710 modified duPont system, 704–705 NPV (net present value) method, 709 price-volume variance analysis, 707 return on investment analysis, 708–709 ROI (return on investment), 708–-709
SL316X_frame_INDEX_V. VI Page 616 Monday, September 30, 2002 8:17 PM
616
Six Sigma and Beyond: The Implementation Process
Financial assets, see also assets Financial benchmarking, 157 Financial forecasting, 688 Financial leverage in annual reports, 672 and earnings, 692 in financial comparison, 131 in modified duPont formula, 704–705 in rating bonds, 695 in rating stocks, 697 ratios, 692 Financial management rate of return (FMRR), 709–710 Financial planning, 710–712 Financial position, statement of, see balance sheets Financial rating companies, 695 Financial rating systems, 695 bond rating companies, 695–696 commercial credit ratings, 698 Financial ratios, 145 Financial reports, 146, 664 accountants' report, 671 annual reports, 671 audited, 671 balance sheets, 664–665 Financial statement analysis, 688 Finished product inspection, 528 Finite element analysis, see FEA Finite elements, 175–176 Firm, theory of, 661–662 First-in first-out (FIFO) method, 683 Fishbone diagram, 134, 348, 373 Fixed assets, 665–666 accumulated depreciation of, 684 in cash flow analysis, 701 in financial comparison, 131 as noncurrent assets, 667 Fixed burden costs, 569 Fixed costs, 569 in breakeven analysis, 704–705 in contribution margin analysis, 706 definition of, 703 Fixed-sample tests, 314 using binomial distribution, 315–316 using exponential distribution, 318–320 using hypergeometric distribution, 315 using Poisson distribution, 316 using Weibull and normal distributions, 320 Florida Power and Light, 143 Flow charts, 61–63, 153–154 Fluid mechanics, 177 FMEA (failure modes and effects analysis), 223–224 action plans in, 253–258 in benchmarking, 134
benefits of, 226 common problems in, 260–262 in core engineering process, 299 definition of, 224, 362 design FMEA, see design FMEA design/process controls in, 250 in design reviews, 467 detection rating in, 250 in DFM/DFA (design for manufacturability/assembly), 199 in DFSS (design for six sigma), 716, 721 failure mode analysis in, 240–242 forms, 235–238 vs. FTA (fault tree analysis), 469 function concepts in, 53, 64–68 getting started with, 228–235 history of, 226–227 initiating, 227–228 learning stages in, 262 machinery, see machinery FMEA in problem solving, 225–226 process, see process FMEA in quality lever, 227 scopes of, 236 steps of, 469 transferring causes and occurrences to forms, 250 transferring current controls and detection to forms, 254 transferring RPN to forms, 256 transferring severity and classification to forms, 248 types of, 224–225 typical body, 237 typical header, 236 FMRR (financial management rate of return), 709–710 Focus groups, 131, 147 Footnotes, in balance sheets, 670 Force, 527 Force field analysis, 128–129 Ford Motor Co., 41, 169, 203, 296–297 Forecasts in expanded partnering, 17 financial, 688 as measure of quality cost, 495 sales forecasting, 710 technology forecasting, 156–157 Forgetfulness, 210 Forging, 204 Fork lifts, 207 Formal qualification review, 466 Four-level factors, 389 FRACAS (failure reporting, analysis, and corrective action system), 341, 352, 362
SL316X_frame_INDEX_V. VI Page 617 Monday, September 30, 2002 8:17 PM
Index for Volume VI F ratio statistical test, 408 Freedom of Information Act, 147 Free-state conditions, 522 Freight costs, 570 Frequency distributions, 170 Friction, 179, 294 FTA (fault tree analysis), 299 definition of, 362 in design FMEA, 267 in determining causes of failures, 248 vs. FMEA (failure modes and effects analysis), 469 in QFD (quality function deployment), 725 in R&M (reliability and maintainability), 348, 355 seven-step approach to, 469–470 Fuji-Xerox, 143 Functional analysis, 156 Functional analysis system technique (FAST), 567, 577–580, 712 Functional area costs, 573 Functional benchmarking, 122, 123 Functional requirements, 541–543 Function journals, 145 Functions, 52–53, 574 alternatives to, 558–559 analysis and evaluation, 567–568, 575–580 basic functions, 574–575 and costs, 580 definition of, 52, 230 of designs, 264 determining, 567, 573–574 developing, 238 diagrams, 55–56 as dimension of product quality, 113 enhancing, 58–59 essential, 58–59 evaluating, 580–582 failure modes, 66–67 in FMEA (failure modes and effects analysis), 64–68, 230 objective, 183 organizing, 239–240 penalty, 185 in product flow diagrams, 56–61 of products, 264 in QFD (quality function deployment), 64–68 secondary, 575 task, 58–59 terminus, 66–67 tree structure, 239–240 types of, 264 in VA (Value Analysis), 64–68 in value analysis, 557 in value control, 567–568, 573–574
values, 581–582 Function tree process, 239–240 Functives, 55–56 Funds in annual reports, 671 in balance sheets, 669 discretionary, 700 sources of, 661, 669 use of, 670
G G. Heileman Brewing, 99 GAAP (generally accepted accounting principles), 672 Gages, 527 accuracy, 533 blocks, 531–533 in hierarchy of standards, 525 linearity, 533 repeatability, 533 reproducibility, 533 stability, 533 Galbraith, John Kenneth, 663, 681 Gale Research, 145 Gamma distribution, 625–631 Gamma functions, 626–631 Gamma ray tests, 530 Gap analysis, 158–159 Gasoline fumes, 289 Gates, 219 GD&T (geometric dimensioning and tolerancing), 199, 518–523 General design standards, 338 General Electric Co., 555 General journals, 675 General ledgers, 676 Generally accepted accounting principles (GAAP), 672 General Motors Corp., 296–297 annual report (1986) of, 670 AQP (advanced quality planning) in, 41 ROE (return on equity), 99 General services, as measure of quality cost, 501 Generic benchmarking, 122 Generic products, 89–94 Geometric analysis, 176 Geometric dimensioning and tolerancing (GD&T), 199, 518–523 Geometry, 180 Goals, 159 characteristics of, 159 customer-oriented, 131 guiding principles, 160 interdepartmental, 161
philosophy in setting, 159–160 in project management, 608 results vs. effort, 159 service/quality, 131 strategic, 19 structures of, 160–161 tactical, 19 Goals down-plans up (forecasting), 710 Goodwill, 491, 666 Gorton Fish Co., 169 Government, 663 Government Printing Office Index, 147 Government regulations, 129 Graham, Benjamin, 697–698 Graphical analysis, 405–407 Graph transmissions, 535–537 Grid point data recovery, 181 Gross assets, 695 Gross profit, 668 Gross profit margin, 130, 157, 668 Groups, see teams Group technology (GT), 199 Growth period, of product life cycle, 699–700 Growth rates, 663, 711 GT (group technology), 199 Guide rods/pins, 219 Guide to the Project Management Body of Knowledge (book), 599
H Habits, and productivity, 582 HALT (highly accelerated life test), 310 Handbooks, 145 Hardened tool steel, 531 Hardness testers, 527 HASS (highly accelerated stress screens), 310 Hazard rate, 294, 634–635, 643 Heat transfer, 179, 357 Heavy equipment industry, 200 Help-seekers, in team systems, 25 Hidden costs, 36 Hidden factories, 36 Highly accelerated life test (HALT), 310 Highly accelerated stress screens (HASS), 310 Histograms, 484 Historical costs, 679, 703 Historic data, 133, 266 Homeostasis, 31 Hood buckling, 177 Hopper feeders, 207 Horizontal beam deflection, 654–655 House of Quality matrix, 73, 140–141 Human body, in DFM/DFA (design for manufacturability/assembly), 199–200
Human mistakes, 210–212 Human resources conformance elements in, 506 in customer/supplier relationship, 22 in expanded partnering, 24 nonconformance elements in, 506 in partnering, 24 Humidity, 289, 526 Hypergeometric distribution, 315
I IBM, 109–110, 195 Identification mistakes, 210–211 Image, in quality, 113 Implementation phase (job plans), 591–592 attitude, 596 audit results, 597 goals, 591–592 organization, 594–595 plans, 592–593 principles, 593–594 system evaluation, 593 value council, 596–597 Importance/feasibility matrix, 141 Importance/performance analysis, 131 Importance rating, 84 Improvement potential, 141 Imputed costs, 703 Inadvertent mistakes, 211 Incentives, in benchmarking, 104–105 Inch dimensions, 520–521 Income after taxes, 484 Income before extraordinary items, 668–669 Income before nonrecurring items, 668–669 Income before taxes, 668 Income from continuing business, 668 Income statements, 664, 667–668 in annual reports, 671 ratio analysis of, 690 in summary of normal debit/credit balances, 674 Income taxes, 683–685 Incoming material inspection, 528 Incremental costs, 569, 703 Independent quality rating, 487 Indexing mechanisms, 206 Indicators, 21, 527 Indirect costs, 36, 703 Indirect labor, 569 Indirect materials, 569 Industrial cleanser, 89–94 Industrial engineering, 508 Industrial state, 663 Industry analysis, 129–130
Infant mortality period, 293 Infeasible designs, 183–184 Inflation, 679 and inventories, 682–683 and sales forecasting, 710 Information collecting, 564 in customer/supplier relationship, 22 in expanded partnering, 24 Information brokers, 148 Information phase (job plans), 563–564 cost visibility, 564–565, 568 functions in, 567–568, 573–574 information collection, 564 project scope, 565–567 Information systems and management in business assessments, 139 conformance elements in, 508 nonconformance elements in, 509 in partnering, 22, 24 Information theory, 542 Informative inspection, 217 Inherent availability, 292 In-house reviews, 467 Injuries, costs of, 36 Inland Steel, 99 Inner arrays, 393 Innovations, levels of, 549 In-process inspection, 528 Input, 27, 61 Input output method, 577 Inspections in classifying characteristics, 529 interpreting results of, 530 points, 528 process in, 206 purpose of, 528 stations, 487 techniques in, 217 types of, 528–529 Instruments, 525 Intangible assets, 667 Integrator approach, 199 Intellectual property, 22 Intelligence Tracking Service, 146 Intentional mistakes, 212 Interdepartmental goals, 161 Interest income, 668, 685 Interface matrix, 258–260, 279 Interference, 360 Interference testing, 321–323 Interim design review, 466 Intermittent transfer manufacturing, 206 Internal assessments, 132 Internal benchmarking, 122
Internal best performers, 143 Internal failure costs, 101, 483, 490 Internal manufacturing, 5–6 Internal organizations, for partnering, 14–15 Internal processes, 53 Internal rate of return (IRR) method, 611–612, 709–710 Internal standards and tests, 79 Internal variations, 29 International System of Units (SI), 520 Internet, in customer/supplier communications, 13 Intrinsic value, of assets, 680 Inventories, 682 cycles, 707 determining value of, 682–683 Economic Order Quantity (EOQ) model, 707 as internal failure cost, 490 Inventory control systems, 159 Inventory profits, 458 Inverse power model, 307–308 Inversion method, in problem-solving, 583 Investments, 458 assets as, 680 bonds, 695–696 capital, 661 and depreciation, 685 rating systems for, 695–696 stocks, 696–698 Invoice, 675 IRR (internal rate of return) method, 611–612, 709–710 Ishikawa diagram, 134 ISO 9000 certification program, 42 ISO/TS 16949 certification program, 42
J Jaguar, 296–297 JIT (just-in-time method), 199 Job plans, 559 creative phase, 582–584 evaluation phase, 585–591 implementation phase, 591–597 information phase, 563–568, 573–574 steps in, 561–562 vs. techniques, 562–563 Job shops, 207 Johnson Controls, 54 Joint costs, 703 Joint stiffness evaluation, 177 Joint ventures, 196 Judgment inspection, 217 Juran, J., 480 Just-in-time (JIT) method, 199
K Kaizen method, 142, 160 Kano model, 68–71 basic quality depicted in, 70 of customer needs, 229 in DFSS (design for six sigma), 715, 718 excitement quality depicted in, 70–71 performance quality depicted in, 70 and transformations, 53 Kepner-Tregoe analysis, 248 Key life testing, 299 K factor, 4 Kolmogorov-Smirnov test, 334
L L.L. Bean, 124, 143 Labor, 569 Laboratories, 487 Laboratory errors, 526 Lack of standards mistakes, 211 Landlords, 147 Last-in first-out (LIFO) method, 683–684 Law of maldistribution, 585 Laws of mechanics, 542 LCC (life cycle costs), 348 definition of, 363 in R&M (reliability and maintainability), 356–357 Leadership, in partnering, 21–24 Lead facilitators, in trade-off studies, 472–473 Lead times, 74 Leapfrog approach, 196 Learning curves, 155 Leasing agents, 147 Least cost strategy, 100–101 Ledgers, 676 Legal department conformance elements in, 509 as measure of quality cost, 495 nonconformance elements in, 509 Legislative summaries, 147 Leverage in annual reports, 672 and earnings, 692 in financial comparison, 131 in modified duPont formula, 704–705 in rating bonds, 695 in rating stocks, 697 ratios, 692 Liabilities accrued pension, 667 in balance sheet equation, 664, 674 current, 665–666
and current assets, 697 increasing, 669 noncurrent, 667 pension, 667 Life cycles, 133 of companies, 133 definition of, 363 of products, see product life cycle LIFO (last-in first-out) method, 683–684 Limits dimensioning, 523 Lindbergh, Charles, 554 Linear graphs, 382–384, 386 Linear measurements, 527 Linear static analysis, 177 Line elements, 176 Line organizations, 15 Liquid assets, 691 Liquidation value, 679 Liquidity, 665, 671 in financial comparison, 131 ratio analysis of, 691–692 Liquidity ratios, 691–692 Loads, 180, 181 Long-term debts, 667 Long-term process variation, 4–5 Loops, 184, 538 Loop transmission, 538 Losses, 671 and cash flow, 702 controlling, 36 as part of transactions, 672 Loss function, 397–398 calculating, 398–402 for LTB (larger-the-better) situations, 402 vs. process performance (Cpk), 402–403 and signal-to-noise (S/N) ratio, 403–405 for STB (smaller-the-better) situations, 401 Low safety factors, 294 LTB (larger-the-better), 402, 413 Lubricants, 33, 289 Luck, and earnings, 693
M Machine condition signature analysis (MCSA), 363 Machine customers, 229 Machinery FMEA, 224–225, 277 see also FMEA (failure modes and effects analysis) classification in, 279 current controls, 282 detection ratings, 282 failure modes, 277–278 and FTA (fault tree analysis), 348
identifying functions, 277 identifying scopes of, 277 occurrence ratings in, 282 potential causes in, 279 in R&M (reliability and maintainability), 351–352, 361 recommended actions in, 283 RPN (risk priority number), 282–283 severity rating, 279 Machining, characteristic matrix of, 63–64 Maclaurin series, 646, 649 Magnetic disk feeders, 207 Magnetic elevating hopper feeders, 207 Magnetic fields, 289 Magnetic particle tests, 530 Maintainability, 292, 338, 363 Maintenance records, 282 Major parts standards, 338–339 Malcolm Baldrige National Quality Award, 105 Maldistribution, law of, 585 Management benchmarking in, 98, 103–104, 124 and budgets, 662 in business assessments, 138 and earnings, 693 as measure of quality cost, 496 operational, 601 in partnering, 19 roles in customer/supplier relationship, 21–22 and security, 663 systems concept in, 35–37 in theory of firm, 662 Management process benchmarking, 122–123 Manpower, 483 Manuals, 145 Manufacturing-based view, 114 Manufacturing cells, 206 Manufacturing cost, 569, 571 Manufacturing engineering, 296 Manufacturing engineering sign-off approach, 199 Manufacturing engineers, 269 Manufacturing process, 206 approaches to, 207–208 in business assessments, 136 categories of, 206–207 as cause of product failures, 291 conformance elements in, 506–507 controls, 273–274 costs, 569 costs of, 571 design-related factors in, 197 external, 6–7 factors affecting, 197–198 functions, 269–270 improving, 489
internal, 5–6 one in, one out, 207 product design-related factors affecting, 197 in R&M (reliability and maintainability), 350 schematic diagram, 206 secondary, 206 and theory of non-constraints, 464 Margin/fit problems, 177 Marketable securities, 671, 681 Marketing advantages in, 74 conformance elements in, 505–506 as measure of quality cost, 497 nonconformance elements in, 506 Market niches, 54 Market research, 148 in DFSS (design for six sigma), 715, 718 in product development, 295 as source of benchmarking information, 144 in surveys, 118 Market segmentation, 102 in benchmarking, 125 in DFSS (design for six sigma), 717 planning, 45 Market segments, 54 Market share, 696 Mass, 527 Massachusetts Institute of Technology, 541–543 Mass production, 518, 726 Master Belt, in dealing with projects, 661 Master Black Belt, in dealing with projects, 661 Materials, 569 analysis of, 176 in business assessments, 136 direct, 569 errors in, 526 handling, 206–207 indirect, 569 as input in team systems, 26 as measure of quality cost, 497–499 properties of, 180 raw, 487, 523 as source of mistakes, 212 in TOC (theory of constraints), 458 Mathematical modeling, 266 Matrix analysis, 587–589 Maturity period, of product life cycle, 699–700 Maytag, 99 MCSA (machine condition signature analysis), 363 Mean deflection, 654 Means, difference between two, 655–656
Mean time between failures, see MTBF Mean time to repair, see MTTR Measurement mistakes, 214 Measurement systems, 524–525 interpreting results of inspection and testing in, 530 mechanical, 527 purpose of inspection in, 528–529 roles of, 525–527 sources of inaccuracy, 526 techniques and equipment, 527–528 testing methods, 529–530 Mechanical design margins, 359–360 Mechanical loads, 178–179 Mechanical measurements, 527 Mechanics, laws of, 542 Medical costs, 36 Mergers and acquisitions, asset values in, 680 Metal detectors, 218 Metric dimensions, 520 Metric tolerance, 520 Metrology, 524–525 interpreting results of inspection and testing in, 530 purpose of inspection in, 528–529 roles of, 525–527 techniques and equipment, 527–528 testing methods, 529–530 Microeconomics, 662 Microinches, 531 Micrometers, 527 Microswitches, 219 Miles, L.D., 555 MIL-HDBK-727 method, 203 Milliken and Co., 143 Millimeters, 520 Minority interest, 667 Mirror image (accounting), 676 Mission statements, 160 Mistake proofing, 208–209 in avoiding workplace errors, 210 devices for, 216–219 equation for success in, 218 inspection techniques in, 217 proactive system approach to, 216 in process FMEA, 274 reactive system approach to, 216 Mistakes, 213–215 causes of, 213–215 detecting, 216–217 examples of, 213 human, 210–212 preventing, 216–217 signals that alert, 215 sources of, 212–213
types of, 213–215 Mistakes of misunderstanding, 210 Mitsubishi method, 200–201 Models and modeling, 178 in engineering analysis, 266 in FEA (finite element analysis), 169 finite element, 180 redesigns of, 181–182 as tool of quality cost, 485 Modified duPont system, 157, 704 Money, see funds Monochrome monitors, 358 Monte Carlo method, 169 Moody's, 146, 696 Motion economy, principles of, 201 Motorola Inc., 1 benchmarking programs in, 109–110, 157 six sigma quality programs, 101 Mounting mistakes, 214–215 MSC/NASTRAN software, 180 MTBE (mean time between events), 354–355 MTBF (mean time between failures), 348 conversion to failure rate, 361 definition of, 363 in failure-truncated tests, 318 and inherent availability, 292–293 machine history of, 349 as measure of product reliability, 290 and occurrence ratings, 282 in R&M (reliability and maintainability), 348, 355 in sequential tests, 321 in time-truncated tests, 319–320 MTTF (mean time to failure), 363, 619 MTTR (mean time to repair), 348 definition of, 363 machine history of, 349 in R&M (reliability and maintainability), 292–293, 355–356 Musts and wants, 477
N Nasser, Jacques, 297 National Electrical Manufacturers Association (NEMA), 358 National Institute of Standards and Technology (NIST), 526 National reference, 525 National standards, 525 National Technical Information Center, 147 Need sets, 53 Negative confirmation, in customer satisfaction, 49 Negative feedback, 31, 35
NEMA (National Electrical Manufacturers Association), 358 Net assets, 695 Net income, 671 Net present value (NPV) method, 611, 709 Net profits, 459–460 New products, 710 Newsearch, 146 Newsletters, 145, 147 Newspapers, 145 Nine-level factors, 390 NIST (National Institute of Standards and Technology), 526 Node absorption, 539 Noise factors, 393, 411, 721 Noises, 336–337 Nominal dimension, 531 Nominal group process, 114, 132–134 Nominal size, 522 Non-constraints, theory of, 463–464 Noncontrollable costs, 703 Noncurrent assets, 667 Noncurrent liabilities, 667 Nondestructive testing, 530 Non-disclosure agreements, 17 Nonlinear analysis, 176 Nonlinear dynamic analysis, 177 Nonlinear static analysis, 177 Nonprofit organizations, 668 Nonrecurring expenses, 669 Non-rigid parts, 522 Non-statistical controls, 274 Normal density-like function, 647 Normal distribution, 320 in fixed-sample tests, 320 in sequential tests, 323 Normalizing constant, 308 Normal modes analysis, 177 Not invented here syndrome, 110 NPV (net present value) method, 611, 709 NTB (nominal-the-best), 413–415 in loss function, 399 signal-to-noise (S/N) ratio for, 404–405, 431, 439–441 Nuclear radiation, 289 Numerical evaluation, see paired comparisons
O Objective functions, 183 Object oriented analysis and design (OOAD), 515–516 Observed frequency, 422 Occupational safety laws, 54 Occurrence rating, 249
see also severity rating in design FMEA, 249 and lowering risks, 267, 276 in machinery FMEA, 282 in process FMEA, 250 reducing, 253 Odd part out method, 219 OE (operating expense), 458, 460–461 OEE (overall equipment effectiveness), 349, 356, 363 Office equipment, accounting of, 675 Omega method, 425 One Idea Club method, 124 One in, one out manufacturing process, 207 Ongoing program/project manager approach, 199 Online databases, 145 OOAD (object oriented analysis and design), 515–516 Open systems, 35 Operating characteristic curve, 313 Operating expense (OE), 458, 460–461 Operating hours, 525 Operating instructions, 73 Operating leverage, 682 Operating margin, 668 Operating profits, 484 Operational management, 601 Operational results, 23, 25 Operations mistakes, 214 Operations process benchmarking, 123 Operator errors, 526 Operator-paced free-transfer machines, 206 Operator to operator errors, 526 Opportunity cost, 195, 703 Optical measurements, 527 Optimal inventory cycle, 707 Optimization algorithms, 184–185 Optimization loops, 184 Optimum design, 183 Organizational change, and benchmarking, 126–129 Organizational suboptimization, 36 Organization expense, as noncurrent assets, 667 Orthogonal arrays, 383–384, 386 OSHA (Occupational Safety and Health Administration), 147 Outer arrays, 393 Outliers, 312 Out of pocket costs, 704 Output in process flow diagrams, 61 of teams, 28 Overall equipment effectiveness (OEE), 349, 356, 363 Overarching customers, 54
Overhead costs, 36, 130 Oxidation, 294
P Pace production line, 206, 208 Packaging, as cause of product failures, 291 Paired comparisons, 141, 586–587 Paper pencil assembly, 60 Parallel reliability block diagrams, 323–325 Parameter design, 371 in DFSS (design for six sigma), 715 in DOE (design of experiments), 441–447 in improving reliability, 336–337 Parameter Design approach, 31–32 Parametric variations, 178 Pareto, Vilfredo, 585 Pareto analysis, 44, 132–133, 484 Pareto voting, 585–586 Partial derivatives, 649 Partnering, 9–11 buyer/supplier relationship in, 11–12 checklists of, 21–23 in DFSS (design for six sigma), 13 expanded, 12–14, 23–25 implementing, 14–19 improving, 20–21 principles of, 11 process managers, 15–17 reevaluating, 17 success indicators, 21 typical questionnaire for, 18 Partnering for Total Quality assessment process, 21 Parts, 205 defective, 278 inclusion of wrong, 214 missing, 214 non-rigid, 522 in product design, 205 Parts/component feeding systems, 207 Part worths, 89–94 PASS (production accelerated stress screen), 310–312 PAT (profit after tax), 693–694 Patents, 147, 667 Path transmissions, 535, 538 Payback period method, 612, 708 PDGS-FAST system, 178 P diagrams, 299 in DFSS (design for six sigma), 715, 719 in FMEA (failure modes and effects analysis), 258–260 and team systems, 26 PDS (product design specifications), 717 P/E (price/earnings) ratio, 698
Peacemakers, in team systems, 25 Penalty functions, 185 Penetrant dye tests, 530 Pension liabilities, 667 Perceived quality, 113 Perception, 113 Perfect products, 194–196 Performance, 112 vs. importance, 131 index of, 558–559 needs, 229 parameters, 292 product-based view, 113 quality of, 69, 70 reviews of, 19 Performance evaluation review technique (PERT), 604 Period costs, 704 Periodic actions, 549 Periodicals, 145 Perishable tooling, 363 Personal computers, 195 Personnel in business assessments, 139 as measure of quality cost, 499 PERT (performance evaluation review technique), 604 PFIS (plant floor information system), 363 Philip Morris, 99 Phosphate-based liquid, 89–94 Phosphate-free liquid, 89–94 Phosphate-free powder, 89–94 Physical assets, 681–682 depreciation of, 684 inventories as, 682–683 operating leverage, 682 PIMS (Profit Impact of Marketing Strategies), 157 in benchmarking, 119 objectives and benefits of, 312 par report, 130 Pin joint clearance, 181 Planning matrix, 725 Plans up form (forecasting), 710 Plant administration, 504 Plant and equipment, 136, 701 Plant engineering, as measure of quality cost, 494–495 Plant floor information system (PFIS), 363 Plant reports, 480 Plasticity, 177, 181 Plug gages, 527 Plus-minus dimensioning, 523 Pneumatic gaging, 527 Point elements, 176 Poisson distribution, 316, 509–510, 636–640
Poisson process, 635–636 Poka Yoke method, 208, 274–275, 721 Portfolio analysis, 610 Positioning sensors, 218 Positive feedback, 35 Potential design verification tests, 258 Power supplies, 358 Practice gaps, 158 Predictive maintenance, 363 Pre-feasibility analysis, 38 Preference structure, 89 Preferred stocks, 669, 697 Preliminary design review, 466 Prentice Hall Almanac of Business and Industrial Statistics, 157 Prepaid costs, 704 Pre-planning matrix, 65 Preservation of knowledge, 74 Pressure, 527 Preventers, 216–217 Prevention costs, 482, 488 Preventive maintenance, 350, 363 Price/earnings (P/E) ratio, 698 Price-volume variance analysis, 707 Pricing, 119 and ROI (return on investment), 119 in theory of firm, 661 Primary reference standards, 525 Prime costs, 704 Principal, 685 Priorities, in FMEA (failure modes and effects analysis), 230 Prioritization matrix, 139–140 Proactive systems, in mistake proofing, 216 Probability density function, 618, 628 Probability distribution, 313, 636 Probability of configuration, 637–638 Probability of failure, 618–620 Probability of reliability, 621 Probability paper, 485 Probability ratio sequential testing (PRST), 363 Probes, 219 Process average shifts in, 1–2 short- vs. long-term standard variation in, 4–5 Process benchmarking, 122–123 Process characteristics (CTP), 510 Process Control Methods (book), 6 Process controls, 249, 268, 276 Process customers, 229 Process engineers, 269 Processes, 363 costs of, 571–572 dominance factors, 273 internal, 53
parameters of, 255 in partnering, 25 planning with project management, 607–608 in project management, 601–602 quality management in, 23 random vs. identifiable causes in, 133 short- vs. long-term variation in, 4–5 and six sigma, 1–2 special characteristics for, 257 standard deviation, 4–5 static vs. dynamic, 1–2 Process facilitators, in trade-off studies, 473 Process flow diagrams, 61–64, 234, 259 Process FMEA, 224–225 see also FMEA (failure modes and effects analysis) calculating RPN (risk priority number) in, 275 describing failure causes in, 272 describing failure effects in, 271–272 describing failure modes in, 270–271 describing process functions in, 269–270 detection table, 253 estimating detection of failure in, 274–275 estimating frequency of occurrence of failure in, 273 estimating severity of failures in, 273 failure modes, 242–244 forming teams for, 269 identifying manufacturing process controls in, 273–274 linkages to design FMEA and control plans, 258–260 objectives of, 268 occurrence rating, 250 recommending corrective actions in, 275–277 requirements for, 268–269 severity rating, 247 special characteristics for, 257 timing, 268 Process functions, 269–270 Process gaps, 158 Processing mistakes, 213–214 Processing omissions, 214 Process performance (Cpk), 2, 402–403 Process plans, 72, 201 Process quality, 23, 25 Process redesign, 511–512 Procrustes (Greek mythology), 29–30 Procurement, 10 Producers, 53 Producers’ risk, 313 Product assurance, as measure of quality cost, 499 Product-based view, 113 Product characteristic deployment matrix, 72
Product control, as measure of quality cost, 499–500 Product costs, 704 Product demand, and competition, 662 Product design and development, 194 basic vs. secondary processes in, 204 benefits of DFM/DFA (design for manufacturability/assembly) on, 189 case studies, 195–196 as cause of product failures, 291 and costs of engineering changes, 297–298 crash program approach to, 195 and customer satisfaction, 113–114 effects of DFM/DFA (design for manufacturability/assembly) on, 204 factors affecting manufacturing process, 197 focus of, 205 forming and sizing operations in, 204 functions of, 264 fundamentals of, 204 map guide to, 197 as measure in TOC (theory of constraints), 462 minimum performance requirements in, 198 perfect product approach to, 196 primary process in, 204 and product life cycle, 297–298 as product plan, 196–198 QFD (quality function deployment) in, 79–80, 86–88 reducing cost of, 189 reducing risks in, 267 reducing time for, 158 reliability in, 296–297 secondary process, 204 sequential approach to, 191 simultaneous approach to, 191 six sigma philosophy, 5–7 special characteristics for, 257 steps in, 295–296 Taguchi's approach to, 371 TDP (technology deployment process) in, 298–300 Product design specifications (PDS), 717 Product failures, 288, 290 Product flow diagrams, 56–61 Production, 364 costs of, 704 establishing conditions for, 725–726 mass production, 726 as measure of quality cost, 500 requirements in, 87–88 and team systems, 26 Production accelerated stress screen (PASS), 310–312
Productivity, 459–460 effects of customs and traditions on, 582 effects of habits on, 582 in theory of firm, 661 Product launching, 189 Product liability, 189, 491 Product life cycle, 133 and cost of engineering changes, 297–298 as a factor in product design, 194 and failure rate, 293–295 maturity period, 699–700 and product design, 297–298 stages of, 699–700 Product plans, 194, 196–198 Product quality, 112–117 eight dimensions of, 112–113 perception of, 117–119 and return on investment, 119 Product quality deployment, 73 Product recall, 491 Product reliability, see reliability Products, 364 characteristics of, 255 defects, 291 durability life, 289 environmental conditions profile, 289–290 expected customer life, 289 function diagrams for, 56–61 functions of, 264 life cycles of, 133 minimum performance requirements, 198 with multiple characteristics, 2–3 nonconforming, 3 non-price reasons in buying, 114 reliability numbers, 290 reliability of, 288 and sales forecasting, 710 Professional associations, 145, 147 Profilometers, 527 Profitability ratios, 693–695 Profit after tax (PAT), 693–694 Profit and loss statements, 667–668 Profit before tax, 693 Profit/equity ratio, 708–709 Profit Impact of Marketing Strategies, see PIMS Profit/investment ratio, 708–709 Profits, 570 analysis of, 704–707 in annual reports, 671 and axiomatic design, 545 calculating, 668–669 vs. cash, 678 in cash flow analysis, 701
direction of, 671 maximizing, 661–662 as part of transactions, 672 planning, 710–712 and productivity, 459–460 rating, 695–696 and ROI (return on investment), 459–460 in theory of firm, 661–662 Program management, as measure of quality cost, 500 Progressive-stress testing, 306 Project decision analysis, 612–613 Project management, 599–601, 604 decision analysis, 612–613 in DFSS (design for six sigma), 605–609 generic seven-step approach to, 603–605 goal setting in, 608 key integrative processes in, 602 processes in, 601–602 and quality, 603 in six sigma, 605–609 succeeding in, 613–615 value in implementation process, 607–608 Projects, 599–601 completing, 605 describing, 603–604 justification and prioritization of, 610–613 planning, 604 planning team for, 604 risk factors, 612–613 scopes of, 565–567 selecting, 597–598 starting, 605 Proprietary information, in expanded partnering, 17 Prospectus, 146 Prototype programs, 296 Proximity detectors, 219 PRST (probability ratio sequential testing), 363 Publications, as measure of quality cost, 500 Public bids, 147 Pugh concept selection, 230–231 in design FMEA, 267–268 in DFSS (design for six sigma), 715 in process FMEA, 275–276 Pulse echo tests, 530 Purchasing conformance elements in, 505 as measure of quality cost, 501 nonconformance elements in, 505 non-price reasons in, 114 Purchasing agents, 54 Purchasing performance benchmarks, 157 Purchasing power, 54
Q QAA (qualitative assembly analysis), 199 QFD (quality function deployment), 53, 71–72 benefits of, 73–74 combining with Taylor's motion economy, 200–201 definition of, 73 in design FMEA, 267 development of, 87 in DFM/DFA (design for manufacturability/assembly), 199 in DFSS (design for six sigma), 715, 717–718 function concepts in, 64–68 intangible benefits of, 727 issues with, 75–76 key documents in, 72–73 methodology, 80–84 and planning, 84–86 in prioritizing benchmarking alternatives, 140–141 process management in, 727–730 process overview, 76 in product development process, 79–80, 86–88 project plan, 76–79 stages of, 725–726 summary value, 727 tangible benefits of, 727 terms associated with, 73 total development process in, 75 QOS (quality operating systems), 345–346 QS-9000 certification program, 42, 345 Qualitative assembly analysis (QAA), 199 Quality, 112–117 alternative definitions of, 112–113 basic, 68–69 costs of, see quality costs customer-driven, 107 definition of, 84–85 excitement, 69–71 improving with quality cost, 492 manufacturing-based view of, 114 as measure in TOC (theory of constraints), 462 and operational results, 23, 25 perceived, 113 perception of, 117–119 performance, 69 planning, 22, 24, 41, 102–103 product-based view of, 113 and product reliability, 291–295 and project management, 603 qualitative tool for measuring, 44 and return on investment, 119 and ROI (return on investment), 119 tables, 73
transcendent view of, 113 user-based view of, 114 value-based view of, 114 Quality characteristics (CTQ), 510, 719 Quality control, 206 charts, 72, 478 conformance elements in, 507–508 as measure of quality cost, 501 nonconformance elements in, 508 in SQM (strategic quality management), 103 system, 478 Quality costs, 477–478 analyzing, 484–485 categories of, 481–482 components of, 481–483 concepts of, 480–481 conformance elements in, 502–509 data sources for, 487 and DFSS (design for six sigma), 509–510 improving quality with, 492 inputs, 481 inspecting, 487 laws of, 485 measuring, 483–484 nonconformance elements, 502–509 non-manufacturing measurements for, 492–502 optimizing, 483 outputs from, 482 presentation formats for, 485 product control as measure of, 499–500 quantifying, 482–483 tools of, 484 typical monthly report, 486 Quality defects, 291 Quality engineering in DFSS (design for six sigma), 25–33 in measuring methods, 483–484 Parameter Design approach in, 31–32 Quality engineers, 269 Quality failures, costs of, 101 Quality function deployment, see QFD Quality functions, 73 Quality operating systems (QOS), 345–346 Quality ratings, 487 Quality Systems Requirements, Tooling & Equipment (book), 345 Quantitative costs, 572–573 Quantum leap-parallel programs, 196 Questionnaires, for evaluating partnering process, 17–19
R R&D (research and development), 137, 501
R&M (reliability and maintainability), 345 building and installing, 352–353 concepts, 349–350 bookshelf data stage of, 349 manufacturing process selection stage in, 350 preventive maintenance needs analysis stage in, 350 conversion/decommission of, 353 Department of Defense standards, 337–342 developing and designing, 350–352 and DFSS (design for six sigma), 364–365 implementing, 346–347 key definitions in, 362–364 objectives of, 346 operations and support of, 353 phases in, 346 plans, 364 sequence and timing of, 348–349 targets, 364 tools and measures, 347–348, 354–361 R/1000, 290 Radius, 522 Radius gages, 527 Random variable approach, 632 Random variables constant raised to power of, 653 division of, 651–652 exponential of, 652–653 functions of, 651 logarithm of, 653–654 powers of, 652 in systems failure analysis, 632 Taylor series of, 650 Random variations, 133 Random walk theory, 688 Ranking teams, 473–474, 477 Rate of change of failure (ROCOF), 294–295 Rate of growth, 663 Rate of occurrence, 425–426 Ratings, for partnering process, 17–19 Rating services, 147, 695 Ratio analysis, 688–691 coverage ratios, 692 earning ratios, 693–695 leverage ratios, 692 liquidity ratios, 691–692 return ratios, 693–695 Raw materials, 152–153, 487 Rayleigh distribution, 641 RCA (root cause analysis), 255, 364 R control charts, 274 Reactive systems, in mistake proofing, 216 Recall system, 526 Receivables, 681
Reciprocating tube hopper feeders, 207 Redesign, 511–512 Reengineering, 511 conference method, 513–515 and DFSS (design for six sigma), 516–517 OOAD (object oriented analysis and design), 515–516 process redesign in, 511–512 restructuring approach, 512–513 Reference dimension, 522 Regression analysis, 711, 718 Regulatory requirements review, 259–260 Relationship matrix, 76, 82, 201 Relays, 358 Reliability, 287 block diagrams, 323–325 costs of unreliability, 296–297 and customer satisfaction, 292–293 definition of, 364 in design, 296–297 design, 313 as a dimension of product quality, 112 DOE (design of experiments) in applications, 335–336 environmental conditions profile, 289–290 of equipment, 292 exponential distribution in, 618 gamma distribution in, 627 growth, 364 growth plots, 361 and hazard functions, 634–635 improving through parameter design, 336–337 indicators of, 290 and maintainability, see R&M (reliability and maintainability) parallel, 323–325 probability of, 287–288, 621 and quality, 291–295 reliability numbers, 290 series, 323–325 specified, 312 specified conditions, 289 specified time period of, 288–289 system, 324 in TDP (technology deployment process), 298–300 visions in, 323 Reliability Analysis Center, 339–340 Reliability defects, 291 Reliability demonstration tests, 312–313 attributes tests, 313–314 fixed-sample tests, 314–316 operating characteristic curve, 313 sequential tests, 314, 317–318, 321–322 variables tests, 314, 318–320
Reliability function, 632–633 Reliability numbers, 290 Reliability point, 354 Reliability relationships, 632 Reliability standards, 338 Reliability tests, 300 accelerated testing, 305 objectives of, 301–302 planning, 301 sudden-death testing, 302–304 Rents, 668 Repair active repair time, 293 as internal failure cost, 490 planning, 41, 238 Replacement cost, 704 in calculating depreciation, 687 vs. current value, 680 Requirement analysis, 38 Research and development (R&D), 137, 501 Resource requirements, 151 Result gaps, 158 Result goals, 159 Retail Scan Data, 146 Retained earnings, 670 Return on assets, see ROA Return on assets managed (ROAM), 695 Return on equity, see ROE Return on gross assets (ROGA), 695 Return on investment, see ROI Return on net assets (RONA), 695 Return on net capital employed (RONCE), 694 Return on sales (ROS), 694 Return ratios, 693–695 Revenues, 668 in annual reports, 671 and costs, 711 in price-volume analysis, 707 rating, 696 Reverse engineering, 148 Revised RPN (risk priority number), 283–284 Revolving credit, 666 Revolving hook hopper feeders, 207 Rewards, 107 Rework, 41 Rigid links, 176 Rise time, 618 Risk priority number, see RPN Risks, 696 and axiomatic design, 546 calculating, 251–253 consumer’s risk, 313 and earnings, 693 in manufacturing, 195
in product development, 195 in projects, 612–613 rating, 696 reducing, 267, 275–276 ROA (return on assets), 694 as measure in TOC (theory of constraints), 462 in modified duPont formula, 704–705 in project management, 610 ROAM (return on assets managed), 695 Robert Morris Associates Annual Statement Summary, 157 Robust designs, 543 Robust teams, see teams ROCOF (rate of change of failure), 294–295 ROE (return on equity), 694 calculating, 708–709 in modified duPont formula, 704–705 ROGA (return on gross assets), 695 ROI (return on investment), 693–694 average rate of return, 708–709 as measure in TOC (theory of constraints), 462 and net profits, 459–460 payback period method, 708 and pricing, 119 and productivity, 459–460 in project management, 610–611 and quality, 119 Roll forming, 204 Roman Catholic Church, 672 Rome Air Development Center (RADC), 469–470 Rome Laboratory, 340 RONA (return on net assets), 695 RONCE (return on net capital employed), 694 Roof crush, 177 Roof matrix, 201 Root cause analysis (RCA), 255, 364 ROS (return on sales), 694 Rotary centerboard hopper feeders, 207 Rotary disk feeders, 207 Royalties, 668 RPN (risk priority number), 251–253 calculating, 267, 275 and characteristic/root causes of failure, 276 in machinery FMEA, 282–283 revised, 283–284 Rust inhibitors/undercoatings, 289 Ryan Airlines, 554
S Saab, 296–297 Sabotage, 212 SAE J1739 standard, 245 Safety margins, 359–360 Safety regulations, 54
Sales in annual reports, 671 in balance sheets, 668 in benchmarking, 123 in breakeven analysis, 704–705 in business assessments, 135 as cause of product failures, 291 costs of, 703 factors affecting, 710 in financial comparison, 131 forecasting, 710 maximizing, 662 as measure in TOC (theory of constraints), 462 promoting, 157 recording, 675 return on sales (ROS), 694 statistical forecasts of, 711 in surveys, 118 in theory of firm, 662 trend in, 487 Sales goals form (forecasting), 710 Salt spray, 289 Salvage value, 687 Sample data approach, 632 Sample difference, 656 Sample space, 622–623, 632 Sampling, 170, 528 SAVE (Society of American Value Engineers), 556, 593 Savings potential, vs. time, 560 Scale parameter, 625, 641 Scales, 527 Scandinavian Airlines (SAS), benchmarking in, 125–126 Scatter plots, 484 Scheduling, in project management, 603, 604 Schools and universities, 148 Scraps, 41, 278, 490 Screening methods, 585–591 Seat belts, 177 Seating arrangements, in meetings, 33 Secondary functions, 557, 575 Secondary manufacturing process, 206 Securities, 671, 681 Security and business management, 663 as measure of quality cost, 501 Segmentation, 102, 549 Self loops, 538–539 Seminars, 147 Senior management as executive customer/supplier partner, 14 in expanded partnering, 23–24 Sensitivity analysis, 475–476 Sensors, 216–219
Sequence resistors, 219 Sequential tests, 314 for binomial distribution, 317–318 graphical solutions, 318 using exponential distribution, 321–323 using Weibull and normal distributions in, 323 Sequential unconstrained minimization technique (SUMT), 185 Series reliability block diagrams, 323–325 Serviceability, 112, 293 Service FMEA, 225 Services in business assessments, 136 and customer satisfaction, 49–51 data on, 282 delivery of, 49–51 hot lines for, 118 and non-price reasons in buying, 114–116 Servo transformers, 358 Sets, 184, 624–625 Setup mistakes, 214 Severity rating, 245–247 see also occurrence rating components of, 279 in design FMEA, 246 estimating, 265 and lowering risks, 267, 276 in process FMEA, 247 reducing, 253 Shape parameter, 625, 641, 643 Shareholder's equity, 667, 670–671 Shareholders, 663 Shewhart cycle, 111–112 Shingo method, 208 Shipbuilding industry, 200 Shipping, as cause of product failures, 291 Shock, 289 Shock spectra, 179 Shoguns, in dealing with projects, 661 Short-term process variation, 4–5 Should-cost/total-cost models, 17 Signal factors, 393, 431–432 Signal flow graphs, 535–536 basic operations on, 538 effects of self loops on, 538–539 node absorption in, 539 rules of definitions of, 538 Signals, 27 Signal-to-noise (S/N) ratio, 393 calculating, 404 and loss function, 403–405 for LTB (larger-the-better) situations, 413 for NTB (nominal-the-best) situations, 413–415, 431, 439–441
for STB (smaller-the-better) situations, 412 in Taguchi approach, 370 Significant factors, effect of, 424 Simulated sampling, 170–175 Simulation, 169–170 in DFM/DFA (design for manufacturability/assembly), 199 and DFSS (design for six sigma), 185, 715 in sampling, 170–175 software for, 169–170 statistical modeling in, 485 in system and design controls, 266 as tool of quality cost, 485 Simultaneous engineering, 199, 364 Sine plates, 527 Single station manufacturing, 207 Single-entry bookkeeping, 672 Site inspections, 148 Six sigma, 1 see also DFSS (design for six sigma) and benchmarking, 105–107 equation for, 4 in external manufacturing, 6–7 in internal manufacturing, 5–6 philosophy, 1, 5–7 and product design, 5–7 and project management, 608 short- vs. long-term process variation, 4–5 Six Sigma Mechanical Design Tolerancing (book), 5 Skills development, in partnering, 21 Slow assets, 666, 670 Slowness mistakes, 211 SMEs (subject matter experts), 78 Smith, Adam, 661–662 Social events, 147 Society of American Value Engineers (SAVE), 556, 593 Software, 6, 504–505 Software FMEA, 225 Solid elements, 176 Solid mechanics, 177 Solutions, in partnering, 19 Solver (Excel), 182 Source inspection, 217 Spare parts use growth curves, 485 Special interest books, 145 Special purpose elements, 176 Specified dimensions, 523 Specified reliability (Rs), 312, 318 Spherical radius, 522 Spline gages, 527 Spoilage, 36 Spot weld forces, 177 Springs, 176
SQM (strategic quality management), 102–105 Squared deviations, 91–93 SS (sum of squares), 415–416 SSO (strategic standardization organization), 77 Stainless steel, 531 Standard's optimal inventory cycle, 707 Standard and Poor's, 145, 696–697 Standard cost, 478, 704 Standard deviation, 4–5, 91–93 Standard normal distribution, 647 Standards, 45 hierarchy of, 525 lack of, 211 Startup costs, 74 Startup losses, 278 State corporate filings, 147 Statement of changes, 669 Statement of condition, see balance sheets Statement of financial position, see balance sheets State variables, 183–185 Static process, vs. dynamic process, 1–2 Stationary hook hopper feeders, 207 Statistical analysis, 699, 711 Statistical modeling, 485 Statistical process control, 133 in monitoring team performance, 33 in process FMEA, 274 Statistical tolerancing, 721 Statistics for Experimenters (book), 397 STB (smaller-the-better), 401, 412 Steel industry, 200 Steering wheels, 33 Step-stress testing, 306 Stockholders, 667 Stock markets, 688 Stocks, 670, 696–698 Stock size, 522 Stoppage, 278 Stoppers, 219 Storage, as cause of product failures, 291 Straight line depreciation, 685–686 Strain energy distribution, 177 Strain gages, 181 Strategic goals, 19 Strategic Planning Institute, 119 Strategic quality management (SQM), 102–105 Strategic quality planning, 22, 24 Strategic standardization organization (SSO), 77 Stratification charts, 484 Stress, 176 Stress contours, 177 Structural pressure, 128 Sub-customers, 54 Subject matter experts (SMEs), 78 Suboptimization, 35–36
Subsidiaries, 667 Substructuring, 179 Subsystem view, 238 Success factors in business, 129–130 Success testing, 316–317 Sudden-death testing, 302–304 Sumerian farmers, 672 Sum of squares (SS), 415–416 Sum-of-the-years' digits (SYD), 686 SUMT (sequential unconstrained minimization technique), 185 Sunk cost, 704 Supermarkets, 116–117 Supervision, as measure of quality cost, 501–502 Suppliers, 10 and benchmarking, 104 councils/teams for, 15 evaluating and selecting, 14 involvement in partnering, 15 partnering managers for, 14–15 in process FMEA, 269 roles in customer/supplier relationship, 13 Supply, factors affecting, 129 Supporting functions, 264 Surface elements, 176 Surface plates, 527 Surplus in capital, 670 Surprise mistakes, 211 Surrogate machinery FMEA, 279, 282 Surveillance equipment, 148 Surveys, 118–119 Survival function, 642 SYD (sum-of-the-years' digits), 686 System/concept FMEA, 262 System controls, 265–266 System customers, 229 System design, 371 System failure, 627–629 System feedback, 31 System FMEA, 224–225, 262 System initial design review, 466 System/part FMEA, 238 System reliability, 324 System requirements review, 465–466 Systems, 34–35 definition of, 34 in management, 35–37 Systems engineering, 34 definition of, 37 design synthesis in, 38–39 pre-feasibility analysis in, 38 requirement analysis in, 38 trade-off analysis in, 39 verification in, 39–40 System view, 238
T TABInputs, 602 TABOutputs, 602 TABTools and techniques, 602 Tactical goals, 19 Taguchi, G., 481 Taguchi model, 259, 266 vs. axiomatic design, 543 in determining causes of failures, 249 in DFSS (design for six sigma), 716 in DOE (design of experiments), 370–372 loss function in, 397–398 in product design, 371 in QFD (quality function deployment), 725 signals in, 26 Tandy Computer, 195 Tap sensors, 218 Task benchmarking, 122 Task functions, 58–59, 239, 264 Tasks, estimating, 604 Tax deductions, 685 Tax shelters, 702 Taylor’s motion economy, 200, 203 Taylor series, 644–649 partial derivatives, 649 of random variable functions, 650 in two dimensions, 649–650 variance and covariance, 650–651 TDP (technology deployment process), 298–300 Team champions, in trade-off studies, 476 Teams, 26–27 aggressors in, 25 blockers in, 25 boundaries in, 29 conformance in, 29–30 cross-functional, 472 dealing with variations in, 30–31 in DFSS (design for six sigma), 25–26 environment, 28 external variations, 28 feedback to, 31 help-seekers in, 25 input, 27 internal variations, 29 minimizing effects of variations on, 31–32 monitoring performance of, 33 non-systems approaches to, 26 output/response, 28 peacemakers in, 25 ranking, 473–474, 477 signals, 27 system interrelationships in, 33 system structure of, 27–28 in trade-off studies, 472–473
Technical abstracts, 145 Technical axis, 79 Technical buyers, 117 Technical system expectations (TSE), 78 Technical targets, 78 Technology deployment process (TDP), 298–300 Technology forecasting, 156–157 Telecommunications industry, 6 Temperature, 29, 289, 526 Templates, 219 Terminus functions, 66–67 Test arrays, 387–389 Testing interpreting results of, 530 methods of, 529–530 reasons for, 529 Testing firms, 148 Test procedure errors, 526 Textbooks, 145 Text databases, 146 TGR/TGW (things gone right/wrong), 349, 364 Theory of constraints, see TOC Theory of firm, 661–662 Theory of non-constraints, 463–464 Thermal analysis, 357–359 Thermal conductivity, 358 Thermal expansion coefficient, 180 Thermal rise, 358 Thermal stresses, 177 Thermodynamics, 542 Things gone right/wrong (TGR/TGW), 349, 364 Thin plates, 176 Thomas Register, 146 Three-level factors, 385, 391, 392 Three-parameter Weibull distribution, 643 Throughput, 458 vs. costs, 461 obstacles to, 461–463 in TOC (theory of constraints), 463 Time, 525 Times interest earned ratios, 692 Time to total system failure, 627–628 Time-truncated tests, 319–320 TOC (theory of constraints), 457 five-step framework of, 464–465 foundation elements of, 463 goals of, 457–458 measurement focus in, 460–461 strategic measures, 458 vs. theory of non-constraints, 463–464 throughput vs. cost world in, 461 Tolerances, 523 bilateral, 523 cost of reducing, 448
in DOE (design of experiments), 371, 447–454 impact of tightening, 449 Tolerance stack studies, 266 Tolerancing, 518–522 conventional, 518 in DFSS (design for six sigma), 716 geometric, 518 and six sigma, 1 statistical, 721 Tooling, 364 Tooling engineers, 269 Tools and equipment design of, 200 wrong and inadequate, 214 Toothpaste industry, 102 Torque, 527 Total cost, 570 Total development process, 75 Total productive maintenance (TPM), 362–363 Total quality management (TQM), 102–105 TPM (total productive maintenance), 362–363 TQM (total quality management), 102–105 Traceability, 39 Tractors, 206 Trade and Industry Index, 146 Trade associations, 145 Trade journals, 145 Trade-off studies, 470–471 checklist of, 476 conducting, 471–475 hypothetical example of, 89 matrix, 477 ranking methods in, 473–474 selection process in, 474 sensitivity analysis in, 475 standardized documentation in, 474 in systems engineering, 39 weighting rule, 475 Trade shows, 147 Traditional engineering, 468 Training, 110 Transactions, recording, 672, 675 Transcendent view, 113 Transfer functions, 51–52, 719 Transformations, 52–53, 396, 717–718 Trend charting, 133 Trial balance (accounting), 676 Triggering events, 150 Trimetrons, 218 TRIZ theory, 230 in design FMEA, 267–268 in DFSS (design for six sigma), 715 foundation of, 548 and innovation, 548 and levels of innovations, 549
principles associated with, 550 in process FMEA, 275–276 tools, 549 Tryout period, of product life cycle, 699–700 TSE (technical system expectations), 78 Tumbling barrel hopper feeders, 207 Tungsten carbide, 531 Two-level factors, 390–391 Two-station assembly lines, 170–175
U U.S. Army, 203 U.S. Navy, 555 Ultrasonic tests, 530 U-MASS method, 202–203 Uncoupled matrix, 657 Unemployment insurance, 702 Unequal bilateral tolerance, 523 Uniform Commercial Code filings, 147 Unilateral tolerance, 523 United Technologies, 54 University of Massachusetts, 202–203 Unreliability, cost of, 294 Useful life period, 293–294 User-based view, 114 User groups, 147 User value, 558 Use value, 558
V Vacation pay, 702 Vacuum, 527 Valuation methods, 679–681 current value, 680 historical cost, 680–681 intrinsic value, 680 investment value, 680 liquidation value, 680 psychic value, 680 replacement cost, 680 Value, 557–558 and historical costs, 678 in quality, 114 types of, 558 Value analysis, 555 in DFM/DFA (design for manufacturability/assembly), 199 in DFSS (design for six sigma), 715 function concepts in, 64–68 and transformational activities, 53 Value-based view, 114 Value chains, 54 Value concept, 556
Value control, 553–555 developing alternatives in, 558–559 function analysis in, 573 functions in, 557 history of, 555 implementing, 559 job plans, 559–562 managing, 560 planned approach to, 556 techniques, 562–563 Value engineering, 555 attitudes in, 596 definition of, 556 developing plans in, 592–593 in DFM/DFA (design for manufacturability/assembly), 199 evaluating, 593 goals in, 592 in lowering costs, 581 project selection in, 597–598 selection methods in, 586 setting up organization in, 594–595 understanding principles of, 593–594 value council in, 596–597 Value Line Investment Survey, 697 Values and change management, 127 in goal setting, 160 Variable burden costs, 569 Variable costs, 704–705 Variables, 183–185 Variables tests, 314, 318–320 Variance, 480, 650–651 Variance of deflection, 654–655 Variations compensating for, 30–31 controlling, 30 external, 28 external variations, 28 internal, 29 minimizing effects of variations on, 31–32 random, 133 system feedback, 31 Velcro, 101 Vendors, 10 Verification, in systems engineering, 39–40 Vertical integration, 10 Vibrations, 177, 289 Vibration sensors, 218 Vibratory bowl feeders, 207 Visual inspection, 529 VOC (voice of customer), 73, 83, 201 Voice mail, in customer/supplier communications, 13
Volkswagen, 169 Volvo, 296–297
W Wal-Mart Corp., 117 Warehouse operations, 153–154, 157 Warranties, 289 costs of, 101, 294, 297 data, 279 as external failure cost, 491 as measure in TOC (theory of constraints), 462 periods, 289 reducing, 74 Wealth of Nations (book), 661–662 Wear out period, 294 Weather, 289 Web sites, 13, 111–112 Weibull distribution, 640–643 in fixed-sample tests, 320 in plotting and analyzing failure data, 323–333 in sequential tests, 323 three-parameter, 643 using, 334–335 Weibull failure distribution, 642–643 Weibull hazard rate function, 643 Weibull probability density function, 640 Weibull reliability function, 642 Weibull scale parameter, 307 Weibull shape parameter, 307 Weight, 527 Weighted average method, 684 Weightings, 474–475, 477 Welding point indicators, 218 Westinghouse Electric Co., 203 Where to Find Business Information (book), 145 Willful mistakes, 211 Work breakdown structure, 604 defining in projects, 604 improving efficiency in, 199–200 stoppage of, 278 Working capital, 661 and cash flow, 702 format, 666 net changes in, 670 Working standards, 525 Work place, 200, 210 Writing, earliest evidence of, 672
X X-bar charts, 274
Xerox, benchmarking programs in, 97, 108–109, 122, 124, 143 X-moving range charts, 274
Y Yearbooks, 145 Yellow pages, 145 Young’s modulus, 180
Z Zero-based budgeting, 712 Zero defects, 483 Zero-growth budgeting, 712 Z score, 699, 721 Z traps, 159