Psychological Assessment in the Workplace: A Manager’s Guide
Mark Cook and Barry Cripps
Copyright © 2005 John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England
Telephone (+44) 1243 779777
Email (for orders and customer service enquiries): [email protected]
Visit our Home Page on www.wiley.com

All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to [email protected], or faxed to (+44) 1243 770620.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The Publisher is not associated with any product or vendor mentioned in this book.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Other Wiley Editorial Offices
John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA
Wiley-VCH Verlag GmbH, Boschstr. 12, D-69469 Weinheim, Germany
John Wiley & Sons Australia Ltd, 33 Park Road, Milton, Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
John Wiley & Sons Canada Ltd, 22 Worcester Road, Etobicoke, Ontario, Canada M9W 1L1

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Library of Congress Cataloging in Publication Data
Cook, Mark, 1942–
Psychological assessment in the workplace: a manager’s guide / Mark Cook and Barry Cripps.
p. cm. Includes index.
ISBN 0-470-86159-2 (hbk.: acid-free paper) — ISBN 0-470-86163-0 (pbk.: acid-free paper)
1. Employees—Psychological testing. I. Cripps, Barry, 1938– II. Title.
HF5548.8.C596 2005
658.3′001′9—dc22
2004018409

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

ISBN 0-470-86159-2 (hbk)
ISBN 0-470-86163-0 (pbk)

Typeset in 10/12pt Times by Integra Software Services Pvt. Ltd, Pondicherry, India
Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham, Wiltshire
This book is printed on acid-free paper responsibly manufactured from sustainable forestry in which at least two trees are planted for each one used for paper production.
Contents

About the Authors
Preface
Chapter 1  Assessment in the Workplace
Chapter 2  Using Psychometric Tests
Chapter 3  Tests of Mental Ability
Chapter 4  Personality Tests
Chapter 5  Sifting and Screening
Chapter 6  References and Ratings
Chapter 7  Competence Analysis
Chapter 8  Assessment and Development Centres
Chapter 9  The Interview
Chapter 10  Structured Interviews
Chapter 11  Other Special Assessment Methods
Chapter 12  Using Assessment to Arrive at a Decision
Chapter 13  Workplace Counselling
Chapter 14  Performance Appraisal
Chapter 15  Training for Testing and Assessment
Chapter 16  Professional and Ethical Issues
Chapter 17  The Future of Assessment
References
Index
About the Authors
Mark Cook is a Chartered Occupational Psychologist. He completed his first degree and D.Phil. at Oxford University and has been a practitioner in occupational psychology and psychological assessment in the workplace since 1968. He directed the standardization of three important personality tests—California Psychological Inventory, Myers Briggs Type Indicator and FIRO-B—in the UK, and is a former Director of Oxford Psychologists Press, a leading UK test publisher. He has researched extensively on personnel selection and psychological testing, including psychological profiles of managers in formerly communist countries and health care workers, and is the author of Personnel Selection, 4th edn, published by John Wiley & Sons in 2003. Mark Cook teaches psychology of work at Swansea University.

Barry Cripps is a Chartered Occupational Psychologist consulting independently in industry, business, Higher Education and sport. His interests are in organisational learning and development, assessment and coaching. Having served as a main board Director of Training, External Examiner in HR to top ranking Business Schools, and verifier of test competence in the British Psychological Society, Barry Cripps brings his extensive experience of people assessment in the workplace to this book. Barry Cripps’ practice is in Dartington, Devon.
Preface
We have both been engaged in psychological assessment in the workplace for some years, in a variety of sectors, and also have extensive experience of using psychological tests. We perceived a need for a book that would describe the best techniques for psychological assessment, in a form accessible to managers. Our account is intended to be research based and critical. There are some good techniques for psychological assessment available; unfortunately there is also quite a lot that is not really worth using. We aim to sort the wheat from the chaff, for the guidance of line and HR managers. Our style of writing is intended to move away from academic language to explain concepts in a language comprehensible to managers. In selecting research to describe, we have focused on ‘real’ workers and ‘real’ managers, and have avoided citing research that uses college students to simulate managers or other workers, or uses laboratory simulations of work.

We would like to thank the many people who have helped us prepare this book. First, we would like to thank the many researchers in the selection area who have generously sent us accounts of research in press or in progress. Second, we would like to thank the managers and students on training courses whose questions and comments have helped us write this book. Third, we would like to thank Karen Howard for her help with the illustrations used. Finally, we would like to thank the Psychology team at John Wiley & Sons for their support and help.

Mark Cook
Barry Cripps
CHAPTER 1
Assessment in the Workplace
OVERVIEW
• Nine general headings in what is assessed.
• A preliminary list of ways of assessing people.
• An overview of what is assessed, and how.
• How assessments should themselves be assessed.
• The concepts of consistency (or reliability) and accuracy (or validity) in assessment.
• The correlation and what it means.
• Outlining some reasons why validity of assessment is sometimes low.
• The concepts of fairness and adverse impact in assessment.
• How to identify good work performance.
• How selection is currently practised in the UK, the USA and Europe.
INTRODUCTION

Consider the ideal employee, from the employer’s point of view. Mr/Ms Ideal is never late, never ill, never argues, never refuses to undertake an assigned task, doesn’t join a union, so doesn’t ‘go slow’ or ‘work to rule’ and certainly never goes on strike. Mr/Ms Ideal works 100% of the time, always does everything exactly right, but also does everything very quickly. Where will the employer find such a paragon? In the showroom of the nearest industrial robot supplier. But suppose we need a human Mr/Ms Ideal. How do we set about finding him/her? Think of someone in your organisation whom you consider a real liability and ask yourself:
• Why is he/she a liability?
• What does he/she do to make you think that?
• How did we come to employ him/her in the first place?
• What selection assessments did we make?

Now let us take a more positive slant. Think of someone who is a real asset, the sort of person you would definitely like more of, and ask yourself:
• Why is he/she an asset?
• What does he/she do to make you think that?
• How did we come to employ him/her in the first place?
• What selection assessments did we make?

Helping you get these decisions right is what the rest of this book is about. There are four main issues:
• Why we assess.
• What to assess.
• How to assess it.
• How to assess the assessment.
WHY WE ASSESS

We can distinguish four main reasons for assessing people in the workplace:
• selection
• promotion
• downsizing
• formal appraisal.
Selection is difficult. We are trying to predict how well someone will work, over a period of perhaps 10 years, on the basis of the information we can collect in a period lasting from 30 minutes to at most 3 days. We are limited by ethics, the law and the natural desire of applicants to present themselves in the best light.

Promotion and downsizing are logically the same as selection: we compare what we need with what people have to offer. The main difference is we have—or should have—much more, and much better, information. If someone has been doing the job for 5 years, we should have a pretty clear idea of what they are like and what they can do. Downsizing can be trickier than promotion for obvious reasons.

These days most organisations in North America and the UK have performance appraisal systems. Employees are formally assessed at regular intervals, and the assessments are used either to plan their training and development, or to determine their pay and promotion.
WHAT TO ASSESS

The short answer to this is: ability to do the job. A more detailed answer is provided by a competence analysis, which will list the main competencies that successful employees need (see Chapter 7). We give here a general list of the main headings for assessing staff:
• knowledge
• work skills
• social skills
• organisational fit
• required work behaviours
• physical characteristics
• mental ability
• personality
• interests and values.
Assessing Staff: General Attributes

Knowledge

Every job requires some knowledge. It may be high-level expertise, such as knowledge of current employment law, of human anatomy and physiology, or of aerodynamics and materials science. Or it may be something very limited and basic that can be acquired in a few minutes: where to find the broom and what to do with the rubbish when you have swept it up. Sometimes the knowledge element is implicit: employers tend to assume applicants can read and write, use telephones, give change etc. Knowledge can be acquired by training, so it need not necessarily be a selection requirement. Mastery of higher level knowledge may require higher level mental ability.
Work Skills

A skill is the ability to do something quickly and efficiently: bricklaying, heart surgery, driving a vehicle, valuing a property etc. Skills can mostly only be acquired by practice. Reading a book on driving does not enable you to drive, nor does watching someone else drive. Employers sometimes select for skills, and sometimes train for them. Mastery of some skills may require levels of mental or physical ability not everyone has. Some approaches to selection aim to assess only knowledge and skill(s), because they are very specific, fairly easy to assess and generally uncontroversial. Some psychologists and some managers see limitations in this, because some jobs change so fast that specific knowledge and skills may soon become out of date.
Social Skills

Social skills are important for many jobs, and essential for some. Here is a selection of words and phrases from job adverts: good communicator, gets on well with others, persuasive, able to control or influence people, teamwork, leadership. These skills are also sometimes called social intelligence, or currently emotional intelligence.
Organisational Fit

The term ‘organisational fit’ refers to whether the applicant’s outlook or behaviour matches the organisation’s requirements. These can be explicit: soldiers are expected to obey orders instantly and without question. Fit is often implicit: the applicant does not sound or look ‘right for us’, but there is no written list of requirements, or even a list that selectors can explain to you.
Required Work Behaviours

Examples of such behaviours are punctuality, (avoiding) absence, theft, disorder, and drink and drug abuse. These are often overlooked by psychologists and competence analysts, but all employers need people who come to work on time, and who do not steal, start fights etc. Most organisations also want people who believe that what the organisation does is useful and important, or who at least behave as if they do.
Physical Characteristics

Some jobs need specific physical abilities: strength, endurance, dexterity. Others have more implicit requirements for height or appearance.

Managers are sometimes suspicious of formal competence or job analyses, because they seem to leave important things out. Formal qualification in HRM, they argue, is not the whole picture when selecting a good HR manager. Psychologists like to think that what is left out is ability, personality, interests and values, which psychological tests can assess. We should always be careful that ‘what’s left out’ does not turn out to be ‘people like me’ or ‘white males like me’. There is always the risk that something you cannot articulate very clearly might be something you are not allowed to select for, for example, gender.
Mental Ability

Mental ability divides into general mental ability (or ‘intelligence’), and more specific applied mental skills, for example problem solving, practical judgement, clerical ability, mechanical comprehension etc.
Personality

‘Personality’ is a word that means different things to different people (discussed in more detail in Chapter 4). Psychologists tend to use it to describe from 5 to 30 underlying dispositions, or personality traits, to think, feel and behave in particular ways. An extravert person likes meeting people, feels at ease meeting strangers etc. An anxious person worries a lot, may cope with stress or pressure less well etc. Personality traits are thought to be fairly deep-seated parts of the person that underlie many aspects of their behaviour, and which might be difficult to alter. The employer will probably find it easier to select someone who is very outgoing to sell insurance, rather than trying to train someone who is presently rather shy.
Interests, Values and Preferences

Someone who thinks drinking alcohol is wrong will normally not seek out work in a bar; someone who wants to help others may find social work or religious ministry more rewarding than selling potato crisps; someone who believes that people should obey all the rules all the time may enjoy being a traffic warden. People often have to take work that does not match their ideals and values, but work that does may prove more rewarding. Applicants have sometimes already assessed themselves in this area, if they have sought career counselling.
Explicit versus Implicit Assessment

Some attributes are very explicitly required, especially knowledge and skill. There is no point applying for a post as a doctor without a medical degree. Other requirements are more implicit, sometimes barely even articulated: personality, interest and values, organisational fit, sometimes aspects of physical appearance.
Specific versus General

Some requirements are very specific, e.g. skill in comptometer operation. Others are much broader, e.g. adaptable, flexible or outgoing. Very specific skills are easy to assess, or can be trained for if lacking. The assessment is accurate, simple and creates few legal problems. Assessing broader requirements like adaptability or calmness tends to involve psychologists and tests, is often much less accurate, and creates more legal and PR issues. Our example illustrates the problem with very specific requirements: who uses comptometer operators today? It is an obsolete skill. In a time of rapid change we may need to assess broader underlying characteristics.
Figure 1.1 Underlying attributes or specific skills: a surface-specific skill such as bricklaying rests on more general underlying attributes such as strength and dexterity.
Underlying Attributes versus Surface Skills

This issue is linked to the previous one. We can assess surface knowledge and skills, such as bricklaying or knowledge of employment law, or we can assess characteristics that underlie the ability to learn to lay bricks, such as strength and manual dexterity, or to learn employment law, such as general and verbal mental ability (Figure 1.1).
Recommendation: Ask yourself what assessments of staff you currently make, under each of the nine headings: knowledge, work skills, social skills, organisational fit, required work behaviour, physical characteristics, mental ability, personality, interests and values. Are these assessments explicit or implicit? Do you assess anything that our list has left out?
HOW TO ASSESS

Traditional Selection Methods

Figure 1.2 summarises the traditional approach to selecting staff in the UK and the USA. The advertisement attracts applicants, who return an application form. Applicants with satisfactory references are shortlisted and invited for interview. The employer seeks to attract a large pool of applicants, then pass them through a series of filters, until the number of surviving candidates equals the number of vacancies. The traditional filters are:
• application form
• letter of reference
• interview.

It became apparent early in the twentieth century that these traditional methods are not always very effective, starting a search for something better. Newer methods include:
• psychological tests, of mental ability, personality, interests and values etc.;
• work sample tests, such as typing tests or bricklaying tests;
• biodata, or biographical questionnaires, which look for common themes in the background of, e.g. successful salespersons;
• structured interviews;
• assessment centres, which include group exercises and discussions, as well as written assessments.

Most ‘new’ methods have actually been around a long time. Work samples can be traced back to the Boston, Massachusetts streetcar system in 1913; the first personality questionnaire dates from 1917; assessment centres were first used during World War II. The only significant major new development in the past 25 years is the structured interview. Perhaps we have already found every possible way of assessing people that exists, or could exist. We will return to this question in Chapter 17.

Figure 1.2 Successive stages in selecting staff: the advertisement attracts applicants; at each stage (application, references, interview) some are rejected and the rest considered further, until the final selection is made.
Recommendation: Ask yourself which methods of assessing staff you currently use. Distinguish if necessary between different types and level of job. Do you use any assessment methods that we have not mentioned?
What × How

We have a list of things we want to assess, and another list of ways to assess them. Different methods are used for different aspects of applicants. Table 1.1 tries to indicate which methods are used for assessing what aspects. An X means that assessment is a main way of assessing that aspect: for example, we assess social skills by interview or assessment centre, and mental ability by test. We cannot assess social skills from the application form, nor mental ability by work sample test. An (x) in Table 1.1 means the aspect can be assessed by that method, but in a more incidental way: personality questionnaires may provide some information about social skill. A U in Table 1.1 indicates that the assessment may assess, or try to assess, the underlying basis of that aspect. Personality tests might pick out the sort of people who will be absent a lot, or drink too much or otherwise be deficient in required work behaviour. One thing that leaps out from Table 1.1 is the shortage of Xs for some aspects of applicants. There are not very many core ways of assessing social skills, or required work behaviours, or interests and values.

Table 1.1 How to assess main employee characteristics. The table is a matrix of the nine ‘what’ headings (knowledge, work skills, social skills, organisational fit, required work behaviours, physical ability, mental ability, personality, interests and values) against the assessment methods listed in the notes; X marks a main way of assessing an aspect, (x) a more incidental one, and U an assessment of its underlying basis.
Notes: Q, Qualifications; AF, Application Form; R, Reference; I, Interview; SI, Structured Interview; AT, Ability Test; PI, Personality Inventory; B, Biodata; AVI, Attitudes and Values Inventory; WS, Work Sample; AC, Assessment Centre; PR, Peer Rating.
Recommendation: Draw up your own version of Table 1.1, indicating how you assess, and what you assess.
HOW TO ASSESS THE ASSESSMENT

Assessment methods need to be assessed against six main criteria. An assessment should be:
• reliable, meaning it gives a consistent account of the person being assessed;
• valid, meaning it selects good applicants and rejects bad ones;
• fair, meaning it complies with equal opportunities legislation;
• acceptable, to applicants as well as to the organisation;
• cost-effective, meaning the assessment saves the organisation more than it costs to use;
• easy to use, meaning it fits into the selection process, and allows the selectors to appear organised and prepared to the applicants.
Selection methods do not automatically possess all these qualities; we need research to tell us which possess what. Few assessment methods meet all six criteria so our choice of assessment is always a compromise. We will return to this issue in a later chapter, after reviewing assessment methods in turn.
RELIABILITY: DOES THE ASSESSMENT GIVE CONSISTENT RESULTS?

Reliability means consistency. Physical measurements, e.g. the dimensions of a piece of furniture, are usually so reliable their consistency is taken for granted. Selection assessments often are not so consistent. At their worst they may be so inconsistent that they convey little or no information. The Rorschach test asks people what they see in inkblots, and tries to interpret this as evidence of personality, especially conflicts and defences. The Rorschach is not often used in the UK because it is very unreliable, in two ways:
• what a person sees in the blot on Monday may be quite different from what they see on Wednesday;
• what one expert says about what the person’s Rorschach means may be quite different to what a second expert says.

A test that gives inconsistent results from day to day, and that you cannot agree about, is not very useful. Unfortunately there are a lot of assessments about that do fail these two simple tests. There are several types of reliability in general use:
• Re-test reliability is calculated by comparing two sets of information—Rorschach interpretations, or interview ratings, or IQs—obtained from the same set of people, on two separate occasions, typically a month or so apart. If the test is assessing some enduring aspect of the person, as selection assessments are intended to, the two sets of information should be fairly similar.
• Inter-rater reliability is calculated by comparing two sets of ratings given by two assessors to a set of applicants they have both interviewed, or observed in a group discussion, or written a reference for. If the assessors do not agree, one at least of them must be wrong, but which?
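Both kinds of reliability come down to comparing two sets of scores obtained for the same people, usually expressed as a correlation (explained in the next section). As a rough, hypothetical sketch of how such checks might be run on your own records, the Python fragment below uses invented ratings; the variable names and figures are ours, not taken from any real assessment.

```python
# Minimal sketch: reliability as consistency between two sets of scores
# obtained for the same ten people. All ratings are invented.
from statistics import correlation  # available in Python 3.10+

# Re-test reliability: the same applicants assessed twice, a month apart.
first_occasion = [4, 3, 5, 2, 4, 3, 5, 1, 2, 4]
second_occasion = [4, 2, 5, 3, 4, 3, 4, 2, 2, 5]
print("Re-test reliability:", round(correlation(first_occasion, second_occasion), 2))

# Inter-rater reliability: two assessors rating the same applicants.
assessor_a = [3, 4, 2, 5, 3, 4, 1, 2, 4, 3]
assessor_b = [3, 5, 2, 4, 2, 4, 2, 2, 5, 3]
print("Inter-rater reliability:", round(correlation(assessor_a, assessor_b), 2))
```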
Recommendation: Ask yourself what information you have about the reliability of the assessments you presently make of staff.
CORRELATION

Selection research uses correlation very extensively. If you are familiar with correlation, you can skip this section. Correlation is the extent to which two characteristics go together. In selection, we are usually looking at the correlation between the assessment and work performance, but let’s first look at a more concrete example. Height and weight are correlated: tall people usually weigh more than short people, and heavy people are usually taller than light people. However, height and weight are not perfectly correlated—there are plenty of short fat and tall thin exceptions to the rule. Figure 1.3 shows height plotted against weight, for 100 people in their twenties. The correlation summarises how closely two measures like height and weight go together. A perfect one-to-one correlation gives a value of +1.00. Height and weight correlate about 0.75, meaning a strong relationship, but not perfect.

Figure 1.3 Height plotted against weight for 100 people in their twenties, showing a positive correlation of 0.75 (the scatter runs from short, light people to tall, heavy people, with short fat and tall thin exceptions).
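For readers who want to see the arithmetic, the short Python sketch below computes a correlation coefficient from first principles for a handful of made-up height and weight measurements. These are illustrative figures, not the data behind Figure 1.3, so the printed value will not match the 0.75 quoted above.

```python
# Minimal sketch of how a correlation coefficient is calculated.
# Height and weight figures are invented for illustration only.
from math import sqrt

heights_cm = [160, 165, 170, 172, 175, 180, 182, 185, 188, 193]
weights_kg = [55, 62, 60, 70, 68, 75, 72, 80, 86, 92]

def pearson(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    spread_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    spread_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (spread_x * spread_y)

# +1.00 = perfect one-to-one relationship; 0.00 = no relationship at all.
print(round(pearson(heights_cm, weights_kg), 2))
```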
Zero Correlation

If two measures are completely unrelated, the correlation is zero—0.00. For example, we know that extraversion and conscientiousness are not correlated. Knowing how extravert someone is will tell us nothing about how conscientious they are, whereas knowing someone’s weight gives us a generally fairly good idea of their likely height.
Weak Correlations

In selection, we are looking at the correlation between selection assessment and work performance. It would be nice if we could find selection methods that correlated 0.75 with work performance, like the results in Figure 1.3. Unfortunately, most selection methods do not work that well, and we very rarely find a link as close as that between height and weight. Figure 1.4 shows the sort of results that are more typical. Overall there is some relationship between, e.g. interview rating and work performance, but there are a lot of cases of good performers being rejected and poor performers being accepted. Figure 1.4 shows a correlation of 0.30, which is fairly typical of what you find when comparing selection rating with work performance ratings. Note how many good applicants are being rejected, and how many poor applicants are being accepted.

Figure 1.4 Interview rating plotted against work performance, showing a correlation of 0.30 (a much weaker scatter in which some poor applicants are selected and some good applicants are missed).
VALIDITY: IS THE ASSESSMENT ACCURATE?

Validity means accuracy. A valid selection method accepts good applicants and rejects poor ones. This is clearly the most important thing to check for in any assessment. There is little point using an assessment method that is completely inaccurate. Unfortunately most ways of assessing staff are less than perfectly accurate, and many are so inaccurate they convey little or no useful information.
Validation Research

The basic building block of selection research is the validation study. A validation study collects two sets of information, predictor and criterion:
• the predictor is the assessment: interview, test, reference etc.;
• the criterion is some index of work performance.

Let us take the example of the interview. Interviewer ratings form the predictor, while the criterion is a performance appraisal rating, after one year on the job. Table 1.2 shows the information we need to collect. We have 100 people to study in this example. We then calculate a correlation between predictor (interview rating) and criterion (appraisal rating), which is the validity coefficient. In our example, this gives 0.32, meaning there is some link between interview rating and performance appraisal—in other words, that the interview is working, and is a valid selection method. Note that the interviewer’s opinion of the candidates has to be quantified, typically as a rating. You can correlate interview ratings, but not interview impressions, such as ‘good chap’ or ‘I didn’t think much of him’. For some types of assessment, hundreds, even thousands, of validation studies have been carried out. Work psychologists summarise these to find how well each assessment does on average, and whether it does better or worse in particular circumstances (for example, with particular sorts of work or particular types of people). Basic validation research is fairly easy to follow: the bigger the correlation, the better. Two linked aspects of validation—construct validity and incremental validity—are a little more complicated, but very important for anyone planning a selection programme. Both concern what an assessment is actually assessing, and how it relates to other assessments.

Table 1.2 The information we need to calculate the validity or accuracy of a selection assessment

Employee number   Name   Interview rating   Performance appraisal rating
1                 JS     4                  4
2                 MM     3                  4
3                 BK     1                  2
4                 SM     2                  3
...
100               AB     3                  4
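As a hypothetical sketch of the mechanics, the Python fragment below lays out predictor and criterion scores in the shape of Table 1.2 and correlates them to give the validity coefficient. The records and figures are invented for illustration; they are not the data behind the 0.32 quoted above.

```python
# Minimal validation-study sketch: correlate a predictor (interview rating)
# with a criterion (appraisal rating after one year) for the same employees.
# All records are invented for illustration.
from statistics import correlation  # available in Python 3.10+

records = [
    # (employee number, interview rating, performance appraisal rating)
    (1, 4, 4), (2, 3, 4), (3, 1, 2), (4, 2, 3), (5, 5, 4),
    (6, 3, 2), (7, 4, 5), (8, 2, 2), (9, 5, 5), (10, 1, 3),
]

predictor = [interview for _, interview, _ in records]
criterion = [appraisal for _, _, appraisal in records]

validity = correlation(predictor, criterion)
print(f"Validity coefficient: {validity:.2f}")  # the closer to 1.00, the better
```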
Recommendation: Ask yourself what information you have about the validity of the assessments you presently make of staff.
Construct Validity

When a new selection system is devised, people sometimes ask themselves: ‘What is this assessing?’ ‘What sort of person will get a good mark from it?’ One answer to this question should always be ‘People who will do the job well’. But it is worth going a bit deeper and trying to get some picture of what particular aspects of the applicant the test is assessing: abilities, personality, social background, specific skills etc. The technical name for this is construct validity. There are several reasons why it is important to explore construct validity:
• If your new test is mostly assessing personality, and you already use a personality test, you may well find that your new test is not adding much to the personality test. Your new test may not be called a ‘personality test’; it may be labelled ‘emotional intelligence’ or ‘sales aptitude’.
• If your 2-day assessment centre measures the same as a 30-minute ability test, it would be much cheaper to use the 30-minute test.
• If applicants complain about your selection methods, and you have to defend them in court or tribunal, you may want to be able to say exactly what your test assesses, and what it does not. You may be made to look very silly if you cannot!
• If you understand what you are doing, you can often devise ways of doing it better.
• If your new test turns out to be mostly assessing intellectual ability, you will be alerted to the possibility of adverse impact on certain groups.

Construct validity is usually assessed by comparing one selection method, e.g. interview ratings, with other methods, e.g. psychological tests, by calculating correlations. This is fairly easy to do if you have an ongoing assessment programme, and you keep good records. Construct validity tells you what a particular method is actually assessing (which is not necessarily the same as what it is intended to assess). For example, the traditional interview turns out to be assessing mental ability to a surprising extent.
Recommendation: Ask yourself what information you have about what the assessments you presently make of staff are actually measuring.
Incremental Validity

Suppose you are using an interview and a mental ability test to select office workers. Someone sells you an expensive computer simulation of key office skills. The people who sell the simulation produce lots of evidence that it predicts good office work very well. After a year or so, management start to complain that the expected improvement in the quality of office workers has not materialised. Did the people who sell you the test mislead you about its validity? No: the simulation does predict good work, but so do the tests you were already using. The new simulation covers exactly the same ground as your existing interview and mental ability test, so it does not improve your overall selection procedure at all. The technical term for this is incremental validity.

This problem always arises when sets of selection tests are used. You can never assume that the validities of all the tests in the set can be simply added, to give overall validity. Many selection tests cover the same ground, so adding new tests often does not improve overall validity. If you use two equally valid tests, there are three possible outcomes:
• both tests together do twice as well as either alone;
• both tests together do a little better than either alone;
• the second test adds nothing at all.

All three outcomes are possible, but a little better is probably the most likely. The fact that tests have different names or claim to be measuring differing aspects of employee effectiveness does not mean they will not turn out to be duplicating each other’s efforts. This problem is now being analysed and researched more and more, so advice can be given on combinations of tests that are likely to improve accuracy, and combinations of tests that, while effective in their own right, will not build on each other at all.
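The arithmetic behind this can be illustrated with the standard formula for the multiple correlation of two predictors with one criterion. The Python sketch below is purely illustrative: it assumes two tests that each correlate 0.30 with work performance, and varies only how strongly the two tests correlate with each other.

```python
# Illustrative sketch of incremental validity, using the standard formula
#   R^2 = (r1^2 + r2^2 - 2*r1*r2*r12) / (1 - r12^2)
# where r1 and r2 are each test's validity and r12 is the correlation
# between the two tests themselves. All figures are for illustration.
from math import sqrt

def combined_validity(r1, r2, r12):
    return sqrt((r1 ** 2 + r2 ** 2 - 2 * r1 * r2 * r12) / (1 - r12 ** 2))

r1 = r2 = 0.30  # two equally valid tests
for r12 in (0.0, 0.6, 0.99):
    print(f"tests correlate {r12:.2f} -> combined validity {combined_validity(r1, r2, r12):.2f}")
# Roughly: 0.00 -> 0.42 (a clear gain), 0.60 -> 0.34 (a little better),
# 0.99 -> 0.30 (the second test adds almost nothing).
```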
REASONS FOR POOR VALIDITY

Why do selection assessments sometimes fail to give good results?
• Low reliability. An assessment that does not succeed in measuring much cannot predict much either. Low reliability can sometimes be improved by training, in e.g. interviewing, or by being careful to use the assessment correctly.
• Imperfect measures of work performance. We calculate a correlation between the selection assessment and some index of work performance. The most frequent index is performance appraisal rating. Managers who do these will know why they can sometimes be poor reflections of true performance. Some organisations even today lack clear ideas about good and poor work performance, so cannot define it accurately. If we have defective work performance measures, we cannot achieve valid selection. If we do not know what we are looking for, we cannot know if we have found it. For this reason, the organisation should make sure it has an effective performance appraisal system, or some other way of assessing people’s work. If necessary, the organisation should devote some time to clarifying what it exists to achieve, and how it could be certain it has been achieved, and who by.
• Selection is not the whole story. Employees’ work performance is influenced by many other factors, besides those characteristics we can assess during selection. Employees’ performance is affected by how they are managed, by the organisation’s culture, by the employment market and by the economy as a whole.
• Impression management or faking good. Selectors expect applicants to present themselves well. One sometimes hears interviewers complain that an applicant ‘didn’t make much of an effort’ (to describe his/her achievement and potential at sufficient length and with sufficient enthusiasm). Presenting oneself well unfortunately often shades into exaggeration, and from there into lying. Many selection methods rely on information the applicant provides, and often have no way of verifying it. We will come across this limitation quite frequently in our more detailed review of selection assessments.
• Quality of information. Table 1.3 is Table 1.1 again, but with some indication of the nature of the information each assessment provides. This divides into:
  – self: information provided by the applicant, on the application form, in the interview, and when answering questionnaires;
  – reported: information provided by other people about the applicant, principally through the reference, but also in ratings of him/her;
  – recorded: the applicant has obtained a qualification, which indicates he/she has knowledge or a skill;
  – demonstrated: the applicant performs a task or demonstrates a skill, in a work sample, or on a test with right and wrong answers.
  Assessments that demonstrate a skill are clearly better evidence than ones where the applicant claims to possess it. If someone says he/she can use a computer, we may doubt him/her, but if they succeed in doing specified tasks with it, there is no room for doubt.

We noted previously that some of our nine main headings of things to assess were short of good means of assessment. Table 1.3 adds a further problem. There is a far greater shortage of assessments that clearly demonstrate the quality in question, as opposed to relying on someone’s word for it—usually the applicant’s. Only knowledge, skill and mental and physical ability can be assessed by demonstration.
Table 1.3 How to assess main employee characteristics, indicating the nature of the information obtained. The table repeats the matrix of Table 1.1, with each cell showing whether the information is recorded, reported by others, reported by the applicant, demonstrated, or reported by self and/or demonstrated.
Notes: Q, Qualifications; AF, Application Form; R, Reference; I, Interview; SI, Structured Interview; AT, Ability Test; PI, Personality Inventory; B, Biodata; AVI, Attitudes and Values Inventory; WS, Work Sample; AC, Assessment Centre; PR, Peer Rating. rec, recorded; rep, reported by others; self, reported by self; dem, demonstrated; S/D, reported by self and/or demonstrated.
FAIRNESS: WILL THE ASSESSMENT CREATE ADVERSE IMPACT?

Fairness in selection means conforming with equal opportunities laws and codes of conduct. All developed countries have these. The USA led the way with the Civil Rights Act (CRA) of 1964, which prohibited discrimination in employment on grounds of race, colour, religion, national origin or gender. Later laws in the USA brought in age and disability. Similar laws followed in the UK: the Race Relations Act (1976), the Sex Discrimination Act (1975) and the Disability Discrimination Act (1995). Figure 1.5 shows how fair employment laws work in the USA; other developed countries, including the UK, have followed the same general model and adopted many of the key concepts.
• An assessment that excludes too many of a protected minority, such as non-white persons, is said to create adverse impact.
• The employer can remove the adverse impact by quota hiring to ‘get the numbers right’, i.e. equal number of men and women, correct proportion of ethnic minorities etc.
• Or else the employer can argue that the adverse impact is justified because the selection test is job related, meaning accurate or valid.

Figure 1.5 Stages in deciding whether a test is legally ‘fair’: choose an assessment; if it creates no adverse impact there is no problem; if it does, quota hiring removes the problem; otherwise the employer must show the test is job related, and that there is no equally valid alternative, before the test counts as fair and can be used.
• The employer who succeeds in proving the test job related faces one last hurdle—proving that there is no alternative test that is equally valid but does not create adverse impact.

Note that adverse impact is not what the layperson thinks of as ‘discrimination’. Adverse impact does not mean turning away minorities or women, to keep the job open for white males, or otherwise deliberately treating women or minorities differently. This is clearly unacceptable, and not likely to be done by any major employer these days. Adverse impact means that the selection method results in more men or majority persons getting through than women or minority persons. Adverse impact means an employer can be proved guilty of discrimination, by setting standards that make no reference to race or gender, and that may seem well-established, ‘commonsense’ practice.
• Height and strength tests for police, fire brigade and armed forces create adverse impact, because they exclude more women.
• In Britain some employers sift out any applicants who have been unemployed for more than six months, in the belief they will have lost the habit of working. The (UK) Commission for Racial Equality (CRE) argues this creates adverse impact on ethnic minorities, who are more likely to be unemployed.
• The important Griggs case in the USA ruled that high school diplomas and ability tests created adverse impact, because fewer African American applicants had diplomas or reached the pass mark set on the ability test.
• The Green v. Missouri Pacific Railroad case in the USA showed that excluding applicants with criminal records created adverse impact because more non-whites than whites had criminal records.

Adverse impact is very easy to prove; all you need is a breakdown of the workforce by gender and ethnicity. If there are too few women or minorities, there has been adverse impact. Table 1.4 shows the composition of the UK House of Commons in 2002. There are clearly far ‘too few’ women and minority MPs. In the USA, the Uniform Guidelines introduced the four-fifths rule. If the number selected divided by the number of applicants for a protected minority is less than four-fifths of the highest ratio for any group, adverse impact applies. Table 1.4 shows how this would apply to the House of Commons: there are still ‘too few’ women and minority MPs.

Table 1.4 Composition of the British House of Commons in 2002, actual and expected

            Actual   Expected   Expected (four-fifths rule)
Male        530
Female      120      325        260
White       641
Minority    9        39         31

Note: ‘Expected’ composition is based on the assumption that MPs are selected regardless of race and gender, and that 6% of the population belong to an ethnic minority.
(In fact the law does not apply to the House of Commons, because MPs are not classed as employees.) Adverse impact assesses the effect of selection, not the intentions of the people who use it.

Adverse impact is a very serious matter for employers. It creates a presumption of discrimination, which the employer must disprove, possibly in court. This will cost a lot of time and money and may create damaging publicity. Selection methods that do not create adverse impact are therefore highly desirable, but unfortunately not always easy to find. If the employer is convinced that the assessment is necessary, even though it creates adverse impact, they must prove it is job related or valid. For example, a police force that thought all officers needed to be at least six feet tall would have to prove that shorter men and women were unable to do the job as well. Some American police forces tried to do this, without success. They were unable to demonstrate any relationship between height and any indicator of effective performance in police work. The risk of legal challenge, and possible poor publicity, means that the HR department often has to act defensively and may make our account of assessment sound a little negative or pessimistic in places.

US law covers age discrimination, but British law does not—yet. The British government plans to comply with European Directives about age discrimination and introduce legislation by 2007. Britain has recently—December 2003—added religious belief and sexual orientation to the list of legally protected minorities.
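As a worked illustration of the four-fifths rule (the applicant and hiring figures below are invented, not drawn from any real case), each group’s selection ratio is compared with the highest group’s ratio; anything below four-fifths (0.80) of that ratio flags adverse impact.

```python
# Minimal four-fifths-rule check. Applicant and hiring counts are invented.
applicants = {"men": 200, "women": 120}
hired = {"men": 60, "women": 18}

selection_ratios = {group: hired[group] / applicants[group] for group in applicants}
best_ratio = max(selection_ratios.values())

for group, ratio in selection_ratios.items():
    share_of_best = ratio / best_ratio
    verdict = "adverse impact" if share_of_best < 0.8 else "no adverse impact"
    print(f"{group}: selected {ratio:.2f} of applicants, "
          f"{share_of_best:.2f} of the best ratio -> {verdict}")
# Here men are selected at 0.30 and women at 0.15; 0.15 / 0.30 = 0.50,
# which is below 0.80, so the method would be flagged for adverse impact.
```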
Recommendation: Keep careful records of the gender and ethnicity of all the people you assess at work, and of everyone who applies for employment with you. Check these periodically to ensure that you are not selecting out too many of any group at any stage of the assessment process.
ACCEPTABILITY: WHAT WILL APPLICANTS THINK OF THE ASSESSMENT?

Selection methods should be acceptable to applicants. In times of high unemployment, employers may feel they can ignore applicants’ reactions. In times of labour shortage, unpopular methods could drive applicants away. Applicants’ views of selection methods may influence their decision on whether to accept a job offer. A recent Belgian survey even reports that applicants who do not like an organisation’s assessment methods may stop buying its products (Stinglhamber et al., 1999). Surveys indicate that some assessment methods are more popular with applicants than others: they like interviews, work samples and assessment centres, but do not like biodata, peer assessment or personality tests. Applicants see more job-related approaches as fairer: simulations, interviews and more concrete ability tests (vocabulary, maths). They see personality tests, biodata and abstract ability tests (letter sets etc.) as less fair because they seem less job related. Preferences in the USA and France are generally similar, but personality tests and graphology are more acceptable in France, although still not very popular. Research indicates that people like selection methods that are job related, but do not like being assessed on things they cannot change, such as personality.
Recommendation: Ask applicants what they think of your assessment procedures. Ask yourself how you would feel about completing the assessments you use.
COST: DOES THE ASSESSMENT JUSTIFY ITS COST?

A good selection method saves the employer more in increased output than it costs to use. Employers can calculate the costs of an assessment (and usually do—so many tests at £5 each, so many psychologists at £500 a day etc.). It is more difficult to calculate the benefits of good selection. Some employers have estimated the cost of getting it wrong, of hiring the wrong person; the US Corporate Leadership Council in 1998 generated estimates based on wasted salary and benefits, severance pay, headhunter fees, training costs and hiring time, for three levels of full-time employee:
• Entry level: $5–7K
• $20K salary: $40K
• $100K salary: $300K

However, the problem goes deeper than avoiding obvious mistakes. Work performance is not either/or; psychologists argue that it is normally distributed, like the data in Figure 1.6, for people making socks (‘hosiery loopers’).

Figure 1.6 Distribution of output in 199 hosiery loopers, as a percentage of employees against dozens of pairs per hour. Source: From Tiffin (1943).

The best
worker does seven dozen pairs an hour, while the worst worker apparently manages only four pairs an hour. The great majority of loopers, however, are in between, falling either side of an average of four dozen pairs an hour. They are not good enough to be outstanding, nor poor enough to be considered for termination. Distributions of this type have been found for other occupations. The distribution in Figure 1.6 means that HR’s job is more than just avoiding employing the handful of very poor workers. HR should maximise the number of really good workers. If the sock factory can find workers whose output is five or six dozen pairs per hour, rather than two or three, they can double output (or halve the workforce). Critics have seen a flaw in this argument. Not every organisation in the country can employ the best employees; someone has to employ the rest. Good selection cannot increase national productivity, only the productivity of employers who use good selection methods to get more than their fair share of talent. Employers are still free to do precisely that. The aim of this book is to explain how.
GOOD WORK PERFORMANCE

Selection research compares a predictor, meaning an assessment, with a criterion, meaning an index of the person’s work performance. The criterion side of selection research presents problems of its own because it requires us to define good work performance. This can be very simple when the work generates something we can count: widgets manufactured per day or sales per week. The problem can also be simple if the organisation has an appraisal system whose ratings can be used. The supervisor rating criterion is widely used, because it is widely available, simple and hard to argue with. How do you know X is a good worker? Because management says so. On the other hand, the criterion problem can get very complex, if we want to explore rather deeper into what constitutes effective performance. Questions about the real nature of work, or the true purpose of organisations, soon arise:
• Is success better measured objectively by counting units produced, or better measured subjectively by informed opinion?
• Is success at work unidimensional or multidimensional?
• Who decides whether work is successful? Different supervisors may not agree. Management and workers may not agree. The organisation and its customers may not agree.
Recommendation: Ask yourself what definition of good work performance your organisation presently uses.
CURRENT SELECTION PRACTICE

We have a lot of surveys to tell us how selection is currently done in North America, the UK and Europe. We know rather less about the way it is done in the rest of the world.
The USA

Figure 1.7 summarises a recent survey of 250 US employers (Rynes et al., 1997). Reference checks are the most frequently used assessment, followed by structured interviews, drug tests, then unstructured interviews. Biodata, assessment centres and psychological tests, of either personality or ability, are rarely used. Another survey (Harris et al., 1990) probed deeper, and asked why personnel managers choose or do not choose different selection methods:
• the most important factor was accuracy;
• the least important factors were cost and fairness;
• factors of middling importance were fakability, offensiveness to applicant, and how many other companies use the method.

Another survey, by contrast, asked personnel managers why they did not use particular methods (Terpstra and Rozell, 1997):
• some techniques they did not think useful: structured interviews, mental ability tests;
• some they had not heard of: biodata;
• some they did not use for fear of legal problems, notably mental ability tests.
Figure 1.7 Survey of 251 American employers, showing the frequency of use of methods used to select experienced staff (1 = never; 3 = sometimes; 5 = always). Methods covered: grades, reference, structured interview, interview, ability test, personality test, biodata, assessment centre, work sample, drug test and work trial. Source: Data from Rynes et al. (1997).
Table 1.5 The US Merit Principles Survey

Method                     Extent used: Great   Extent used: Moderate
Prior work experience      71                   25
Interview                  69                   26
Quality of application     39                   46
Reference                  36                   41
Level of education         38                   44
Major field of study       34                   28
Personal recommendations   20                   37
College GPA                15                   28
Reputation of college      10                   19
Written test               7                    19

Source: Data from US Merit Systems Protection Board (2000).
The most recent survey (US Merit Systems Protection Board, 2000) describes the US government sector, and lists methods used to ‘a great’ or ‘a moderate’ extent. Table 1.5 shows that interviews are still the most popular assessment, with application form and reference also widely used. The US public sector avoids written tests, and gives little weight to which university people went to, or how well they did there.
The UK

The most recent UK survey (Hodgkinson et al., 1996) finds the traditional combination of application, reference and interview used by most employers, but being joined by other methods, including ability and personality tests and assessment centres (Figure 1.8).
Figure 1.8 Frequency of use of selection methods used by 176 UK employers (1 = never; 3 = sometimes; 5 = always). Methods covered: application form, reference, interview, structured interview, ability test, personality test, biodata and assessment centre. Source: Data from Hodgkinson et al. (1996).
Other UK surveys have looked at more specialised areas:
• For graduate recruitment, employers use application form, interview and reference at the screening stage, then interviews or assessment centres for the final decision.
• Recruitment agencies, used by many employers to fill managerial positions, always use interviews, usually use references, sometimes use psychological tests, but rarely use biodata or graphology.
• University staff are selected by application form, reference and interview, and virtually never by psychological tests or assessment centres. Most universities also ask applicants to make a presentation to existing academic staff—a form of work sample test.
• One-third of the British workforce work for small employers, with fewer than 10 staff, where employers rely on interviews (where they try to assess honesty, integrity and interest in the job, rather than ability).
Europe

The Price Waterhouse Cranfield survey (Dany and Torchy, 1994) covers 12 Western European countries and 9 methods. The survey reveals a number of interesting national differences:
• the French favour graphology, but no other country does;
• application forms are widely used everywhere except in the Netherlands;
• references are widely used everywhere, but are less popular in Spain, Portugal and the Netherlands;
• psychometric testing is most popular in Spain and Portugal, and least popular in West Germany and Turkey;
• aptitude testing is most popular in Spain and the Netherlands, and least popular in West Germany and Turkey;
• assessment centres are not used much, but are most popular in Spain and the Netherlands.
Elsewhere

We know much less about selection in other parts of the world. Surveys of New Zealand (Taylor et al., 1993) and Australia (Di Milia et al., 1994) find a very similar picture to the UK; interview, references and application are virtually universal, with personality tests, ability tests and assessment centres used by a minority. The same seems true of Africa: selection in Nigeria and Ghana generally uses interviews and references, occasionally paper and pencil tests, work samples or work simulations.
CHAPTER 2
Using Psychometric Tests
OVERVIEW
• The need for standardisation of assessment procedures, and how to achieve it.
• Understanding the test administration process.
• Guidance on running a Test Administration Workshop.
• Stages in handling test results: from raw scores, through standard score systems and norming scores, to recording and reporting scores.
• A case study covering the mechanics of a selection centre.
INTRODUCTION What is a psychological test? The British Psychological Society (BPS) provides the following definition: a procedure for the evaluation of psychological functions. Psychological tests involve those being tested in solving, performing skilled tasks or making judgements. Psychological test procedures are characterized by standard methods of administration and scoring. Results are usually quantified by means of normative or other scaling procedures but they may also be interpreted qualitatively by reference to psychological theory. Included in the term psychological test are tests of varieties of: intelligence; ability; aptitude; language development and function; perception; personality; temperament and disposition; and interests, habits, values and preferences.
There are two main types of psychometric test used in workplace assessment: • tests of maximum performance, or mental ability tests; • tests of typical performance, or personality tests. Tests of mental ability in turn divide into general mental ability, aptitude and achievement tests:
• Achievement or attainment tests assess how much someone knows about a particular body of knowledge, e.g. use of Microsoft Excel. Conventional school and university exams are another type of achievement test. So too are professional exams such as CIPD.
• Aptitude tests assess how easy it would be for someone to acquire knowledge or skill they do not presently possess, e.g. of computer programming.
• Tests of general mental ability assess how good the individual is at understanding and using information of all types.
THE BENEFITS OF USING PSYCHOMETRIC TESTS
When seeking to calculate the benefits to an organisation of using psychometric tests we should ask several questions:
• What is your organisation doing now about testing?
• How is testing being used?
• Does the organisation need to use tests?
• Does the organisation have a written policy document on using tests in the workplace?
Testing can benefit an organisation in a range of ways:
• helping with selection by identifying staff with potential;
• helping with development by identifying staff with potential;
• improving morale;
• demonstrating commitment to people;
• increasing retention;
• reducing costs and saving time, which can be converted into bottom-line profit;
• producing in-house benchmarks and norms for good performance;
• objectively assessing people against known standards and tests;
• demonstrating fairness to all;
• ensuring consistency over time, through the application of valid and reliable ways to assess people.
ADMINISTRATION OF TESTS
The Need for Standardisation
One of the greatest influences on test results, besides candidate performance, is the way that tests are administered. Variable, erratic and inconsistent administration practice produces inaccurate test results. An obvious example would be varying the recommended time in an ability test; another would be rushing through administration instructions and practice examples, leaving candidates unclear about how to approach questions and give their answers. One reason the traditional
interview does not work very well is precisely the lack of standardisation; interviewers are inconsistent in what they ask different candidates. Besides making the interview less accurate, this can create grounds for complaints of unfairness. Test administrators should be trained to conduct every testing session following the same, rigorous standard routine. Every person who takes a particular test anywhere should experience the same administration, because this will ensure their responses are comparable. With personality and interest inventories, test takers should be able to focus on themselves in a relaxed, purposeful way, to give their own preferred responses free from outside distraction. The process of making everything the same, and thus fair, for each test taker is called standardisation.
Test Administration Certificate
In the UK the BPS plays a key role in monitoring test standards and reviewing test materials. The BPS (2003) has published Psychological Testing: A User's Guide, which gives valuable guidance about using psychological tests and the principles of good test use, including a summary of the areas covered by tests and testing in the test use guidelines published by the International Test Commission (2001). The BPS offers a Test Administration Certificate (TAC), which qualifies people to administer, score and norm psychological tests, but not to interpret them or explain the results. The TAC covers the principles of test planning, and the administration and scoring of all types of test (ability, personality, motivation, stress indicators and interest inventories). Holders of the TAC are not qualified to purchase tests or give feedback to candidates regarding test results, and must be supervised by someone who is a fully qualified test user. The TAC is useful for any tester who wishes to delegate the actual test administration. Any verified Assessor, usually a Chartered Psychologist verified by the BPS, can provide training leading to the TAC. The BPS Psychological Testing Centre keeps a register of verified assessors. This chapter breaks the testing process down into the five main areas recommended by the BPS's TAC. A training course or workshop dealing with test administration will usually begin with an introduction to psychometric testing and tests before moving on to the administration process. What follows is based on a typical workshop programme, as provided by B.C.
Recommendation: Make sure that anyone who administers psychological tests in your organisation has been trained to do it correctly, and that they have a BPS Certificate at the right level.
Standardised Administration Ability testing assumes that test scores of candidates assessed in Penzance will be comparable with those of candidates assessed in Aberdeen, or anywhere. It is
important to try to ensure that every test administrator operates according to the same rules, wherever they are located. The way in which tests are administered can have considerable impact on candidates' results and is a key element of the reliability of test results. There are six key stages in test administration:
• preparing for testing;
• briefing the candidates;
• administering the test;
• debriefing the candidates;
• scoring the test;
• managing the results.
PREPARING FOR TESTING Personal Preparation The person giving the test should be thoroughly familiar with it; they should know the administration instructions, be able to explain the practice questions, and ideally have completed the entire test themselves. They should also have practised the full administration of the test on someone who is not applying for a job. It is very useful to draw up a preparation checklist, like the one in Figure 2.1.
Test administration preparation checklist

Freedom from distractions
● Telephone removed or disconnected.
● Notice on door: 'Please do not disturb, testing in progress'.
● Adequate distance between candidates.
● Candidates facing a wall if possible.

Good working conditions
● Quiet, inside and outside room.
● Good lighting.
● Comfortable temperature.
● Flat, even working surface.
● Adequate working area.
● Clock where candidates can see it.

Resources required to administer tests
● Stopwatch.
● Question books, answer sheets.
● Paper for rough working, if used.
● Pencils etc.
● Calculators, if required.
● Computer equipment, if required.
Figure 2.1 Checklist for preparing for test administration
Preparing the Test Session
Allow plenty of time before the session; testing should never be rushed. Choose the time and location carefully. Time of day is important; morning is best. It is also best to use the same time on each testing occasion. An acceptable number of candidates for one administrator is 12 (although less able or less well motivated people might need to be tested in smaller groups). Make sure you have all the test material you will need; remember it can only be obtained by mail order. Check the test material for marks; the last time it was used someone may have written the correct answers on the question books (or incorrect answers!). Also prepare your testing log (Figure 2.2), for recording date, time, who is tested, with which tests, any problems that arise, general comments and observations. Always use a stopwatch and time to the second; never try to time tests using the minute hand of an ordinary watch.
[Figure 2.2 is a blank Test Session Log, with spaces for: date; administrator; location of test centre; position recruited for; name of test (up to five, each with start and finish time); number of candidates; names of candidates (1 to 12); a materials checklist (administration instructions, test booklets, test answer sheets, report and record forms, norm tables, calculators, stopwatch, scrap paper, pencils (2 each, provide spares), erasers, erase stray marks); and general comments about the session.]
Figure 2.2 A typical tester's test log

The Testing Room
The room used for testing should be quiet, and have enough space for the number of candidates you plan to test. Candidates should have enough room to spread out question books, answer sheets and rough paper, and should be far enough from each
other to avoid distraction and the possibility of copying. It is surprisingly difficult in many organisations to find a suitable room; the boardroom is often suitable.
Recommendation: Always plan your testing session thoroughly and well in advance. Use checklists of everything you will need.
BRIEFING THE CANDIDATES Letter of Invitation Every test session requires the informed consent of candidates; ideally they should agree, in writing, to be tested. Figure 2.3 shows a typical letter sent to candidates about two weeks beforehand. With the letter you can send the leaflet, supplied with most good tests, that explains the test and gives some practice questions. Candidates should know what is expected of them when they turn up on the day. This makes your task much easier by preparing them well in advance. A surprising number of employers neglect to tell candidates that the ‘interview’ includes psychological tests. On the day, as candidates arrive, you should explain to them once again the arrangements for the day. This will help establish rapport, put the candidates at their ease, and allay any fears they have. In order to standardise what you say to everyone on each occasion, it is a good practice to prepare a checklist. A useful checklist is outlined in Figure 2.4.
Dear Ms Jones,
Selection/Development Centre
I have much pleasure in inviting you to attend a selection/development interview day for the Newco Organisation on Tuesday 31st June, at 10:30 a.m. in the Company Training Centre. You do not need to bring anything with you: all materials will be supplied and lunch provided.
The day will include a 20-minute interview, a brief presentation from you about how you see your role in this organisation, and some psychometric tests of ability and personality.
Please find enclosed some leaflets that describe the tests, and give some practice questions you can try, as well as information about your rights as a test taker.
Oral feedback on the tests and a brief written summary will be given to you before you leave. The day should finish around 5 p.m.
Please let me know if you have any special needs or requirements before the day begins so that I can make suitable arrangements.
Please confirm your availability as soon as you can. If you telephone then you can of course ask any questions.
I look forward to hearing from you,
Yours sincerely,
Figure 2.3 A typical letter inviting someone to a test session
Introduction checklist (check off each item)
● Introduce yourself
● Arrangements for the day
● Purpose of testing
● Skills measured, relevance to job
● Number of tests, how long each
● Check received invitation letter and practice leaflets
● Ask if any candidates have done these tests before. If so, enter on log and continue
● Explain need to read administration instructions from card
● Spectacles? Special needs? Calculator?
● Turn off mobiles and pagers
● Location of toilet; use now
● Location of fire exit, drill
● No smoking
● Water
● The rest of the selection procedure
● Feedback
● Any questions?
● Get going!
Figure 2.4 A typical checklist for test administration
Recommendation: Make sure that people know they will be given tests, both when you invite them and again when they arrive on the day.
ADMINISTERING THE TESTS Test Administration Card It is essential that everyone who does a particular test gets the same instructions. If they do not, their scores may not be comparable, nor can you be certain your data will be comparable with the normative data in the test manual. A test administration card is designed to standardise conditions for that particular test, and to ensure consistency over all test sessions at a single centre, as well as between different test centres. Some tests are provided with administration cards as part of the testing kit; for others it is necessary to prepare your own. Distribute test material, and check every candidate has everything they require. Then read out the test instructions from the administration card. Answer questions by repeating from the administration card as far as possible.
Practice Questions At the beginning of most tests, there are a number of example questions. Give candidates adequate time to answer these and do not pressurise them in any way. Walk quietly around the room to check that candidates are putting their answer marks in the correct place and following instructions. It is better to avoid using tests that do not have example or practice questions, as plunging straight into the test proper can put some candidates at a disadvantage, and give less accurate results. When the candidates are ready, give the answers to the practice questions. Candidates should understand the examples themselves without becoming flustered or feeling they are being criticised. Invite questions and give your answer to the whole group. It is of course essential that you are thoroughly familiar with the practice questions. Judge for yourself when to move on. It is important to press on quickly.
Timing the Test
Face the candidates and be vigilant. Then start the test, start your stopwatch, and note the time at which the test will end. Walk around checking that candidates are putting their marks in the right place. During the test sit in full view of the candidates and walk around occasionally. Maintain an atmosphere of quiet urgency. When the time is up, say in a firm tone 'Pencils down, please. Stop writing'. Collect all test papers, starting from the front and walking to the back of the room. (Some people think they should give 5 minutes' warning of when time will be up: this is not part of the standard administration procedure for most ability tests, and should not be done.)
Recommendation: Always plan your testing session thoroughly on the day. Use checklists of everything you will need.
Difficult Situations During Testing
Here is a list of possible problems that might arise during testing, and suggested ways of dealing with them. Any of these should be noted in the test log.
• cheating: decide about disqualification;
• fire alarm: local rules as instructed during introduction; note time of interruption; restart if/when possible;
• candidate taken ill: stop, deal with emergency, restart if time permits; decide on a re-test for that person;
• candidate talking: be firm;
• people entering the room: escort out quietly; check 'Do not disturb' notice;
• candidate opting out: let them go quietly;
• candidate arriving late: start them after the current test has finished, or arrange another date;
• power cut: stop everyone; restart when power returns;
• disruptive candidate: be firm; call security if necessary; escort out quietly;
• candidate carrying on after time: repeat instructions loudly, to their face; consider disqualification.
Can you think of any other eventuality?
DEBRIEFING CANDIDATES
It is important to leave candidates with a good impression of your organisation and testing programme by effective debriefing at the end of a test session. First, collect everything into separate piles before candidates leave the room, to ensure no question books or answers are taken away. Tear up rough work, to reassure candidates it will not be assessed. Next, thank the candidates, and tell them what is going to happen next and how they will receive feedback. This is most important. Then say goodbye to each candidate. Check all reusable material for stray marks. If possible erase the marks, otherwise destroy any defaced material. Remember, test materials are expensive! After candidates have left the room and you have collected all the materials, it is advisable to begin marking their test papers. This allows you to make sure that all the papers are in order and that answers have been marked in the correct place. Marking at this stage also gives you a short window, before candidates leave the building, in which some errors can still be corrected.
Recommendation: Always offer people feedback on the tests they have completed. Make sure they know what the feedback arrangements are before they leave.
SCORING TESTS Test scoring instructions should be followed precisely. Most tests come with scoring keys or templates. Check you are using the correct answer key and that it is lined up correctly. Use a different colour pencil to the writing on the answer form. Until you are entirely familiar with the scoring, check three times and ask a colleague to check as well. Be systematic when scoring: go round the template the same way every time; count correct answers, count incorrect answers, then sum to total as another check. A further check is to reverse your marking order. Double check the alignment of template and totals. Remember, ‘rubbish in equals rubbish out!’ This may all seem very elaborate, but it is surprising how easy it is
to make mistakes in scoring, and how often even very intelligent people make them. Stepping errors occasionally occur, when candidates lose their place on an answer sheet and complete the answer sheet out of sequence. Where five or more consecutive questions are answered incorrectly, move the scoring key up or down two lines to see if this results in a string of correct answers. If so, adjust the scoring accordingly, correcting up to the point where the stepping error stops. Make a note in the test log for that candidate.
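The scoring routine just described (count correct answers, count incorrect answers, cross-check the totals, and watch for stepping errors) can be illustrated with a short script. This is only a sketch using an invented answer key and an invented candidate, not material from any real test; the five-in-a-row trigger and the one- or two-line shifts simply follow the procedure outlined above, and the function names are illustrative only.

# Illustrative scoring sketch: not from any real test.
# Scores a multiple-choice answer sheet against a key and looks for a
# possible 'stepping error' (candidate out of step with the answer sheet).

def score(key, answers):
    """Return (correct, incorrect, unanswered) counts as a cross-check."""
    correct = sum(1 for k, a in zip(key, answers) if a == k)
    unanswered = sum(1 for a in answers if a is None)
    incorrect = len(key) - correct - unanswered
    return correct, incorrect, unanswered

def longest_wrong_run(key, answers):
    """Length of the longest run of consecutive wrong answers."""
    run = best = 0
    for k, a in zip(key, answers):
        run = run + 1 if (a is not None and a != k) else 0
        best = max(best, run)
    return best

def check_stepping_error(key, answers, trigger=5, max_shift=2):
    """If `trigger` or more consecutive answers are wrong, try shifting the
    answers by one or two lines and report any shift that scores better, so
    the administrator can investigate and note it in the test log."""
    if longest_wrong_run(key, answers) < trigger:
        return None
    baseline = score(key, answers)[0]
    for shift in (-max_shift, -1, 1, max_shift):
        if shift < 0:
            shifted = answers[-shift:] + [None] * (-shift)
        else:
            shifted = [None] * shift + answers[:-shift]
        if score(key, shifted)[0] > baseline:
            return shift
    return None

# Hypothetical 10-item key, and a candidate who slipped one line after item 4.
key     = ["b", "d", "a", "c", "b", "a", "d", "c", "a", "b"]
answers = ["b", "d", "a", "c", "a", "d", "c", "a", "b", None]

print(score(key, answers))                 # (4, 5, 1): totals sum to 10
print(check_stepping_error(key, answers))  # suggests a shift of +1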
MANAGING RESULTS Recording Results Candidates’ results should be entered onto a standard report form. If the testing has been done by someone who is only qualified as a test administrator, they should pass on the data to someone who is qualified to interpret the results, give feedback to candidates, or make decisions based on the results.
Confidentiality and Security The purpose of testing is to find something out about people. Such findings are sensitive and need to be kept secure. Confidentiality must be maintained and information kept secure under lock and key. This also applies to blank test materials, answer books and test manuals. Access by unauthorised persons to any test materials including confidential data must be restricted on a ‘need to know’ basis. As a rule, candidate data should be shredded after 12 months, which is the generally accepted ‘shelf life’ for test data. (After 12 months, it is better practice to test the candidate again.)
Feedback Always remember that the results of testing belong to the candidate, as well as to the organisation commissioning the testing. It is important that candidates are given feedback as soon as possible after testing. The organisation should frame a policy about this. Usually the Level A test user will give feedback orally or in writing; giving candidates feedback about test results needs skill, and should not be attempted by unqualified persons. Feedback should be factual, and should avoid making any judgements about the candidate. In the past some employers have followed a policy of not offering feedback, to save time or to avoid arguments. This is very poor practice; it is both unethical and bad public relations. The test administrator should explain to the whole group that decisions made about candidates will not be made solely based on test results. Other information will also be used, from interview, CV and application form, references, a presentation, group exercises etc.
Recommendation: Prepare a policy document about access to test material, and get it agreed by the organisation.
INTERPRETING TEST SCORES Ability tests produce raw scores—how many correct answers the applicant gives— which mean very little in themselves. The raw score is usually interpreted by comparing it with normative data, e.g. scores for 1000 applicants, which tells test users whether the score is above average or below, and how far above or below average. Normative data—norms for short—allow us to compare one person’s score with another, and to take account of age, job, experience, educational background etc.
Selecting a Norm Group
Choice of norm group can make a big difference to how we view a raw score. Imagine that the same raw score is achieved by:
• a 16-year-old school leaver;
• an experienced 35-year-old middle manager;
• a graduate management trainee.
Is it fair to compare these three scores? Clearly no, so the raw scores must be normed, i.e. interpreted using an appropriate norm table. The reference point used in psychometric testing to compare scores between people is known as the norm group. The norm group is a set of scores obtained on the same test by a group of people comparable to the candidate. Test administrators should choose the norm group that best fits the candidates being tested (or be advised by the qualified test user). Considerations when choosing an appropriate norm group from the test manual include:
• age
• gender
• ethnicity
• educational level
• work experience
• job function.
The normative sample should be large, relevant and recent. Comparing applicants with 2000 people who have applied for the same job within the past three years is better than comparing them with only 50 people doing a roughly similar job in Czechoslovakia in the 1930s. (Normative data for some tests are as old as this,
and can be very misleading. Scores on some tests, e.g. Raven Progressive Matrices, have increased considerably since the 1940s, so using 1940s normative data would systematically overestimate candidates' performance relative to people today.)
Choosing normative data often involves an element of compromise. If you are testing bank managers in Bristol, the ideal normative group clearly is Bristol bank managers. However, the available normative data may offer nothing closer than (British) managers (in general), or Australian bank managers. This poses some interesting questions. Are Australian bank managers likely to differ from British managers? Are they a better match for Bristol bank managers than British managers in general?
Collecting your own normative data is a very good way of ensuring you can make the most relevant comparisons. Once you have tested 100 persons, for a particular post or group of posts, you can compute your own provisional local norms.
Most normative data for ability tests are occupation specific: bank managers, motor mechanics, call centre workers etc. Few tests can offer any broader comparisons. It would be useful for some purposes to have norms based on representative cross-sections of the whole population of the country. The organisation could then select people in the top 25%, or screen out people in the bottom 25%, of 'everyone'. Getting a representative cross-section of everyone—representative for age, gender, ethnicity, education, location, social background etc.—is very expensive, and can never be entirely successful because a large proportion will decline to be tested.
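As a sketch of what computing provisional local norms might look like in practice, the short script below takes a set of raw scores for 100 previously tested candidates (the numbers are invented purely for illustration) and produces a mean, standard deviation and a simple percentile look-up. Nothing here comes from any test manual; the variable and function names are illustrative only.

# Sketch of provisional local norms from your own candidates' raw scores.
# The scores below are invented for illustration only.
from statistics import mean, stdev

def percentile_rank(norm_scores, raw_score):
    """Percentage of the norm group scoring below the given raw score."""
    below = sum(1 for s in norm_scores if s < raw_score)
    return round(100 * below / len(norm_scores))

# Imagine 100 applicants for the same post have been tested.
local_norm_group = [18, 22, 25, 27, 29, 30, 31, 33, 34, 35] * 10  # placeholder data

m, sd = mean(local_norm_group), stdev(local_norm_group)
print(f"Local norm: mean = {m:.1f}, SD = {sd:.1f}, N = {len(local_norm_group)}")

# Provisional look-up for a new candidate's raw score.
new_candidate_raw = 31
print(f"Raw {new_candidate_raw} = {percentile_rank(local_norm_group, new_candidate_raw)}th percentile")

In practice the local norm group should, of course, be built from your own candidates' scores on the same test, administered under the same standardised conditions.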
Normative Score Systems
Raw scores should be converted into some form of normative score, using information in the test's manual. Various normative score systems are used (Figure 2.5):
• mental age and IQ
• percentiles
• z scores and T scores
• stens and stanines
• grades or bands.
Mental Age and Intelligence Quotient (IQ) A person with a mental age of 8 does as well on the test as the average 8-year-old. If that person is actually aged 16, they are somewhat behind for their age. IQ was originally a mental age system, calculated by dividing mental age by actual (chronological) age, and multiplying by 100. The 16-year-old with a mental age of 8 has an IQ of 8/16 × 100, which works out at 50. Mental age systems are not very useful for workplace assessment because intelligence stops increasing rapidly at around age 16, just about the time people start work.
[Figure 2.5 shows the normal distribution of mental ability scores with the main scoring systems aligned beneath it: z scores from −2.5 to +2.5, T scores from 25 to 75, stens from 1 to 10, stanines from 1 to 9, and IQs from 70 to 130.]
Figure 2.5 Distribution of mental ability scores, showing mean, standard deviation, z and T scores, stens, stanines and IQs
Percentiles
Percentiles interpret the raw score in terms of what percentage of the norm group scores lower than the candidate. Percentiles are easy for the layperson to understand. To give an example, for a sample of 466 British health service managers, a raw score of 7 on the Graduate & Managerial Assessment—Numerical translates into a percentile of 30, meaning that someone who scores 7 gets a better score than 30% of health service managers. The 50th percentile is the average.
Percentiles are quite easy to calculate. If a candidate is ranked 15th out of 50, then 50 − 15 = 35 candidates score lower, so the percentile is (35 × 100)/50 = 70, i.e. the 70th percentile.
Percentiles have a problem, which Figure 2.6 illustrates. Percentiles bunch together in the middle of the scale and spread out at the ends. On most tests, a lot more people score close to the average than score very high or very low. In Figure 2.6, there are a lot of scores between 16 and 22, and far fewer between 0 and 15, or between 22 and 40. This means that in the middle range of scores, a difference of 2 in raw score corresponds to a big difference in percentiles—31 to 69. But for very
Standard deviations or z scores    Percentiles    Raw scores
−2½                                 1             0–10
−2                                  2             11–13
−1½                                 7             14–15
−1                                 16             16–17
−½                                 31             18
 0                                 50             19
+½                                 69             20–21
+1                                 84             22–23
+1½                                93             24–28
+2                                 98             29–33
+2½                                99             34–40
Figure 2.6 The relationship between percentiles and raw scores
high (or very low) scores a difference of 2 in raw score corresponds to a much smaller difference in percentiles. In fact all the raw scores between 34 and 40 fall at the 99th percentile. If you have two or more scores and wish to calculate an average, do not calculate this with percentiles; use z or T scores. Percentile is not the same as percentage of correct answers. Candidates often think that their performance is evaluated by calculating what percentage of the questions they got right. In fact, this is hardly ever used to interpret test scores, because it actually tells us very little.
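The rank-to-percentile arithmetic above is easily reproduced. The short sketch below simply re-runs the worked example of a candidate ranked 15th out of 50; the function name is illustrative and not taken from any statistics library.

# Percentile from a rank: reproduces the 15th-out-of-50 example above.

def percentile_from_rank(rank, group_size):
    """Percentage of the group scoring lower than a candidate ranked
    `rank` from the top (rank 1 = best)."""
    scoring_lower = group_size - rank
    return 100 * scoring_lower / group_size

print(percentile_from_rank(15, 50))   # 70.0 -> the 70th percentile
print(percentile_from_rank(1, 50))    # 98.0 -> near the top of the group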
Standard Scores Standard scores are based on standard deviations and the normal distribution, so need a degree of statistical knowledge. They still serve the basic purpose of telling us whether the candidate is above or below average, and how far above or below. The three most widely used standard score systems are: • z scores • T scores • stens.
z Scores
The raw score is converted into a z score using the formula
z = (raw score − sample mean) / sample standard deviation
Here is a numerical example. On the AH4 test, candidate A's raw score is 101. The AH4 manual gives normative data for 1183 naval ratings. The normative sample's mean is 75.23, and its standard deviation is 14.58. Calculating z gives a value of +1.77. A z score of +1.77 shows that candidate A scores 1.77 standard deviations above the norm group mean. Here is a second numerical example. Candidate B's raw AH4 score is 63, which gives a z of −0.83, which means candidate B scores 0.83 standard deviations below the norm group mean.
The z score system is not very user friendly, because it includes plus and minus signs and decimals, and because two-thirds of z scores will be between +1 and −1.
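A couple of lines are enough to reproduce the z score calculation. The sketch below re-uses only the AH4 figures quoted above (norm mean 75.23, SD 14.58 for the 1183 naval ratings); everything else is illustrative.

# z score: (raw score - sample mean) / sample standard deviation

def z_score(raw, norm_mean, norm_sd):
    return (raw - norm_mean) / norm_sd

AH4_MEAN, AH4_SD = 75.23, 14.58     # naval rating norms, as quoted above

print(round(z_score(101, AH4_MEAN, AH4_SD), 2))   # candidate A: 1.77
print(round(z_score(63, AH4_MEAN, AH4_SD), 2))    # candidate B: -0.84 (the -0.83 quoted above, to rounding)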
T Scores
The T score system avoids decimals and signs, using the formula
T = 50 + 10 × (raw score − sample mean) / sample standard deviation
Here is the same AH4 numerical example. Applicant A's T score is 50 + (10 × (101 − 75.23)/14.58), which is 67.67. T scores are always rounded to the nearest whole number when used to describe individuals, so A's T score is given as 68. Applicant B's T score is 50 + (10 × (63 − 75.23)/14.58), which works out at 42. T scores and z scores give exactly the same information, but T scores are more user friendly, avoiding minus signs or decimal points. T scores have a mean of 50 and a standard deviation of 10. T scores usually run from 20 to 80. Table 2.1 shows how many people will fall in different T score ranges in a normal distribution.
Are naval ratings a good comparison group? Could they be taken as a cross-section of the British population of the day? Probably not. Apart from the fact that all 1183 are men, the navy, even in the days of conscription, could choose its recruits, and may not have chosen the less able. This would mean that the naval rating average was higher than the general population average.
IQs were originally derived from mental ages, but where they are used today, they have been turned into standard score systems, where the mean is set at 100 and the standard deviation at 15. In fact IQs are not used much, for several reasons:
• IQs need very good normative data based on large cross-sections of the entire population, which few tests have.
• IQs create a misleading impression of precision, by appearing to place people on a 60-point scale from 70 to 130. Ability tests are nowhere near accurate enough to achieve such precision.
• The name 'IQ' tends to alarm people unnecessarily.
Table 2.1 Proportion of people tested falling into different T score ranges, in a normal distribution
T score        Percentage       Interpretation
Over 80        Less than 1
71–80          3                Very high
61–70          13               High
51–60          33               High average
41–50          33               Low average
31–40          13               Low
21–30          3                Very low
20 or under    Less than 1
Sten or Standard of Ten A very useful 10-point scoring system, a sten has a mean of 5.5 and a standard deviation of 2. To calculate a sten, use the formula: sten = (2 × z) + 5.5 Here is a numerical example, for the AH4 raw score of 101, where we have already calculated the z score as +1.77. The sten is (2 × 1.77) + 5.5, which is 9.04. Stens are rounded to the nearest whole number, giving 9. Many test manuals give tables to convert raw scores to stens, saving you the trouble of calculating them. A variation on the sten is the stanine score or standard of nine. These are not used often but in case you come across one, they are a nine-point system, calculated using a variation on the sten formula: stanine = (2 × z) + 5.
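Because T scores, stens and stanines are all linear rescalings of z, the conversions can be sketched in a few lines. The z value of +1.77 for candidate A on the AH4 is taken from the worked example above; the rounding follows the conventions described in this chapter, and the function names are illustrative only.

# Converting a z score into the other standard score systems described above.

def t_score(z):
    return round(50 + 10 * z)

def sten(z):
    return round(2 * z + 5.5)

def stanine(z):
    return round(2 * z + 5)

z_a = 1.77                                     # candidate A on the AH4 (see above)
print(t_score(z_a), sten(z_a), stanine(z_a))   # 68 9 9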
Grades or Score Bands
Scores are divided into grades or bands, typically on a five-point scale, A to E, based on percentiles (Figure 2.7). Grades are easy for the layperson to understand. A grade or sten system may seem to be wasting some of the information the test provides, by dividing candidates into no more than 5 or 10 levels of ability. In fact the typical ability test does not assess ability all that precisely, and cannot reliably distinguish more than 12 levels of ability, so the sten system does not really lose that much information.

Recommendation: Make sure you are familiar with the manual for the test(s) you are using, and that you know how to interpret the raw scores, or else make sure the person interpreting the test scores has the correct expertise, and has completed the appropriate training.
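If a percentile has already been calculated, a five-band grade of the kind shown in Figure 2.7 is a simple look-up. The sketch below is only an illustration; the cut-off points (10, 30, 70 and 90) follow directly from the band widths in the figure.

# Five-band grading (A to E) from a percentile, following Figure 2.7.

def grade(percentile):
    if percentile < 10:
        return "E"      # bottom 10% of the norm group
    if percentile < 30:
        return "D"      # next 20%
    if percentile < 70:
        return "C"      # middle 40%
    if percentile < 90:
        return "B"      # next 20%
    return "A"          # top 10%

print(grade(5), grade(50), grade(95))   # E C A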
Five grades of mental ability
E = bottom 10% (of norm group)
D = next 20%
C = middle 40%
B = next 20%
A = top 10%
Figure 2.7 A typical five grade or band categorisation of scores. Note that % refers to percentage of the norm group, not to percentage of correct answers

COMPUTER ADMINISTRATION
Many tests can now be administered by computer. For one-to-one administrations, this can save an enormous amount of time, not only in administration itself, but also
in scoring and norming. The computer is perfectly designed to handle administration. For a group administration each candidate will need their own computer, which has obvious cost implications. Computers can be relied on to score the test accurately, and to calculate normative scores. Care needs to be taken regarding computer-generated reports, which can sometimes lack the subtlety of human interpretation.
EQUAL OPPORTUNITIES
The guiding principle in legislation is fairness for all candidates, at every stage of selection decision making. We are all entitled to this fundamental right, regardless of gender, race, religion, colour, ethnic origin and disability. The law supports this principle in the UK, the USA, Europe and many other countries. As a guide to managers, testing should be based on:
• a thorough job analysis identifying competencies and job demands;
• establishing a matching person specification to map personal attributes across to the job analysis;
• using valid and reliable tests to assess those personal attributes in the applicants;
• demonstrating equal opportunity principles throughout the procedure.
Agencies
In the UK, there are three bodies which help enforce equal opportunities law, and issue codes of conduct:
• Equal Opportunities Commission, which issued its Code of Practice in 1985.
• Commission for Racial Equality, which issued its Code of Practice in 1984.
• Disability Rights Commission, which issued its Code of Practice in 1996.
Gender, Ethnicity and Age Employers should record information on gender, ethnicity and age, at every stage of selection, to check for adverse impact. This information should be kept separately and not used to make any decisions about individual candidates.
Disability Discrimination legislation has some specific implications for psychological testing: • Testers must accommodate disabled candidates as far as possible. For example, a candidate with a visual impairment might be provided with large print or Braille materials.
• Testers need therefore to ascertain if any candidates have disabilities that require accommodation. Note, however, that disability discrimination laws generally prohibit direct enquiries about disability during the selection process, to avoid open discrimination. It is necessary, therefore, to employ a form of words enquiring about candidates' 'special needs' (and explaining what sort of assessments will be made, so the candidate can mention, e.g. visual disability if relevant). The fifth paragraph of the letter in Figure 2.3 shows how this is done.
• Accommodation is often a problem with ability tests because they generally have time limits, so any change in format may make the test more difficult. Suppose we use a Braille form for visually handicapped candidates. Is Braille slower to read than conventional print? So does the candidate also need more time? How much more?
• Time limits also present difficult decisions when assessing people with dyslexia. Schools and universities often use a blanket extra 50% time allowance in written exams, but this is very crude and unlikely to result in accurate assessment.
• The normative data for most tests assume a particular time limit has been used, and are not valid otherwise.
• In the long term, we need a new generation of ability tests that do not depend on time limits. One currently available test, the Raven Progressive Matrices, does have normative data for untimed administration.
The implications for people involved in using psychometric tests are extremely important and wide ranging. Test administrators need to be aware of the relevant legislation, and that every stage of using tests could potentially have a discriminatory effect on those taking tests.
Recommendation: Be careful to observe the laws and codes of practice covering assessment of applicants, especially those concerned with accommodating people with disabilities. Keep up to date with the constantly changing law.
DATA PROTECTION
The final unit in the TAC standards deals with data protection, particularly when data are stored on a computer. The UK Data Protection Acts (1984 and 1998) have implications for employers who use tests (and for other selection methods as well).
• Candidates should be told how their results will be used.
• Organisations should appoint a data controller, and include their name in the consent paragraph on application forms.
• For 'sensitive personal data', including psychometric tests, the Act requires the explicit consent of the individual. However, it appears that even explicit consent need not be in writing, so long as it is unequivocal. Data controllers should 'consider the extent to which the use of personal data by them is or is not reasonably foreseeable'. Anyone sent an introductory letter, like the one in Figure 2.3, should know that they are taking a psychometric test, which may be used for selection or development. However, if there is doubt about this, candidates should be told again, before doing the test.
• Employees have the right to see their paper-based records, at reasonable intervals and immediately.
Certification: Where Next? All delegates taking training as test administrators are encouraged to continue to the further levels of competence, as part of their professional continuing development and to enhance the good name of testing in the United Kingdom. Holders of the TAC will be exempted from the test administration units of those further qualifications.
CASE STUDY IN TEST ADMINISTRATION Transpo is a distribution company, running heavy goods deliveries in the UK. Transpo has recently advertised in the national and trade press for a Key Accounts Executive. There were 47 enquiries for the application pack and 29 returned applications. After careful consideration of the job analysis and pre-screening of the applications, eight candidates were selected by the HR team to attend a selection centre. The HR manager, Susan Harris, has the BPS Level A and B Certificates of Competence in Occupational Testing. The HR manager is under pressure currently to produce a detailed annual report for the end-of-the-month Directors’ meeting. Accordingly she delegates the setting up and administration of the selection centre to her assistant, Alan Jones, who has recently qualified as a test administrator via the BPS Test Administration Certificate. This will be Alan’s first live test administration exercise so he is very keen to impress his boss that spending money on his TAC was worthwhile. Alan was involved with his boss in the initial job analysis exercise, advertising for the position and pre-screening, but is looking forward to operating independently. Susan has asked him to tackle the task as a project, and to map out all the steps needed, from sending letters of invitation to handing over the completed test results to her at the end of the selection centre. Alan prepares a draft project plan, with allocated days and timing.
Memo: Selection Centre for Key Accounts Executive
From: Alan Jones
To: Susan Harris
Dear Susan,
Please find enclosed my first draft project plan [Table 2.2] to run the Selection Centre for the new post of Key Accounts Executive. I welcome your comments. I understand that the Selection Centre will be in 20 working days' time, on Tuesday December 2nd.
Sincerely, Alan
Table 2.2 Selection centre for Key Accounts Executive, Tuesday December 2nd

Days 1–10 (Nov. 5th to Nov. 18th)
• Post letter of invitation to 8 shortlisted candidates inviting acceptance of interview date for selection centre. Invite responses via email, telephone or letter. Enclose practice leaflets and map to training centre.
• Book accommodation in training centre, 10 desks, refreshments on arrival, December 2nd.
• Check test materials, answer books, answer forms, report forms, scoring templates, norm tables. Erase any stray marks in answer books. Order new answer forms for verbal and numerical reasoning tests. Send order form to Susan for authority and signature. Copy test session log. Open materials file for selection centre.
• Receive acceptance letters as they come in; telephone non-repliers requesting intentions.
• Send order form to test publisher requesting 24-hour delivery of test materials. Check testing kit for pencils, erasers, stopwatch and calculators. Receive and check test materials from test publisher.
• Revise introduction checklist; make notes relevant to job and selection day; check for fire alarm practice December 2nd.
• Review administration card, refreshing memory as to test procedure, example questions and question completion.

Days 11–19 (Nov. 19th to Dec. 1st)
• Practice scoring and marking a practice test. Print off 10 candidate report forms.
• One candidate, B.C., requests large-print materials. Order from test publisher requesting 24-hour delivery.
• Receive and check large-print materials. Telephone B.C. to reassure.
• Assemble all materials in test flight case. Sharpen pencils. Check every item against numbers of candidates.
• Check with training and catering over room allocation and refreshments.
• Print off schedule for day for each candidate. Include times for tests, interview, presentations and feedback. Progress report to Susan.
• Final checks: room, refreshment, timing, lunch, interviews and presentations. Print off schedule for interview panel members.

Day 20 (Dec. 2nd), the selection centre
09:30 meet candidates, briefing
10:00 commence test administration
12:00 dismiss and thank candidates
12:10 score tests
1:00 prepare report forms
2:00 hand over test data to Susan
Memo: Selection Centre for Key Accounts Executive
From: Susan Harris
To: Alan Jones
Thank you, Alan, for your schedule for the Selection Centre. You have put a lot of time in here and it looks as if you have covered everything. Well done. We mustn't forget candidates' expenses so would you mind please collecting some expense forms from accounts and handing them out during your introduction? Thank you once again, your TAC is paying off. I look forward to receiving the completed candidate results on Tuesday at 2:00 p.m.
Regards, Susan
CHAPTER 3
Tests of Mental Ability
OVERVIEW
• Ability tests divide into achievement, aptitude and general ability.
• Some tests seek to assess more specialised abilities, such as creativity.
• All tests of mental ability tend to go together, i.e. correlate positively.
• We are not yet sure what emotional intelligence tests measure.
• Mental ability tests predict work performance fairly well.
• The link between mental ability and work performance is continuous and linear: the brighter someone is, the more likely their work performance is to be good.
• Research on mental ability and team, rather than individual, performance suggests more complex relationships. Low scorers may hold the whole team back.
• Research on why mental ability tests predict work performance suggests that mental ability leads to improved job knowledge, which in turn leads to better work performance.
• Mental ability tests create adverse impact on some ethnic minorities in the USA; there is a dearth of information on this issue elsewhere.
• Attempts to deal with the adverse impact problem include score banding, which defines a range of scores as equivalent, thus allowing selection within the band to be based on achieving diversity.
• Validation research tends to underestimate the validity of assessment, because it is limited by small samples, unsatisfactory measures of work performance and restricted range; research findings therefore need to be analysed carefully.
INTRODUCTION If you only have an hour to assess an applicant, some psychologists argue that the best way to use your time is to give an ability test. Ability tests give a reasonably accurate assessment of one key aspect. The more usual choice of a one hour
assessment—the interview—may try to cover much more ground, but is much less accurate. It is of course better to set aside more time to assess candidates: after all, they will be working for you for a lot longer than one hour!
Figure 3.1 gives some varied (but fictitious) test items divided into achievement, aptitude and ability tests, which were defined in Chapter 2 (p. 26). Most tests use the multiple choice format shown in Figure 3.1, which makes scoring quick and accurate.

Achievement
1. Which size of fuse should be used for a 3 kW electric fire?
   3 amp   5 amp   13 amp   26 amp
2. What did Sigmund Freud describe as the 'royal road to the unconscious'?
   alcohol   dreaming
3. Which of the following is/are not a disease?
   multiple regression   metonymy   homosexuality   malaria   toilet training   multiple sclerosis

Aptitude
4. Check whether the names are exactly the same or differ in any way.
   Smith, John E      Smith, John E      same / different
   Johnson, Edwin P   Jonson, Edwin P    same / different
5. Which sections of this sentence contain errors of spelling or grammar?
   In reply to your letter / of the 14th June / I feel it incummbent on me / to strongly advise you / to withdraw the allagation of impropriety / implied by the statement / in the pennultimate paragraph
6. Which seat in a single deck bus gives the smoothest ride?
   one directly over the back axle   one directly over the front axle   one midway between front and back axle

Ability
7. What does the word 'impeach' mean?
   preserve   accuse   give a sermon   propose
8. How far is it from New Orleans to New York in miles?
   50   800   1300   5000
9. How much is 15 plus 16?
   1   51   31   41
10. What is the rate of UK income tax for incomes over £40 000 a year?
   25%   40%   75%   90%
11. What do you get if you divide the largest number by the next to smallest number, and then multiply the result by the next to largest number?
   20  15  11  14  5  2
   110   60   10   300

Figure 3.1 Twelve questions from various types of mental ability test. NB: These are not 'real' questions, from 'real' tests, but are typical of mental ability test questions
Most tests have time limits, which helps ensure the test presents the same level of difficulty to everyone, and makes it easier to plan testing sessions. (However, time limits also create problems for candidates who have dyslexia.)
Tests of General Mental Ability General mental ability is sometimes called cognitive ability or intelligence. Mental ability tests use many different types of question, that vary in content and difficulty. The questions in Figure 3.1 look very diverse, and ‘common sense’ would expect ability to answer one to have little to do with ability to answer another. ‘Common sense’ says knowing what a word means depends on education and home background, whereas mental arithmetic questions, such as question 12, need good concentration and short-term memory. The questions in Figure 3.1 are fictitious but results with similar problems in real tests show fairly high positive correlations; people who are good at vocabulary questions tend to be good also at mental arithmetic. Early on in the study of human intelligence, it emerged that people who are good at one intellectual task tend to be good at others. This makes it possible to talk about general mental ability.
Aptitude Tests
Traditionally aptitude means how good someone is likely to be at acquiring a new skill or body of knowledge, such as learning a foreign language. Some aptitude tests seek to assess suitability for a particular profession or professional training, e.g. management, medical school, sales. Aptitudes that are widely assessed include:
• Clerical: General Clerical Test
• Mechanical: Bennett Mechanical Comprehension Test
• Programming: Computer Programmer Aptitude Battery
• Numerical: Graduate & Managerial Assessment—Numerical
• Language usage: Differential Aptitude Tests—Language Usage.
Sometimes a number of aptitude tests are combined to form a multiple aptitude battery, such as the Differential Aptitude Tests, which assess verbal reasoning, numerical ability, abstract reasoning, clerical speed and accuracy, mechanical reasoning, and spelling and language usage. In practice the distinction between ability and aptitude is often blurred, especially because most aptitude tests correlate fairly well with tests of general mental ability.
More Specialised Ability Tests Psychomotor or Dexterity Tests These are not paper and pencil tests; applicants assemble small fiddly components or sort larger ones, as quickly as possible. These are useful for assembly work and
some skilled manual work. Fine dexterity is needed for assembling small components using the fingers, while gross dexterity is needed for assembling or handling larger objects using the whole arm. Dexterity can also be assessed by work sample tests.
Culture-Free Tests
Few people in the UK, however able, know the distance from New Orleans to New York, but adding 15 to 16 should be possible for anyone whose culture uses numbers. The geography question is more culture bound than the addition question. For years some psychologists looked for culture-free questions that anyone, regardless of cultural background, could answer. A test that would work equally well in the UK, communist China and the Amazonian rain forest would be very useful, but has proved difficult to find.
Creativity Critics say ability tests favour ‘convergent’ thinkers, people who can remember textbook answers to questions, but cannot think creatively. They argue that we often need ‘divergent’ thinkers, who can find many equally good answers to a problem, not just one ‘right’ answer. Many tests of creativity have been devised, for example Uses of Objects. The applicant has 2 minutes to list possible uses of a brick, other than building a wall: a door stop, a weapon, a pendulum etc. Or applicants may be asked to design an advertisement for a new product, e.g. a 3D television set. Creativity tests often correlate with standard tests of mental ability very highly, so they cannot prove they are measuring anything new.
Video-Based Testing Several video-based selection tests have been devised, for example for bus drivers and insurance sales staff. Including sound and moving pictures creates a richer test, especially suitable perhaps for work involving a lot of contact with the public, or for trying to assess social/emotional intelligence.
Computerised Testing Paper and pencil tests are inflexible. They irritate able candidates by making them plough through long lists of easy items, and dishearten the less able by asking too many questions they cannot answer. Computerised testing does not have to present a fixed sequence of questions but can be tailored to the individual’s performance. If the candidate does well, the questions get harder; but if the candidate does poorly the questions get easier, until the candidate reaches his/her own limit. Computerised testing seems especially popular in military testing programmes.
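The adaptive principle just described, harder items after a correct answer and easier items after a wrong one, can be sketched as a simple loop. This is a toy illustration only: the item bank, the simulated candidate and the five difficulty levels are all invented, and real computerised adaptive tests use far more sophisticated item-selection and scoring models.

# Toy sketch of adaptive item selection: harder after a right answer,
# easier after a wrong one. Illustrative only.
import random

def run_adaptive_test(item_bank, answer_item, n_items=10, start_level=3):
    """item_bank: dict mapping difficulty level (1 easy .. 5 hard) to items.
    answer_item: function taking an item and returning True if answered correctly."""
    level, administered = start_level, []
    for _ in range(n_items):
        item = random.choice(item_bank[level])
        correct = answer_item(item)
        administered.append((level, item, correct))
        level = min(5, level + 1) if correct else max(1, level - 1)
    return administered

# Hypothetical item bank, and a simulated candidate who copes up to level 3.
bank = {lvl: [f"item {lvl}.{i}" for i in range(1, 6)] for lvl in range(1, 6)}

def candidate(item):
    # The simulated candidate answers correctly up to difficulty level 3.
    return int(item.split()[1].split(".")[0]) <= 3

for level, item, correct in run_adaptive_test(bank, candidate):
    print(level, item, "right" if correct else "wrong")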
Internet Testing
Many Internet recruiting systems include online assessment of ability or personality. Testing people over the Internet is fast and cheap, but creates a number of problems:
• Is the person doing the test really John Smith or someone else?
• How can you stop people applying several times under different names and so giving themselves unfair practice on the test?
• How can the test's authors protect their copyright, and prevent unauthorised use of the test by unqualified users?
Experts have suggested supervised testing centres can solve these problems, but that would lose the convenience and economy of Internet testing. A good test should (a) be reliable, (b) be valid, (c) have useful normative data and (d) not create adverse impact. There are large numbers of dubious-looking tests on the Internet that probably fail to do all, or even any, of these four things. (It is equally easy to generate dubious tests in paper form, but the Internet allows them to be distributed widely, and possibly to cause more damage by giving people information that is not true or which they cannot handle.)
Recommendation: Remember that computer or Internet tests are still tests, and should be critically evaluated before use.
EMOTIONAL INTELLIGENCE
Being able to understand others and get along with them is a vital skill in some work which, it is sometimes claimed, intelligent people often lack. Tests of social or emotional intelligence have been devised to fill this gap. They have used a variety of formats:
• Identifying emotions expressed in facial expression.
• Memory for names and faces.
• Written descriptions of interpersonal problems. For example, what is the best way to deal with a noisy neighbour: complain to the police/appeal to his better nature/threaten him etc.
• Sequences of photographs showing an encounter between two people, where the applicant identifies the likely outcome from a choice of endings.
• Assessment of 'tacit knowledge': who is the best person to ask in an organisation to get a particular task done quickly, and how best to phrase the request. This is information that will not be found in the organisation's official handbook.
• Recent tests of emotional intelligence often resemble personality tests rather than ability tests. There is no time limit, and candidates are describing themselves, not looking for right and wrong answers.
Daniel Goleman (1995) has claimed that success at work is ‘80% dependent on emotional intelligence and only 20% on IQ’. On closer inspection, this claim turns out to derive from an analysis of job descriptions, not of test validity. It is certainly true that many job descriptions mention things like being able to work in a team, but it does not follow that emotional intelligence will account for 80% of successful work performance, or that emotional intelligence tests can assess it. Social intelligence tests went out of favour because they were mostly so highly correlated with other tests of general intelligence that there was no reason to suppose they were measuring anything different. Sceptical psychologists ask themselves whether emotional intelligence tests are falling into the same trap. How do we know they are not just assessing familiar aspects of personality, such as sociability or tolerance, under a new name? What we need is research showing that emotional intelligence:
• can be separated from personality as well as from conventional intelligence;
• predicts work performance;
• improves on the prediction of work performance that tests of personality and mental ability can make.

Social/emotional intelligence probably is very important in the workplace, but probably cannot be assessed by paper and pencil tests. We may need tests of ‘real life’ behaviour, which are much more difficult to devise, usually much less accurate, and much more time consuming to use. Group exercises and role plays, found in many assessment centres (see Chapter 8), probably assess social/emotional intelligence to some extent.
Recommendation: Think carefully before using tests of emotional intelligence, and ask what evidence you have they will improve on your present assessment.
ERROR OF MEASUREMENT

An ability test is not very useful if it says someone is bright today, but two weeks later says the same person is average or dull. A good ability test can achieve high re-test reliability, represented by a correlation between the first and second sets of scores of 0.90. If an ability test cannot achieve a re-test reliability of 0.85, you should probably avoid using it. Calculating re-test reliability means persuading the same set of people to do the same test twice, which is not always that easy, so some test publishers omit this important step. Error of measurement estimates how much test scores might vary on re-test. An ability test with a re-test reliability of 0.90 has an error of measurement of five IQ points, meaning one in three re-tests will vary by five or more points. Clearly it would be a mistake for Smith who scores IQ 119 to regard himself as superior to Jones who scores 118. If they take the test again in three months’ time, Smith might get 116 and Jones 121. If Smith and Jones are candidates on a shortlist, they should
be regarded as having the same score on the test, and the decision which to employ should be based on some other consideration such as personality or experience. One reason untrained people should not use psychological tests is that they may not understand error of measurement, and will tend to think test scores are much more exact than they really are.
Recommendation: Never use a test without evidence of its reliability. Do not use an ability test which has a reliability less than 0.85. Always think of a test score as x plus or minus y, where x is the actual score, and y is the error of measurement.
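As an illustration, the error of measurement quoted above can be reproduced from the re-test reliability and the IQ standard deviation of 15. This is a sketch of the standard formula, not a substitute for the figures in a test’s own manual.

    import math

    def error_of_measurement(sd, reliability):
        # Standard error of measurement: sd * sqrt(1 - reliability)
        return sd * math.sqrt(1 - reliability)

    sem = error_of_measurement(sd=15, reliability=0.90)
    print(round(sem, 1))                 # about 4.7, i.e. roughly five IQ points

    # 'x plus or minus y': the band around an observed score of 119
    observed = 119
    print(observed - sem, observed + sem)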
VALIDITY: DO ABILITY TESTS PREDICT WORK PERFORMANCE?

The best way of answering this question is research. We need a workforce who have done a mental ability test, and some quantifiable information about work performance. We then compute a correlation between the two sets of numbers. Thousands of researches of this type have been done, and their results summarised by work psychologists. A lot of the research is American but enough has been done in Europe to be sure the same results are found this side of the Atlantic too. Research shows ability tests can select fairly well for most broad types of work. Table 3.1 shows tests achieve corrected correlations of 0.40 to 0.50 for every type of work except sales assistants, where the correlation is less than 0.30. Unskilled and casual work has not been researched, so we do not know whether ability tests are useful. Table 3.1 shows clearly that ability tests do predict performance quite well for most types of work, so are definitely worth using. There are some specific types of work where the link between mental ability and work performance may not be so strong: achievement in science and technology, police officers, pilots and sales ability.
Table 3.1 Summary of validity of mental ability tests for nine broad classes of work

Class of job                 Validity
Manager                        0.53
Clerk                          0.55
Salesperson                    0.62
Protective professions         0.43
Service jobs                   0.49
Trades and crafts              0.50
Elementary industrial          0.47
Vehicle operator               0.46
Sales assistants               0.28

Source: Ghiselli (1966).
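The validity coefficients in Table 3.1 are simply correlations between test score and a measure of work performance. The sketch below shows the calculation on a small invented data set; real validity studies of course need far larger samples, as discussed later in this chapter.

    test_scores = [22, 30, 18, 27, 35, 25, 15, 29]          # invented test scores
    ratings     = [3.1, 3.8, 2.9, 3.5, 4.2, 3.0, 2.5, 3.9]  # invented supervisor ratings

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sxx = sum((a - mx) ** 2 for a in x)
        syy = sum((b - my) ** 2 for b in y)
        return sxy / (sxx * syy) ** 0.5

    # The (uncorrected) validity coefficient for this small sample
    print(round(pearson(test_scores, ratings), 2))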
The ‘Weakest Link’?

Selection persists in assessing the individual, and trying to predict the individual’s success at work. Yet most work is done by teams of people, who often depend on each other to get the job done. So should selection consider the team as a whole? And ask some of these questions:
• Does a successful team consist of people who are all able, or will a mixture of ability be sufficient, or even better?
• Can one person of low ability hold everyone back?
• Will one very able person be able to get the group moving forwards, or will he/she be submerged and frustrated?

Only a few studies have tried to answer these questions, and they do not point to a single simple conclusion:
• Units in the Israeli army with higher average mental ability perform better.
• Groups of assembly workers with higher average ability produce more, and are less likely to fail as teams.
• In motor manufacturing, poorer team performance is linked to more variation in ability, and to the presence of low scorers. The smaller the group, the stronger the effect.

These results are consistent with a ‘weakest link’ hypothesis, that a few low scorers, or even just one, can hold the whole team back. Research on mental ability and team performance will undoubtedly be a growth area, and may uncover complex relationships.
Setting Cut-Off Scores

Selectors are often asked: is this candidate appointable? In other words, what is the minimum level of mental ability necessary to function in this job? The commonest approach to setting cut-offs is: do not appoint anyone who falls in the bottom third of existing post-holders. Strictly speaking, the idea of a fixed cut-off is simplistic. The relationship between test score and performance is linear, meaning the lower the test score, the poorer the person’s performance is likely to be. This implies that any cut-off is arbitrary. If the employer has sufficiently good records, it may be possible to construct an expectancy table (Table 3.2) showing the level of work performance expected for people with different test score ranges. An applicant who scores 35 on the test has a 21% chance of being very good, and a 58% chance of being good or very good. An applicant who scores 8 on the test has only a 2% chance of being very good, and a 20% chance of being good or very good. The percentages come from an analysis of existing employees. The employer can decide the level of performance required and set the cut-off accordingly. In practice the decision also depends on how easy it is to fill the vacancy, and how urgently it needs to be filled.
Table 3.2 Expectancy table, showing relationship between test score and work performance, assessed by performance appraisal (percentages of employees in each appraisal category)

Test score    Very poor    Poor    Average    Good    Very good
30–40             7          15       20        37        21
25–29             6          20       32        31        11
20–24            11          18       44        20         7
11–19            13          36       33        15         3
Under 11         26          31       23        18         2
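An expectancy table of this kind can be produced directly from an employer’s own records. The sketch below assumes each record pairs a test score with an appraisal category; the score bands and the data are invented.

    from collections import Counter

    records = [(35, 'Good'), (8, 'Poor'), (27, 'Very good'), (14, 'Average'),
               (31, 'Good'), (22, 'Average'), (9, 'Poor'), (26, 'Good')]   # invented records

    bands = [(30, 40, '30-40'), (25, 29, '25-29'), (20, 24, '20-24'),
             (11, 19, '11-19'), (0, 10, 'Under 11')]

    def band_label(score):
        for low, high, label in bands:
            if low <= score <= high:
                return label

    counts = {label: Counter() for _, _, label in bands}
    for score, appraisal in records:
        counts[band_label(score)][appraisal] += 1

    for _, _, label in bands:
        total = sum(counts[label].values())
        row = {k: round(100 * v / total) for k, v in counts[label].items()} if total else {}
        print(label, row)   # percentage of employees in each appraisal category per band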
Necessary But Not Sufficient

Some people argue that mental ability is necessary but not sufficient for good work performance. Table 3.3 shows that few accountants had an IQ more than 15 points below the accountant average, whereas quite a few lumberjacks had an IQ well over their average of 85. Assuming the latter had not always wanted to be lumberjacks, the data imply they could not or did not use their mental ability to find more prestigious work. Perhaps they lack some other important quality: energy, social skill, good adjustment or luck.
Incremental Validity

Mental ability tests by and large predict work performance quite well. What other assessments are worth using alongside them that will give a better prediction still? There is no point adding new tests to a selection battery if they cover the same ground as tests already in use. Personality tests, work samples and structured interviews seem likely to offer incremental validity on mental ability tests, whereas assessment centres may not. Traditional interviews may not add much to ability tests, because the traditional interview tends to assess intellectual ability, or be strongly influenced by it.
Table 3.3 Average IQ scores of accountants and lumberjacks, and scores for highest and lowest 10%

                Lowest 10%    Average    Highest 10%    Average minus lowest    Highest minus average
Accountants        114          129          143                 15                      14
Lumberjacks         60           85          116                 25                      31
Predicting Different Aspects of Work Performance

Supervisor ratings make a global estimate of work performance, and can be fairly crude. Some employers have developed more precise and differentiated accounts of work performance. During the 1980s the American armed services carried out the world’s largest and most expensive selection research. They started by analysing military performance, to identify five general themes, the first four of which apply equally well to civilian employment:
• general proficiency
• technical proficiency
• effort and leadership
• personal discipline
• fitness and military bearing.
They went on to show that the two core measures of job performance, technical proficiency and general proficiency, are best predicted by general mental ability. The other three aspects—effort and leadership, personal discipline, fitness and military bearing—have less to do with mental ability, but are better predicted by personality measures. These results suggest that mental ability tests tell us what people can do, while personality tests tell us what they will do.
Mental Ability, Job Knowledge and Work Performance

More sophisticated analysis shows that more able people are better workers primarily because they learn more quickly what the job is about. In high-level work this may mean learning scientific method, scientific techniques and a large body of knowledge. In low-level work it may mean learning where the stock is kept and what it costs. There seems to be no direct link between mental ability and work performance (Figure 3.2).
Figure 3.2 Schematic diagram showing the paths from mental ability to work performance. The paths running from mental ability to job knowledge and from job knowledge to work performance are substantial (0.50 and 0.45), while the direct path from mental ability to work performance is only 0.05.
ONE ABILITY OR MANY?

So far we have been talking about tests that produce a single score, for general mental ability. An alternative approach assesses from 6 to 12 abilities or aptitudes. Table 3.4 lists the eight aptitudes assessed by the Differential Aptitudes Test (DAT). Assessing eight abilities takes longer than assessing just one; administering the entire DAT will take all morning or all afternoon. In theory multiple aptitude batteries should give more accurate predictions of work performance, on the assumption that each job requires a different profile of abilities. Table 3.5 gives a fictitious example for three professional jobs. The architect needs spatial ability and some numerical ability. The accountant needs numerical ability and some verbal ability to write reports. The lawyer needs verbal ability but also some numerical ability. The multiple aptitude battery can be used to generate weighted prediction equations for different jobs, in which each aptitude score is given a different weight according to how well it predicts performance. However, it is important to remember that the abilities assessed by the DAT are highly correlated; people who get high scores on the verbal test tend to get high scores on the numerical test. Large differences between scores within one person are the exception rather than the rule. This suggests that the extra time needed to administer
Table 3.4 The eight aptitudes measured by the Differential Aptitudes Test

• Verbal reasoning: verbal analogies, e.g. cat is to kitten as dog is to . . .
• Numerical reasoning: simple arithmetic
• Abstract reasoning: a series of four diagrams in which, e.g. a line is inclined in an increasing angle from the vertical, the task being to select the correct fifth diagram
• Clerical speed and accuracy: reading which of five pairs of letters is underlined in the question book, and marking the same pair on the answer sheet, working to a tight time limit
• Mechanical reasoning: understanding mechanical principles and devices
• Space relations: being able to say which of four boxes a two-dimensional cut-out would create
• Spelling: detecting spelling errors
• Language usage: detecting incorrect grammar or punctuation
Table 3.5 Weights that might be given to three abilities for three professional jobs

              Lawyer    Architect    Accountant
Verbal          ×2          —            ×1
Spatial         —           ×2           —
Numerical       ×1          ×1           ×2

Notes: ×2, double weighted; ×1, single weighted; —, that ability is not needed for that job.
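If weights like those in Table 3.5 were used, scoring a candidate would be a simple weighted sum. The sketch below just implements the table’s illustrative weights (the candidate’s scores are invented), and, as the text goes on to note, such job-specific weighting rarely beats a single general ability score in practice.

    weights = {                      # fictitious weights from Table 3.5
        'lawyer':     {'verbal': 2, 'spatial': 0, 'numerical': 1},
        'architect':  {'verbal': 0, 'spatial': 2, 'numerical': 1},
        'accountant': {'verbal': 1, 'spatial': 0, 'numerical': 2},
    }

    def composite(scores, job):
        # Weighted sum of the candidate's aptitude scores for a given job
        return sum(weights[job][aptitude] * score for aptitude, score in scores.items())

    candidate = {'verbal': 60, 'spatial': 45, 'numerical': 55}   # e.g. standard scores (invented)
    for job in weights:
        print(job, composite(candidate, job))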
the whole of the DAT may not add much to the prediction of work performance. Several large-scale analyses have generally confirmed this; we do not get a better prediction of work performance by using an aptitude battery and developing different scoring systems for different jobs. One researcher even tried using the ‘wrong’ scoring system, e.g. selecting mechanics using the electricians’ scoring, and found it gave just as good results as using the ‘right’ scoring. (Test publishers have a habit of ignoring these findings, and selling ranges of aptitude batteries allegedly tailored to different occupations, e.g. Psychological Technician Aptitude Battery, but probably achieving most of their success by measuring general mental ability. This is good marketing but not very good psychology.)

Assessing multiple aptitudes may be useful sometimes, where a specific ability proves vital in a particular job. Trainee military pilots with poor visuo-spatial ability tend to fail pilot training, regardless of general mental ability. Visuo-spatial ability means seeing spatial and three-dimensional relationships, and is assessed by tests such as deciding which cubes in a three-dimensional array of seven to ten cubes touch a designated cube. Flying is very definitely a visuo-spatial task, so this is not surprising. Fuel tanker drivers with poor selective attention make more mistakes, again regardless of general mental ability. Higher level jobs may be more likely to need specific abilities, so testing may sometimes need to include more than just general mental ability.
THINGS WE PROBABLY DO NOT NEED TO WORRY ABOUT

There are some plausible suggestions about ability tests for which there is no evidence, so we may not need to worry about them.
Do We Need Both Motivation and Ability?

It is plausibly argued that people need both ability and motivation to succeed in work; lazy geniuses achieve little, while energetic but dim people are just a nuisance. Recent research tested this for 22 different jobs and found no evidence for it (Sackett et al., 1998).
Social Class and Education

Sociologists have argued that any apparent link between occupation and mental ability is a creation of the class system, or, as we would say today, is an example of social exclusion. Children from better-off homes get better education, so do better on mental ability tests, which are in any case heavily biased towards the middle classes; better-off children go on to get better-paid jobs. On this argument there is no true link between mental ability and work performance; psychological tests are merely class-laden rationing mechanisms. The social class argument is undermined by analyses that study the same group of people over time (Wilk and Sackett, 1996). Ability scores in 1980 predict whether people move up or down the occupational
ladder between 1982 and 1987; brighter persons move up into work of greater complexity, while less able persons move down into work of lesser complexity. This implies that the less able find they cope less well with complex work, and gravitate to work more within their intellectual grasp—something which would not happen if the test was just an arbitrary class-based way of keeping some people out of better jobs. Other research (Barrick et al., 1994) finds that lower mental ability goes with poorer job rating, which in turn goes with losing one’s job during downsizing.
Threshold or Linear?

A widely held ‘commonsense’ view claims that, above a certain minimum level, most people are capable of most jobs. All that tests can accomplish is to screen out the unfortunate minority of incompetents. This view implies a threshold or step in the relation between test scores and job proficiency (Figure 3.3). An early study (Mls, 1935) found a clear break in truck driving proficiency at approximately IQ 80; any Czech soldier with an IQ over 80 was equally competent to drive a truck, while all those whose IQ fell below 80 were equally unfit to be trusted with an army vehicle. Linearity means work performance improves steadily as test score increases, with no step or threshold. The threshold versus linearity issue has important fair employment implications:
• linearity implies candidates should be placed in strict order of test score, because the higher the test score, the better their job performance;
• threshold implies all applicants within a broad band of scores are equally suitable, so the employer can select for diversity as well as for effective performance.
Figure 3.3 Linear versus threshold models of mental ability and work performance (work performance plotted against ability: the linear model rises steadily across the whole range, while the threshold model shows a step at a minimum level).
Several large analyses of test data show test–performance relationships are generally linear. This means selecting for diversity and effectiveness may be incompatible (and implies that the Czech army truck drivers were atypical).
PROBLEMS IN THE USE OF MENTAL ABILITY TESTS

Bias

Item 10 in Figure 3.1 may be biased against people who earn too little to care how much tax their income attracts. Tests are usually checked for obvious sources of bias, such as gender, as part of the development process. Less obvious biases could still creep in, so it is always important to read a test carefully before using it, and consider whether any items might create a problem.
Recommendation: Before using a test, read it through carefully, and ask of every question whether it might create problems in your organisation or with the people you intend to assess. If possible, complete the test yourself.
Coaching

Practice and coaching improve ability test scores by about seven or eight IQ points on the same test, and about three to four points on similar tests. This is quite enough to affect selection decisions if some candidates have had practice and coaching and some have not. Practice is a potential problem in graduate recruitment where applicants may be repeatedly tested. Large employers can avoid practice and coaching effects by using closed tests, ones they produce themselves and do not allow anyone else to use. The problem is far worse in the USA, where practice and coaching are provided on a commercial basis. One pessimistic psychologist says American employers should not use the same test more than once; if they do, ‘the only applicants who will not receive a near perfect score would be those who couldn’t afford the $1,000 . . . for a “test-preparation seminar”’ (Barrett, 1997). Apparently some US professionals will misuse their access to test material to sell test takers an unfair advantage. The same observer says even closed tests can be compromised; the first set of people to take the test conspire to reconstruct it by memorising items, then give or sell their reconstruction to subsequent intakes. European test users should be on the alert in case abuses like this cross the Atlantic.
Adverse Impact

Selection tests that find large differences between key sections of the population can cause serious problems. A selection test that creates adverse impact requires careful justification, possibly in tribunal or court, which will cost the employer time and
money and may generate bad publicity. It is easier to use tests that do not find any differences between men and women, or whites and other ethnic groups. Differences in intellectual ability are much more controversial than differences in, for example, height. Gender differences are not generally a problem, although some research reports women score higher on verbal tests while men score higher on numerical. Age differences are probably not important within the 16–65 range. In the USA, quite large differences between white Americans and certain minority groups are found, which create major problems for employers. Ethnicity differences in ability tests remain a big unknown in the UK and most of the rest of Europe. Research in the Netherlands has found ethnicity differences in DAT scores, probably related to how long people have lived in the Netherlands and whether Dutch is their first language. All British and European employers would be wise to allow for the possibility of group differences. At the minimum they should keep careful records of test scores, and analyse them periodically to check whether gender or ethnicity differences are present.

Recommendation: If possible, and when numbers permit, analyse your test data to see if any gender, ethnicity or disability differences are arising.
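Monitoring of this kind needs nothing more elaborate than comparing group averages. The sketch below expresses a male–female difference in standard-deviation units; the scores are invented, the groups and the statistic are only one possible choice, and in practice the analysis should cover every group for which numbers are large enough.

    import statistics

    scores = {'men':   [24, 31, 27, 22, 29, 26, 33, 25],    # invented test scores
              'women': [26, 30, 28, 24, 32, 27, 29, 31]}

    def d_statistic(a, b):
        # Mean difference divided by a pooled within-group standard deviation
        var_a, var_b = statistics.pvariance(a), statistics.pvariance(b)
        pooled_sd = ((len(a) * var_a + len(b) * var_b) / (len(a) + len(b))) ** 0.5
        return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

    print(round(d_statistic(scores['men'], scores['women']), 2))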
Differences between groups create major problems when using tests in selection, and many systems of using test scores have been proposed.
Simple Quota

If 10% of the population are Fantasians, the employer reserves 10% of vacancies for them. This is direct discrimination, and so is illegal.
Top-Down Quota

The top-down quota is a possible compromise. The employer decides what proportion of persons appointed shall come from a minority, then selects the best minority applicants even though their test scores may be lower than those of majority persons not appointed. This is effectively a formal quota for minorities, but one which selects the most able minority applicants.
Separate Norms

The US Employment Service tried separate norms for white and minority Americans in the 1980s; a raw score of 300 on their test translated into a percentile of 45 for white Americans, compared with a percentile of 83 for African Americans. Separate norms have the advantage of avoiding setting a formal quota, which often proves a focus of discontent.
Both top-down quota and separate norms represent a reasonable compromise between selecting the best applicants and ensuring a representative workforce. However, both were forbidden in the USA by the Civil Rights Act 1991. Neither system is formally prohibited in the UK but both could be viewed as direct discrimination, so are considered unsafe.
Score Bands

Score banding means raw scores between, e.g. 25 and 30 are regarded as equivalent. Current banding systems are based on error of measurement. If a test has an error of measurement of, e.g. five raw score points, one can argue that all candidates within the five-point band should be regarded as equal. The employer can then give preference to minority persons within the five-point band without engaging in illegal reverse discrimination. Unfortunately, this will not look fair at all to the unsuccessful candidate who scores one point outside the band, and who does not differ reliably from most of those appointed, using exactly the same reasoning and calculation as are used to define the band.
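In code, banding amounts to treating everyone within the error of measurement of the top scorer as tied. The sketch below uses the five-point band from the example above; the candidates and their scores are invented, and the fairness objection raised in the text applies to the output just as it does to the method.

    candidates = {'Ahmed': 31, 'Beth': 29, 'Carlos': 27, 'Dee': 25}   # invented raw scores
    error_of_measurement = 5

    top_score = max(candidates.values())
    band = [name for name, score in candidates.items()
            if score >= top_score - error_of_measurement]

    print(band)   # everyone within 5 points of the top scorer is treated as equivalent
    # Dee, one point outside the band, is excluded even though she does not
    # differ reliably from most of those inside it.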
TECHNICALITIES OF VALIDITY RESEARCH

Validity research can be discouraging to the lay reader, because it sometimes seems to give depressingly poor results, suggesting tests do not predict work performance very well. This impression can be misleading, because we need to take account of the limitations of workplace selection research:
• We only have successful candidates to research on. The technical term is restricted range.
• We rarely have a good measure of work performance. The technical term is unreliable criterion.
• We need large numbers for validity research, which we rarely have. The technical term is sampling error.
Restricted Range

We use the test to decide who to hire, so will tend not to hire low scorers. Suppose only the top 50% of test scorers are hired: the other 50% who are not hired cannot contribute data to the validity coefficient, because they did not get their job and we therefore have no data on their work performance. When, some time later, researchers attempt to associate high test scores with good job performance, it often comes as a surprise to HR managers that the correlation coefficient is very low. Figure 3.4 shows the effect on the validity coefficient; a clear relationship between test score and work performance is largely obscured. Validity should ideally be calculated from all applicants, but usually has to be estimated from successful applicants. Few organisations can afford to employ people judged unsuitable simply to allow psychologists to
calculate better estimates of validity. Validity coefficients can be statistically corrected for restricted range, which usually increases them considerably.

Figure 3.4 The effect of range restriction in validity research. Plotting performance at work against test scores for all applicants shows a clear positive correlation over the whole range; restricting the range to the high test scorers who are actually hired leaves almost no correlation between test score and work performance.
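The shrinkage shown in Figure 3.4 is easy to reproduce by simulation. The sketch below generates applicants whose true test–performance correlation is 0.50, then keeps only the top half of test scorers, mimicking the fact that low scorers are never hired; the data are simulated and the exact numbers will vary from run to run.

    import random

    random.seed(1)
    true_r = 0.50
    applicants = []
    for _ in range(5000):
        test = random.gauss(0, 1)
        performance = true_r * test + (1 - true_r ** 2) ** 0.5 * random.gauss(0, 1)
        applicants.append((test, performance))

    def corr(pairs):
        xs, ys = zip(*pairs)
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in pairs)
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return sxy / (sxx * syy) ** 0.5

    hired = [p for p in applicants if p[0] > 0]                # only the top half are hired
    print(round(corr(applicants), 2), round(corr(hired), 2))   # full-range r versus restricted r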
Criterion Reliability

Research on selection needs a criterion—a quantifiable index of successful work performance—which is difficult to find. Supervisor rating is the most widely used, but has fairly poor reliability; one supervisor’s opinion of John Smith will not agree all that well with another supervisor’s. If we cannot decide for sure who is and is not a good worker, it is obviously difficult to achieve highly accurate selection. An unreliable criterion necessarily reduces the correlation between assessment and criterion. Validity coefficients can be statistically corrected for unreliable criteria, which usually increases them considerably. Correcting for both restricted range and criterion reliability usually has the effect of doubling the size of the correlation and so the reported validity of the test, so if you are shown validity data it is a good idea to ask whether it has been ‘corrected’.
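The two corrections mentioned here use standard formulas. The sketch below applies them to an invented observed validity of 0.25, assuming a criterion reliability of 0.60 and an applicant-to-hire spread ratio of 1.5; actual corrections should of course use values estimated for the study in hand.

    def correct_for_criterion_unreliability(r, criterion_reliability):
        # Attenuation correction: divide by the square root of the criterion's reliability
        return r / criterion_reliability ** 0.5

    def correct_for_range_restriction(r, u):
        # u = SD of test scores among all applicants / SD among those hired (u > 1)
        return u * r / (1 + (u ** 2 - 1) * r ** 2) ** 0.5

    observed = 0.25
    step1 = correct_for_range_restriction(observed, u=1.5)
    step2 = correct_for_criterion_unreliability(step1, criterion_reliability=0.60)
    print(round(step1, 2), round(step2, 2))   # the corrected estimate is nearly double the observed value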
Sampling Error

Calculating the relationship between test and work performance needs large numbers. Estimates based on small numbers can be misleading because a few unusual people can affect the results; ideally we should have data from 200 people. However, most employers do not have this many workers doing the same work. Work psychologists advise them to rely on evidence from large employers doing similar work, and not to rely on estimates from their own smaller work force, which could be misleading.
Psychological tests are never perfectly reliable; if the same person does the test twice at an interval of three months, he/she will not give exactly the same answers and will not get exactly the same score.
CASE STUDY

This case study considers an application of psychological testing in selection. The results of a set of ability scales are used, together with other information about the applicant from application form, CV, interview, references and a brief presentation.
Background to the Selection Exercise

Newpharm, the pharmaceutical company based in the south east, needs to recruit a Learning Resources Officer, to promote and deliver learning and development among the 750 employees working at the production site, which also doubles as administrative and distribution headquarters. The HR Manager, John White, has prepared a person specification, shown below in the left-hand column. He next uses the person specification to construct a battery of tests and selection methods, shown in the right-hand column.

Person specification                                            Assessment method
Physical make-up: presentable, confident appearance             Observe at interview
Education: graduate level                                       Verify application form and CV; request certificate originals
Job training: CIPD graduate; CPD in learning and development    Reference check; CIPD check
Experience: 5–7 years in industrial, business learning,
  development and training environment                          Reference check with last employer
Ability
  General mental ability                                        Advanced Raven Progressive Matrices
  Verbal ability                                                 Graduate Managerial Assessment—Verbal; DAT language usage
  Creativity                                                     Guilford divergent thinking test
Special aptitudes
  IT skills                                                      AC exercise on IT skills
  Presentation skills                                            AC exercise: 10-minute presentation
  Budgeting and cost analysis                                    Graduate Managerial Assessment—Numerical
  Driving                                                        Interview
[to be continued in the next chapter]
The results are as follows:

Newpharm Selection Centre for position of Learning Resources Officer
Candidate: Alec Newman

Assessment results

Person specification                 Assessment method                            Result
Physical make-up: presentable
  appearance                         Interview                                    Appropriately dressed; confident approach
Education: graduate level            Application form                             BA Business Studies; MA in HRM
Job training: CIPD graduate          CIPD check                                   Satisfactory
Experience: 5–7 years                Reference check                              Satisfactory
Ability
  General mental ability             Advanced Raven Progressive Matrices          95th percentile
  Verbal ability                     Graduate Managerial Assessment—Verbal        70th percentile
                                     DAT language usage                           80th percentile
  Creativity                         Guilford divergent thinking test             60th percentile
Special aptitudes
  IT skills                          AC exercise on IT skills                     Rated 4 (out of 5)
  Presentation skills                AC exercise: presentation                    Rated 3
  Budgeting and cost analysis        Graduate Managerial Assessment—Numerical     30th percentile
  Driving                            Interview                                    Has clean licence
[to be continued in the next chapter]
The assessment data for Alec Newman indicate so far that he is generally a good match for the post of Learning Resources Officer at Newpharm, although his numerical ability, assessed to meet the budgeting and cost part of the person specification, is rather low.
CHAPTER 4
Personality Tests
OVERVIEW

• There are various ways to assess personality, of which the questionnaire is the most popular.
• There are various ways of writing personality questionnaires.
• The ‘big five’ model describes five broad themes in personality.
• Personality questionnaires have limited validity for predicting work performance.
• Personality questionnaires do better at predicting other aspects of work behaviour, including effort, citizenship, leadership and career success.
• Personality questionnaires are also used to screen applicants for dishonesty and other undesirable behaviours.
• There is not enough research on team personality and team performance.
• ‘Faking good’ is a problem with personality questionnaires, and various ways have been tried to solve the problem.
• Gender differences are found in some questionnaire scores.
• Some items in personality questionnaires may be considered intrusive.
INTRODUCTION

‘Personality test’ usually means questionnaire or inventory: a list of questions or statements, from 12 to 600 long. Questionnaires are very economical; instead of watching someone to see if he/she talks to strangers, which could take anything from 5 minutes to a day, we ask: ‘Are you afraid of talking to strangers?’, which takes about 15 seconds. In 15 minutes we can ask 100 questions, to 20 applicants at once. The questions can cover thoughts and feelings, as well as behaviour: ‘Do you often long for excitement?’ or ‘Do you sometimes worry about things you wish you had not done?’. This makes the questionnaire versatile as well as economical.
Personality tests use ‘closed’ questions; the answer to ‘Do you mind being interrupted?’ has to be ‘yes’ or ‘no’, never ‘it all depends’. Personality inventories use a range of ways of phrasing their questions, various ways of interpreting the answers and two main ways of choosing the questions to ask.
Types of Questions

Figure 4.1 shows the three forms of questions used in inventories: true/false (or yes/no), rating and forced choice, between equally attractive or unattractive answers.
Figure 4.1 Three types of question in personality inventories

True/false
1. I always arrive early for meetings.               true / false
2. I liked Gulliver’s Travels by Jonathan Swift.      true / false / don’t know
3. Do you ever tease animals?                         yes / no

Forced choice
4. On your day off would you rather play football OR watch football?
5. Would you rather be Dr Crippen OR Jack the Ripper?
6. Which of these words do you prefer?                money / mystery
7. Circle the word MOST like you and circle the word LEAST like you:   pushy   ruthless   well-liked   smooth

Rating
8. I wait to find out what important people think before offering an opinion.
   never / occasionally / sometimes / usually / always
9. I like to spend my weekends with my family.
   very much  5  4  3  2  1  not at all
10. I feel tired after a day’s work.
   never  7  6  5  4  3  2  1  always
Interpreting the Answers

Each question in the questionnaire is keyed to a personality trait. The answer true to question 1 in Figure 4.1 is keyed for conscientiousness. The first step therefore is to count how many ‘conscientious’ answers the person has given, which is the raw score. The second step is to compare the raw score with normative data. Scores on a personality inventory need to be related to a clearly defined group—people in general, bank managers, bus drivers, students etc. Good normative data are vital, and are the first key difference between a useful test and worthless imitations of a personality test. Most inventories offer some occupational data, usually for managerial or professional groups. Some inventories also have normative data based on large, representative cross-sections of the general population—16PF, Occupational Personality Questionnaires (OPQ) and California Psychological Inventory (CPI)—so they can make statements about, for example, how the person tested compares with British people in general. Various systems of interpreting raw scores are used, which have been described in Chapter 2. Some inventories, such as the CPI, use T scores; others, such as the 16PF and OPQ, use sten scores.
Recommendation: Check the normative data for any inventory you might consider using. Is it relevant, up to date, and based on a large enough sample?
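A raw score only becomes meaningful once it is placed against the normative group’s mean and standard deviation. The sketch below shows the usual conversions to T scores and stens (described in Chapter 2); the raw score and the normative mean and standard deviation are invented.

    def to_t_score(raw, norm_mean, norm_sd):
        z = (raw - norm_mean) / norm_sd
        return 50 + 10 * z                     # T scores: mean 50, SD 10

    def to_sten(raw, norm_mean, norm_sd):
        z = (raw - norm_mean) / norm_sd
        sten = 5.5 + 2 * z                     # stens: mean 5.5, SD 2
        return max(1, min(10, round(sten)))    # stens are reported on a 1-10 scale

    raw_score = 17                             # e.g. number of 'conscientious' answers
    print(round(to_t_score(raw_score, norm_mean=14, norm_sd=4), 1))
    print(to_sten(raw_score, norm_mean=14, norm_sd=4))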
Choosing the Items

It is very easy to assemble twenty questions, and call the result a leadership test; magazines are full of ‘tests’ like that. A proper personality test goes one crucial stage further: checking that the questions really assess leadership, and discarding the ones that do not. There are two main ways of doing this:
• Empirical: the questions are included because they predict ability to lead. The inventory is empirically keyed using criterion groups of people of known characteristics. The CPI developed its Dominance scale from answers that differentiated college students nominated by their classmates as leaders from those seen as followers. Only questions that distinguish leaders from followers are kept in the final list. This empirical keying is done twice to make sure the questions in the finished test consistently distinguish leaders from followers.
• Factorial: the questions have a common theme of ability to lead. Statistical analysis checks whether the items all relate to a common theme or factor. Questions that look as if they ought to assess leadership, but in fact do not, are discarded. Cattell’s 16PF research was one of the earliest uses of the factorial approach; Cattell (1965) found 16 factors or themes in his original set of questions.
In practice most tests use a mix of both approaches. Factorial approaches tend to be more popular because they are quicker and cheaper. However, there is no real substitute for showing that your leadership scale actually can distinguish leaders from followers.
Internal Reliability

There is a third type of reliability, often calculated for personality questionnaires. Consider the statements in Table 4.1. Do they have anything in common? Do they all relate to one aspect of personality? No: they have been chosen precisely because they do not. There is no logical connection between liking taking charge of people and paying bills on time, and there is unlikely to be any in practice. There is unlikely to be any correlation between any of the items. The answer to one question will give us no clue to the likely answer to any of the others. Suppose we regarded the five statements as a scale that measured, for example, sales aptitude, scored the answers, and added them up to give a sales aptitude score. The resulting number would be completely meaningless. Internal reliability checks whether the items in a scale do hang together and relate to a common theme. All modern personality questionnaires have been checked for internal reliability. Home-made questionnaires may turn out to have very low internal reliability, meaning they are not really measuring much. Test publishers like internal reliability, and usually include it in their manuals, because it is easy to calculate, and because they do not need to find people prepared to complete the inventory twice.
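The most widely reported index of internal reliability is coefficient alpha, which compares the spread of total scores with the spread of the individual items. The sketch below computes it for a small invented data set; a scale made of unrelated items like those in Table 4.1 would produce a value near zero.

    import statistics

    def cronbach_alpha(item_responses):
        # item_responses: one list of scores per item, all for the same people
        k = len(item_responses)
        item_variances = sum(statistics.pvariance(item) for item in item_responses)
        totals = [sum(person) for person in zip(*item_responses)]
        total_variance = statistics.pvariance(totals)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Five people answering three related items (invented 1-5 ratings)
    items = [[4, 2, 5, 3, 4],
             [5, 2, 4, 3, 5],
             [4, 1, 5, 2, 4]]
    print(round(cronbach_alpha(items), 2))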
Table 4.1 Set of personality inventory items and answers

Statement                                   Answer
1. I worry a lot about the future.          True
2. I like taking charge of people.          True
3. I like learning new facts.               True
4. I like to be pleasant to people.         True
5. I always pay my bills on time.           True

THE BIG FIVE

Different theories of personality have proposed different numbers of personality traits or factors. Hans Eysenck’s approach includes only two—extraversion and anxiety (he later added a third, ‘psychoticism’, to describe people who do not relate well to others and may be indifferent to others’ feelings)—while some versions of SHL’s OPQ assess as many as 32. Cattell is famous for his 16-factor model.
Table 4.2 The ‘big five’ model of personality

Big five factor      Alternative names                   Alternative names (opposite)
Neuroticism          Anxiety; Emotionality               Emotional stability; Emotional control
Extraversion         Surgency; Assertiveness             Introversion
Openness             Culture; Intellect                  Dogmatism; Closedness
Agreeableness        Likeability; Friendly compliance    Antagonism
Conscientiousness    Will to achieve; Diligence          Negligence
Recently the big five personality model (Table 4.2) has become very popular. Statistical analysis of personality data consistently finds five separate personality factors. A model that includes more than five factors will find that one or more of them overlap; thus most scales on the 16PF correlate with one or more other scales. The big five personality factors emerge from personality tests in many different cultures—the USA, the UK, Germany, the Netherlands, Israel, Russia, Japan and China—so the model has more promise of providing a culturally universal account of personality (although less developed countries are not well represented). The best known big-five questionnaire is the NEO.
Personality Profiles and Error of Difference

Figure 4.2 shows a CPI profile. The profile presents all 20 scores in a way that shows whether each is above or below average, i.e. 50, and how far above or below, so it is easy to compare them at a glance. The score for Dominance is the highest, whereas the score for Socialisation is far below the average, indicating a person with a very low level of social maturity and integrity. The combination of someone very keen to exert influence on others but lacking any moral standards suggests someone liable to lapse into dishonesty. The profile belongs to the proprietor of a dating agency which was accused of taking money but failing to provide a worthwhile service. Score profiles are a neat, quick way of presenting the data, but may encourage over-interpretation—reading more into the difference between two scores than is justified. Personality tests are less reliable than ability tests; the difference between two scores is doubly unreliable, so the difference between two scores on a CPI or 16PF profile has to be quite large to merit interpretation (Figure 4.3). It would be a mistake to give someone career advice on the strength of the difference between two scores, if the difference was liable to disappear, or reverse itself, when the person did the test again.
Figure 4.2 CPI profile for ‘John Smith’ (T scores for the 20 CPI scales, plotted on a scale from 20 to 80 with the average at 50).

Key to the scales: Do Dominance; Cs Capacity for status; Sy Sociability; Sp Social presence; Sa Self-acceptance; Wb Well-being; Re Responsibility; So Socialisation; Sc Self-control; To Tolerance; Gi Good impression; Cm Communality; Ac Achievement via conformance; Ai Achievement via independence; Ie Intellectual efficiency; Py Psychological-mindedness; Fx Flexibility; Fm Femininity/masculinity; Em Empathy; In Independence.
Taking 0.80 as a general estimate of CPI scale reliability gives a general estimate for error of difference for a pair of CPI scores of 6.3 T score points. This means that differences of more than 6 or 7 points will arise by chance in 1 in 3 comparisons, and differences of 12 or 13 points will arise by chance in 1 in 20 comparisons. If we are making an important decision based on the difference between two CPI scores, we might want a difference of 13 points or more. In Figure 4.2, ‘Mr Smith’s’ Dominance and Socialisation differ by much more than 13 points, so we can interpret the difference with confidence.
Figure 4.3 Error of difference
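The 6.3-point figure quoted above comes from combining the error of measurement of the two scales being compared. A sketch of the calculation, assuming both scales have reliability 0.80 and are reported as T scores with a standard deviation of 10 (the profile scores at the end are invented, in the spirit of Figure 4.2):

    import math

    sd, reliability = 10, 0.80                  # T scores; reliability 0.80 for both scales
    sem = sd * math.sqrt(1 - reliability)       # error of measurement of one scale
    error_of_difference = math.sqrt(2) * sem    # error of the difference between two scales
    print(round(error_of_difference, 1))        # about 6.3 T score points

    # Rough rule used in the text: only interpret a profile difference well beyond
    # this value, e.g. a gap of 13 points or more between two scales.
    dominance, socialisation = 72, 28           # invented T scores
    print(abs(dominance - socialisation) > 2 * error_of_difference)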
Recommendation: Be careful when comparing scores between personality traits, and check whether the difference is large enough to base decisions on.
USING INVENTORIES IN PERSONNEL SELECTION

Opinion on the value of personality inventories in selection has shifted over time. Once they were widely dismissed as having little contribution to make. Critics
talked of a ‘0.30 barrier’, meaning tests never correlate with work performance better than 0.30, and so never predict more than a tenth of the variation of employee performance. Since about 1985 personality tests have grown steadily more popular, and more and more new measures have appeared. Does the evidence justify this new optimism? Inventories are used to answer four main questions:
1. Does the applicant have the right personality for the job?
2. Will the applicant be able to do the job?
3. How will the applicant behave in the workplace?
4. Does the applicant have any problems that will interfere with work?
The first two questions look similar but are not the same; question 1 is answered by comparing bank managers with people in general, while question 2 is answered by comparing successful and less successful bank managers. Questions 1–3 are discussed in the following subsections; question 4 is discussed later, in the section on Personality Screening Tests.
Question 1: The Right Personality?

Different sorts of British managers have different personalities, on average: R&D managers score higher on 16PF Conscientiousness, while finance managers score higher on 16PF Astuteness. American research finds similar differences with the Myers Briggs Type Indicator, especially Thinking/Feeling (Schneider et al., 1998). Some organisations attract, select or retain people who base decisions on logic (Thinkers), while others are largely staffed by people who base decisions more on their own or others’ feelings. Some employers seem to want a book of perfect personality profiles for managers, sales staff, engineers etc. Test manuals meet this demand to some extent, by giving normative data for different occupations. The perfect profile approach contains several pitfalls:
• Most perfect profiles derive from people doing the job, taking no account of how well they do it.
• A perfect profile may show how well people have adapted to the job’s demands, not how well people with that profile will do the job.
• The perfect profile approach encourages cloning—selecting as managers only people who resemble as closely as possible existing managers. This may create great harmony and satisfaction within the organisation, but could make the organisation very vulnerable when faced with the need to change. Diversity of personality is often an advantage.
Recommendation: Do not assume that choosing new staff with the same personality as present staff is automatically a good idea.
Interests and Values

Career or vocational interests can be assessed by questionnaire. These are more frequently used in career counselling than in selection: the applicant assesses him/herself to decide what jobs to apply for. Interest inventories may be less useful in selection, because they are easily faked. Extensive analysis of interest measures indicates there are six broad themes in people’s work interests (Table 4.3).
Question 2: Ability To Do the Job?

The second, more important, question is whether inventories can select people who will do their job well. A very large number of researches have been carried out, for many jobs and many personality tests. Most recent reviews have employed the big five framework. Figure 4.4 shows that conscientiousness best predicts work performance in general. The other four of the big five—neuroticism, extraversion, openness and agreeableness—do not predict work performance to any significant extent. At first sight Figure 4.4 presents a disappointing picture. Even allowing for the limitations of selection research, personality tests cannot offer a better prediction of work performance than a correlation of 0.23. They cannot even reach the ‘0.30 barrier’, let alone break it.
Lighthouse Keepers and Holiday Reps

Perhaps some jobs need extraverts, while others are better done by introverts, which we miss by pooling all types of work. We have no research on lighthouse keepers, but comparing five very broad classes of work reveals some differences:
Table 4.3 Six general themes in career interests

Theme           Description                                                          Typical careers
Realistic       Enjoys technical material and outdoor activities                     Armed services; Skilled trades
Investigative   Interested in science and investigation                              Scientist; Engineer
Artistic        Enjoys self-expression, enjoys music, drama or art                   Art teacher; Music teacher
Social          Interested in helping others and work that involves other people     Teacher; Social worker
Enterprising    Interested in power and political strength                           Sales manager; HR director
Conventional    Likes to be well organised, happy with rules and regulations         Office worker; Accountant
Figure 4.4 Personality and proficiency at work: correlations of the big five factors ((low) neuroticism, extraversion, openness, agreeableness and conscientiousness) with work performance. Conscientiousness shows the largest correlation, around 0.23; the other four factors show little relationship. Source: Data from Barrick et al. (2001).
• (Low) neuroticism gives a better prediction of success in police work and skilled/semi-skilled work. (Manual jobs are often stressful, a point managers do not always appreciate! This would explain why very anxious people do them less well.)
• Extraversion gives a better prediction of success in management and police work.
• Extraversion and conscientiousness predict sales performance.
• Agreeableness and openness give a better prediction of success for customer service work. Agreeableness seems important for work where cooperation and teamwork are important (as opposed to work which simply involves interacting with clients, e.g. hotel work).
• While conscientiousness is generally desirable in employees, research suggests it may be a drawback in some jobs, encouraging indecision and bureaucracy. Some UK data on managers find high conscientiousness scores associated with lower ratings of promotability, confirming that high conscientiousness may not always be an asset in management.

The evidence we have looked at so far indicates that personality questionnaires do not seem very good at answering the question ‘Will X be able to do this job well?’. Possibly questions about ability are better answered by ability tests.
Recommendation: Do not rely on personality questionnaires to select staff with the ability to do the job well.
Question 3: Behaviour at Work

There may be other aspects of work performance where personality is more relevant, and which are just as important to the organisation as proficiency. These include:
• leadership
• dependability
• citizenship
• absence
• training performance.
Leadership

Can personality tests help select good leaders? Reviews of research find modest relationships for (low) neuroticism and extraversion. In civilian management, openness is linked to effective leadership, while in military and government, conscientiousness is linked (Figure 4.5). This makes sense. In business a leader has to be open to new ideas, while in the government and military it may be more important to follow rules and procedures carefully. Other research has linked the big five to transformational or charismatic leadership. (Transformational leaders inspire their followers and create a vision for them to follow. This can be very important in some sorts of managers.) Figure 4.6 shows that transformational leaders are higher on extraversion, openness and agreeableness, but not (low) neuroticism and conscientiousness. The striking difference is on agreeableness: conventional leaders do not have to be likeable, but transformational leaders do need to be able to win trust and make people like them.
Figure 4.5 Personality and leadership in business and in government/military: correlations of the big five factors with leadership in the two settings. Source: Data from Judge and Ilies (2002).
Figure 4.6 Personality and transformational leadership: correlations of the big five factors with transformational leadership. Source: Data from Judge and Bono (2000).
Dependability

Dependability has several strands:
• Commendable behaviour is defined by letters of recommendation, letters of reprimand, disciplinary actions, demotions, involuntary terminations, and ratings of effort and hard work.
• Non-delinquency means avoiding actual theft, conviction or imprisonment.
• Non-substance abuse means not abusing alcohol and drugs.

Dependability is sometimes called (absence of) counterproductive behaviour. Personality questionnaires predict commendable behaviour and non-substance abuse fairly well (correlations of 0.20–0.39), and non-delinquency very well (correlations up to 0.52). As might be expected, it is the conscientiousness factor in personality questionnaires that best predicts dependability at work. These results indicate that personality inventories can be genuinely useful in selection.
Citizenship

Organisational citizenship means volunteering to do things not in the job description, helping others, following rules willingly and publicly supporting the organisation—all highly desirable behaviour in employees. In fact most organisations probably could not function at all without ‘citizenship’; industrial action such as ‘working to rule’ often demonstrates this. Citizenship shows modest correlations with extraversion, agreeableness and conscientiousness, giving the employer some indication of how to select better ‘citizens’.
Absence

Absence is a major problem for many employers, so the possibility of selecting applicants less likely to be absent is attractive. Research on university staff finds one ‘obvious’ result—conscientious employees were absent less, showing their ‘dutiful, rule-bound and reliable nature’—and one less obvious result—extraverts were absent more, consistent with their ‘carefree, excitement seeking, hedonistic nature’ (Judge et al., 1997). The relationship between absence and conscientiousness is small, but potentially valuable in the longer term; the employer who reduces absence by 5% by selecting highly conscientious employees could achieve major savings across a large work force.
Training Performance

The openness personality factor is linked to training performance, which makes sense: open-minded people welcome the opportunity to gain new knowledge and skills. Extraversion also predicts rated training performance, which is more surprising. Perhaps extraverts like new experiences, or meeting new people, and are better able to profit from them.
Recommendation: Check the validity data for any inventory you might consider using. Is there any? What has the inventory succeeded in predicting in the way of workplace behaviour?
Can Do versus Will Do?

Much of the research on personality and work can be summed up by the can do/will do distinction. Conventional measures of work proficiency emphasise the can-do side of work. This is not well predicted by personality questionnaires. Reliability, citizenship and absence, on the other hand, are will-do aspects: how motivated employees are to try hard, follow rules and systems, and simply to be there. These aspects of work behaviour are better predicted by personality questionnaires. The can-do/will-do distinction is best illustrated by the US Army’s massive testing programme (Table 4.4).

Table 4.4 ‘Can-do’ and ‘will-do’ at work: personality and mental ability

                                            ABLE (personality test)    ASVAB (ability test)
‘Will-do’
  Effort and leadership                               ✓
  Personal discipline                                 ✓
  Physical fitness and military bearing               ✓
‘Can-do’
  Technical proficiency                                                         ✓
  General soldiering proficiency                                                ✓

Their personality questionnaire, called ABLE, predicts
three will-do criteria—effort and leadership, personal discipline, physical fitness and military bearing—but not the two can-do criteria—technical proficiency and general soldiering proficiency. The two aspects of proficiency are better predicted by the Army’s ability test (Armed Services Vocational Aptitude Battery).
Recommendation: Consider using personality tests to select for motivational ‘will-do’ aspects of behaviour at work.
PERSONALITY AND TEAMWORK
While there are hundreds of studies linking personality to work performance in individuals, there is a serious lack of research that has looked at teams. We have only two pieces of research on team personality and performance, one in HRM teams in an American department store chain (Neuman and Wright, 1999), and one in assembly workers (Barrick et al., 1998). These studies relate team personality to team effectiveness. Team personality can be defined simply as the average, e.g. the average of the conscientiousness scores of the individual team members. At the average personality level, team performance relates most strongly to team agreeableness, then to team conscientiousness. Note that team agreeableness is linked to team performance, even though agreeableness is not related to work performance at the individual level. In assembly work average (low) neuroticism also goes with team effectiveness (Figure 4.7).

Figure 4.7 Personality and team performance in HR and assembly work. Source: data from Barrick et al. (1998).

However, there is more to team personality than just its average. An average level of agreeableness could mean that everyone is averagely agreeable, or could conceal a mixture of very agreeable and very disagreeable people. We need to look at variation within the team, and especially at very high or low scores. In assembly work, teams
with no low scorers on extraversion, conscientiousness or agreeableness performed better. One disagreeable person in the team reduces its effectiveness, as does one introvert or one person low in conscientiousness. Where this happens, testing personality may be able to make a far bigger contribution to productivity. Work psychologists now have a full agenda of further research, to find out where extreme scores matter, and why. Another approach to teamwork argues that certain combinations of personalities create a more effective team. The Belbin Team Role Self Perception Test assigns people to one of eight team roles, e.g. shaper or company worker, and argues that an effective team needs the right mix of roles. If everyone wants to take charge and no one wants to be a ‘company worker’, the team will fail. Preliminary research has shown that balanced teams are more effective.
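Both ways of describing team personality used above, the team average and the presence of a single low scorer, are simple to compute from individual questionnaire scores. A minimal Python sketch with invented scores:

```python
# Two ways of scoring 'team personality' from individual conscientiousness scores.
# The scores are invented, on a 1-10 scale, purely for illustration.

teams = {
    "Team A": [7, 7, 6, 8, 7],   # everyone moderately conscientious
    "Team B": [9, 9, 8, 9, 2],   # similar average, but one very low scorer
}

for name, scores in teams.items():
    mean = sum(scores) / len(scores)
    lowest = min(scores)
    print(f"{name}: average = {mean:.1f}, lowest member = {lowest}")

# The averages are similar, but only Team B contains the kind of low scorer that
# the assembly-work research links to poorer team performance.
```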
Recommendation: Where teamwork is important, consider the personality of the team as a whole.
CAREER SUCCESS
Some research has linked the big five to career success in managers. Career success is defined by salary level and by status—the number of rungs of the corporate ladder between you and the Managing Director. Career success reflects performance over a much longer time span than conventional indices of job proficiency. A large survey in America and Europe finds some differences between the two sides of the Atlantic (Boudreau et al., 2000). Career success in the USA is linked to low neuroticism and low agreeableness, but not to conscientiousness. In Europe, however, high extraversion and low agreeableness go with career success, but again not with conscientiousness (even though conscientiousness is the best predictor of work performance overall) (Figure 4.8).

Figure 4.8 Personality and career success in Europe and the USA. Source: data from Judge et al. (1999) and Boudreau et al. (2000).
The most convincing study is a long-term follow-up in California (Judge et al., 1999). Personality was assessed in childhood and adolescence, then related to occupational status and income when the people had reached their fifties. Conscientious and extravert people were more successful, while anxious people tended to be less successful. The four factors combined correlated 0.54 with career success, suggesting personality is genuinely important for success at work. All the research finds that less agreeable people are more successful, suggesting perhaps that it does not pay to be too nice to others if you want to get ahead. Openness seems completely unrelated to career success. The studies differ on the role of conscientiousness in career success: the study of mature managers found no link, but the long-term follow-up found a strong link. Perhaps conscientiousness is important at earlier stages of a manager's career.
INCREMENTAL VALIDITY Two analyses suggest personality inventories will have considerable incremental validity over mental ability tests (Salgado, 1998; Schmidt and Hunter, 1998). (Recall that incremental validity means that a second test improves on the prediction made by a first test, which will not happen if both tests measure the same thing.) Both conscientiousness and mental ability are related to work performance, but not to each other, so it follows logically that conscientiousness will have incremental validity on mental ability (and vice versa). Conscientiousness should increase validity by around 18%, making the combination of conscientiousness and mental ability one of the more useful. Review of European data confirms this, and concludes that neuroticism too will have incremental validity.
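To see why two predictors that are unrelated to each other combine so well, consider the standard formula for the multiple correlation of two predictors. The sketch below uses round validity figures of the kind reported in the meta-analytic literature (0.51 for mental ability, 0.31 for conscientiousness, and a near-zero correlation between the two); these are illustrative assumptions, not figures taken from this chapter.

```python
import math

def multiple_r(r1, r2, r12):
    # Multiple correlation of a criterion with two predictors:
    # R^2 = (r1^2 + r2^2 - 2*r1*r2*r12) / (1 - r12^2)
    return math.sqrt((r1 ** 2 + r2 ** 2 - 2 * r1 * r2 * r12) / (1 - r12 ** 2))

r_ability = 0.51   # validity of mental ability (illustrative)
r_consc = 0.31     # validity of conscientiousness (illustrative)
r_between = 0.0    # assumed correlation between the two predictors

combined = multiple_r(r_ability, r_consc, r_between)
gain = (combined - r_ability) / r_ability

print(f"Ability alone: r = {r_ability:.2f}")
print(f"Ability plus conscientiousness: R = {combined:.2f}")
print(f"Incremental gain: about {gain:.0%}")
```

With these inputs the combined validity is about 0.60, a gain of roughly 17%, in the same region as the 18% figure quoted above.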
Recommendation: Try to check the incremental validity data for any inventory you might consider using. Will the inventory add anything to your other selection methods?
PERSONALITY SCREENING TESTS Question 4: Does the Applicant Have Any Problems that Will Interfere with Work? Personality questionnaires can be used like the driving test: not to select the best, but to exclude the unsuitable: • In the USA personality inventories are widely used to screen out candidates who are psychologically unfit for police work. • Many American employers try to screen employees for dishonesty. • Many American employers also try to screen employees for violent or criminal tendencies in case they cause harm to the organisation’s customers.
• American employers once tried to use personality tests to screen out union activists—'thugs and agitators'—working on the curious assumption that joining a union was a sign of poor adjustment.
• British Government reports, the Warner Report and the Waterhouse Report, have recommended psychological testing for anyone whose work brings them into contact with children, to identify possible child abusers. Personality questionnaires cannot detect sexual interest in children specifically, but could screen out poorly adjusted persons who may be less able to resist deviant impulses.
Honesty Testing
Surveys indicate that employee theft is a widespread problem, which may contribute to 30% of company failures in the USA. Questionnaire-format honesty tests have become popular in the USA in recent years, to try to screen out potential thieves, especially since use of the polygraph or lie detector was restricted. A massive analysis of 665 honesty test studies in North America, covering over 500 000 persons, produced some surprising results:
• Honesty tests work: they predict dishonesty fairly well (a correlation of 0.39).
• Honesty tests also predict general job proficiency fairly well.
• Mainstream psychological tests are no more successful than specialist honesty tests. These results are disturbing for professional psychologists, who have been warning the public for years that writing psychological tests is a specialised task best left to the experts, i.e. psychologists.
• Honesty tests work for all levels of employee, not just for lower paid workers, as is sometimes supposed.
Critics express some cautions about honesty testing:
• Only a few tests have been researched thoroughly; the rest remain an unknown quantity.
• Many studies are carried out by the test's publisher, which tends to carry less weight than work by independent researchers.
• Most studies use definitions of dishonesty that could be questioned. Many define theft by self-reports of past theft, which supposes that thieves will be honest about stealing. Others infer theft from stock 'shrinkage' or lower than expected takings. Only seven studies, with a total of 2500 persons, used actual theft as the outcome predicted, and these achieved much poorer results.
(Honesty testing raises some interesting questions. Is detected theft really the ideal criterion? One could argue that only incompetent thieves are found out, and that what employers really want is a test that will identify the clever thieves who do not get caught.)
Problems with Screening False Positives Honesty testing generates a high rate of false positives: people identified as dishonest by the test, who do not commit any dishonest acts (Figure 4.9). Screening’s other failure is the false negative—dishonest people whom the test fails to identify. Some critics think false positives present a strong argument for not using honesty tests, because a false positive means an honest person is wrongly accused of dishonesty. False positives will always happen because no test will ever be perfectly accurate. Other experts argue that the false positive issue is irrelevant; if the test works at all, then using it will benefit the organisation by excluding some dishonest persons. Not screening implies accepting more dishonest persons, and so excluding some honest individuals who might otherwise have got the job. On this argument, screening by and large benefits honest applicants. Other critics argue that honesty tests may be unfair to certain groups of people. A reformed criminal who admits to past dishonesty is likely to fail an honesty test and be deprived of employment. Honesty tests that reject people who look ‘too good to be true’ may unfairly reject people with very high moral and/or religious standards.
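The size of the false positive problem depends mainly on the base rate. A minimal sketch, using an assumed base rate of dishonesty and assumed detection rates rather than figures for any particular test:

```python
# Confusion-matrix arithmetic for an honesty screening test.
# The base rate and accuracy figures are assumptions for illustration only.

applicants = 1000
base_rate = 0.10         # assume 10% of applicants would actually steal
hit_rate = 0.70          # proportion of dishonest applicants the test flags (assumed)
false_alarm_rate = 0.20  # proportion of honest applicants the test flags (assumed)

dishonest = applicants * base_rate
honest = applicants - dishonest

true_positives = dishonest * hit_rate
false_negatives = dishonest - true_positives
false_positives = honest * false_alarm_rate

flagged = true_positives + false_positives
print(f"Flagged as dishonest: {flagged:.0f}")
print(f"  actually dishonest (true positives): {true_positives:.0f}")
print(f"  actually honest (false positives):   {false_positives:.0f}")
print(f"Dishonest applicants missed (false negatives): {false_negatives:.0f}")
```

With these assumptions 250 applicants are flagged, but 180 of them are honest: most of the people who 'fail' the test have done nothing wrong, which is the critics' point.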
Recommendation: Think carefully before introducing honesty tests. Examine the evidence that the test can detect dishonest applicants. Ask who will ‘fail’ the test, and why, and what they will think of your organisation.
                              Person
Test result          Dishonest           Honest
  Dishonest          True positive       False positive
  Honest             False negative      True negative

Figure 4.9 The four possible outcomes of a screening test
THE PROBLEM OF FAKING
Personality inventories are not tests but self-reports; a test has right and wrong answers but a self-report does not. Many inventories look fairly transparent to critics, who argue that no one applying for a sales job is likely to say 'true' to 'I don't much like talking to strangers'. Personality inventories are fakable, in the sense that people directed to maximise their chances of getting a job do generate much 'better' profiles. These demonstrations, however, are laboratory experiments using students: is faking good a problem in real-life selection? Early research—widely quoted—thought not: only a minority of real applicants fake good, and even they faked less than people in laboratory directed faking research (Dunnette et al., 1962). These comforting conclusions were based on very few studies, with very small samples, which identifies this as an under-researched issue. There are many lines of defence against faking. Some try to modify the way the test is administered:
• Rapport. The tester tries to persuade applicants it is not in their interests to fake, because getting a job for which one's personality is not really suited may ultimately result in unhappiness, failure etc. This argument may not have much impact on applicants who are desperate to get any job, ideal or otherwise.
• Faking verboten. At the other extreme, military testers have sometimes warned people that faking will be severely punished.
• Faking will be found out. Faking can be reduced by warning people that it will be detected, and may prevent them getting the job. This approach could create problems if used in routine selection.
• Faking will be challenged. When applicants were told they would be required to defend their answers in a subsequent interview, faking was dramatically reduced. If the selection really includes such an interview, no unethical deception is involved.
• Administer by computer. People are more honest about their health when giving information to a computer, not a person. However, computer administration of personality tests does not reduce faking good.
Other approaches alter the way the test is written.
Subtle Questions ‘Do you have confidence in your own abilities?’ clearly seems to suggest a ‘right’ answer when selecting managers. But what is assessed by question 2 in Figure 4.10, about conserving energy supplies? And what is the ‘right’ answer to give? Some test authors argue that you cannot work out what their items are assessing, and that even when you are told you cannot work out what answer gets a ‘better’ score. The energy conservation question is part of a dominance scale, but what is the ‘dominant’ answer? Critics say inventory questions can be divided into the unsubtle that work, and the subtle that do not.
Subtle questions
1. I sometimes have trouble remembering where I put things.
2. We should conserve energy supplies so there will be enough left for future generations.

Forced-choice questions
3. Which are you better at doing: meeting new clients OR generating new ideas?
4. Which would you rather do: tell someone they are being terminated OR fill in your income tax return?

Lie scale questions
5. I have never taken anything, however small or unimportant, that did not belong to me.
6. I never discuss other people behind their backs.

Defensiveness scale questions
7. I sometimes have strange or peculiar thoughts.
8. There are days when I just can't get started.

Right/wrong answer questions
9. Which of the following is not a type of firearm: Gatling; Sterling; Gresley; Enfield; FN.
10. What is the likelihood of catching rabies in Britain? 1 in 10 000 000; 1 in 100 000; 1 in 1000; 1 in 100.

Figure 4.10 Types of questions used to try to deal with faking good in personality questionnaires

Forced Choice
Applicants must choose between pairs of equally flattering or unflattering statements (questions 3 and 4 in Figure 4.10). Some recent research suggests forced choice may solve the faking good problem (Martin et al., 2002). For example, students can fake a manager profile on the rating form of SHL's OPQ, but not on the forced-choice OPQ. However, forced choice is often not suitable for selection tests. Within
each forced choice, one answer, for example, scores for extraversion, while the other scores for anxiety. This way of scoring means it is impossible to get all very high scores or all very low scores (whereas this is entirely possible on the 16PF, for example). With forced choice, the person who wants to express a strong preference for one trait must express less interest in another. Forced-choice measures cannot usually conclude that person A is more interested in dominating others than is person B—but this is the comparison selectors usually want to make. The typical forced-choice measure can conclude that a person is more interested in dominating others than in helping others, which is useful in career or development counselling, but not in selection.
Recommendation: Look carefully at forced-choice questionnaires and check whether the way they are scored allows you to compare one person with another.
Control keys Many inventories contain lie scales or faking good scales, more politely known as social desirability scales (questions 5 and 6 in Figure 4.10). Lie scales are lists of questions that allow people to deny common faults or claim uncommon virtues. A high score alerts test users to the possibility of less than total candour, but then presents them with a difficult decision: to discard the applicant altogether or to rely on other evidence. Discarding applicants is not always possible in practice. Coaching in how to complete personality tests, which is widely sold in the USA, helps people avoid being detected by faking good scales.
Recommendation: Check whether the questionnaire has a social desirability or lie scale. Do not use a questionnaire for selection unless it has one.
Correction for Defensiveness Some questionnaires use special scales to try to correct the rest of the person’s profile. The MMPI’s K key assesses defensiveness, then adds varying proportions of the K score to other scores to estimate the shortcomings people would have admitted if they had been more frank. A critic noted that ‘if the subject lies to the tester there is no way to convert the lies into truth’.
Change the Questionnaire into a Real Test Cattell’s Motivation Analysis Test uses several novel question formats, which are probably quite hard to fake. The Information subtest (question 9 in Figure 4.10)
consists of factual questions, with right and wrong answers, on the principle that people know more about things that matter to them; an aggressive person may know more about firearms. The Estimates subtest (question 10 in Figure 4.10) assumes that people’s needs and motives shape their perceptions, so a fearful person may overestimate the risk of disease.
Does Faking Affect Validity? It seems intuitively obvious that untrue answers will not predict work performance, but it has been claimed that faking does not reduce validity. However, recent studies cast doubt on this comforting conclusion (Ellingson et al., 1999; Hough, 1998). • Some applicants fake a little, some fake a lot, and some not at all, so faking good is not a simple constant error that can be subtracted to reveal the true profile. • Where only a small proportion of applicants are appointed, they tend to be the ones who faked good most. So knowing that most applicants tell the truth is not very comforting.
Can Correction Keys Restore Validity? One researcher obtained both faked and unfaked personality questionnaires from the same people, so could answer the question: do correction keys succeed in turning lies back into truth? At one level the answer was ‘yes’. Faked tests were ‘better’ on average than unfaked, but the corrected faked ones were no ‘better’ on average than the unfaked. At the individual level, however, the answer was ‘no’. Faking changed the rank order of candidates (compared with a rank order based on unfaked completion), and so changed who got the job. Correcting for faking did not change the rank order back to the unfaked order, so did not ensure the ‘right’ people were appointed after all.
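A small simulation makes the rank-order point concrete. It is only a sketch under crude assumptions (a true score plus a varying amount of faking, and a correction that subtracts the average amount of faking); it is not the design of the study described above.

```python
import random

random.seed(1)
n = 200

# True trait scores, plus a different amount of faking for each applicant.
true = [random.gauss(50, 10) for _ in range(n)]
faking = [max(0.0, random.gauss(5, 5)) for _ in range(n)]  # some fake a lot, some not at all
observed = [t + f for t, f in zip(true, faking)]

# A 'correction' that subtracts the average amount of faking from everyone.
mean_faking = sum(faking) / n
corrected = [o - mean_faking for o in observed]

def top_20(scores):
    return set(sorted(range(n), key=lambda i: scores[i], reverse=True)[:20])

right_people = top_20(true)
print("Overlap with the 'right' top 20 (based on true scores):")
print("  faked scores:    ", len(right_people & top_20(observed)))
print("  corrected scores:", len(right_people & top_20(corrected)))
# Subtracting a constant cannot change the rank order, so the 'corrected' shortlist
# is identical to the faked one: the correction restores the average, not the ranking.
```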
Questionnaire Scores as Hypothesis
The test user should regard the scores the questionnaire generates as nothing more than hypotheses that need further information to confirm them:
• In counselling or development, the hypothesis that Smith would make a good line manager should be tested against Smith's own view, and against Smith's own account of his/her success at managing people or projects in the past.
• In selection, the risk of faking good makes it especially important to seek out confirmation of, for example, whether the person is original and flexible, from past achievements, from what referees say, or from what Smith does in the group discussion or the written exercise of an assessment centre.
Recommendation: Never rely on inventory scores alone to make important personnel decisions.
INVENTORIES AND THE LAW Personality inventories have encountered surprisingly little trouble with the law, at least compared with ability tests.
Gender Differences Early versions of the Strong Interest Inventory had separate male and female question books, appropriately printed on pink or blue paper. Two American Civil Rights Acts later, gender differences still create potential problems for the occupational tester. There are gender differences in many inventory scores. Men score higher on scales assessing forcefulness, while women score higher on scales assessing responsibility. (Note that we say differences in inventory scores. Whether men ‘really’ are more forceful and women ‘really’ more responsible is another question, well beyond the scope of this book. Inventories may reflect differences in how men and women see themselves. For example, it is commonly argued that men are more reluctant to admit weaknesses, so try to appear tougher.) Because gender differences are found on many scales, most questionnaires used to provide separate normative data for male and female. Today most questionnaires also provide pooled gender norms, which do not distinguish male from female. Gender differences create a dilemma for the occupational tester. Using separate norms for men and women is illegal in the USA, and thought inadvisable in Britain, because calculating scores differently for men and women could be classed as direct discrimination. On the other hand, using pooled gender norms may create adverse impact. For example, using a cut-off score of 60 on CPI Dominance to select managers would exclude more women than men. On this analysis, there is no safe way to use personality tests in selection. In practice there have not been many complaints about gender and personality tests, because the male–female differences are relatively small, and because personality tests are not usually used with mechanistic cut-offs. It is in any case not good practice to base selection decisions on test data alone.
Ethnicity Analyses of differences between ethnic groups in the USA suggest personality tests will not create the major adverse impact problems found with tests of mental ability. There do not seem to be large consistent differences between white and other Americans in personality questionnaire scores. Evidence for Europe is less definite.
Quite large differences in neuroticism and extraversion are found between some immigrant groups in the Netherlands and the native Dutch. British research finds few differences between white and Afro-Caribbean, while Asian and Chinese differences generally ‘favoured’ the minorities, showing them to score as more conscientious than white British (and so more likely to be selected). However, this research used college students, who may not be typical of people in general.
Recommendation: Be prepared for the possibility of gender and ethnicity differences. Check your data for gender and ethnicity differences as soon as you have enough.
The Potential Danger of Multi-Score Inventories Most questionnaires come as fixed packages. The 16PF assesses all 16 factors every time it is used; the test user cannot decide to use only dominance, ego strength and shrewdness, or to leave out suspicion and guilt proneness. This creates a big danger for selectors. Suppose your job analysis has identified dominance, ego strength and shrewdness as the (only) personality characteristics needed for the job. The other 13 factors on 16PF are not job related, and should not be used to make the decision. But having the other 13 scores in front of them, the selection team may well use them anyway, which could create serious problems for the employer. Someone complains about the selection, and finds that scores were used which were not job related; the employer cannot justify their use. Inventories that describe people’s weakness and poor adjustment can be particularly dangerous: someone applying for a clerical job will be justifiably annoyed to find their level of fantasy aggression or self-doubt has been assessed.
Recommendation: Do not assess aspects of personality that you do not need to (and rely on a good job analysis to tell you if you need to).
OTHER PROBLEMS WITH PERSONALITY INVENTORIES Lack of Insight The person completing the questionnaire may describe him/herself as very dominant, when by any other evidence he/she is rather ineffectual, not because he/she is consciously faking good, but because he/she really sees him/herself that way. We have no systematic data on how many people lack self-insight in this way.
Providing a Work Context? Inventories usually assess personality in general, which may be broader than is needed in selection. We do not need to know what someone is like at home, and it might be considered an intrusion to ask. An experimental version of the NEO inventory provides a context by inserting ‘at work’ into every item: ‘I take great care with detail at work’. This should make the test more accurate for selection. A pilot study on airline customer service supervisors showed that the ‘contextualised’ NEO did give a better prediction of work performance (Robie et al., 2000). However, setting NEO in a work context also resulted in higher scores—perhaps because people really are more careful at work—so new normative data will be needed or we risk overestimating conscientiousness.
Invasion of Privacy By their very nature, personality tests tend to invade privacy; it is difficult to assess personality without asking personal questions. In the Soroka v. Dayton-Hudson case the applicant claimed the combined MMPI and CPI inventory asked intrusive questions about politics and religion, contrary to the Constitution of the State of California; the case was settled out of court so no definitive ruling emerged. Another Californian case, Staples et al. v. Rent-a-Center, also complained of intrusive questions in the MMPI, and about computer generated reports that made ‘gross and unfounded generalisations’, e.g. ‘[the candidate] tends to be restless and impatient, should reduce caffeine and nicotine consumption and drink more water’. Employers need to read thoroughly any questionnaire they plan to use, and ask themselves ‘Would I be unhappy about answering any of these questions?’.
Recommendation: Before you use any personality test, read it all the way through, and check for items that might offend anyone.
The 'Method Variance' Problem
Table 4.5 shows the results of assessing people with the 16PF, by conventional questionnaire and by ratings by others who know them well. The results are fairly worrying:
• Warmth assessed by questionnaire and warmth assessed by others' ratings should agree, but they correlate only 0.11—hardly at all. The correlations in Table 4.5 for the same trait measured in these two ways are sometimes quite low.
• Liveliness and vigilance assessed by self-report should correlate less well, because they are different traits, but Table 4.5 shows they correlate very well—0.79. The correlations for different traits both measured by questionnaire are sometimes quite high.
Table 4.5 Data for four personality factors, assessed by questionnaire and rating

                        Questionnaire                    Rating
                        A      F      H      L           A      F      H      L
Questionnaire
  A—warmth
  F—liveliness          0.44
  H—social boldness     0.43   0.78
  L—vigilance           0.40   0.79   0.71
Rating
  A—warmth              0.11   0.26   0.28   0.29
  F—liveliness          0.18   0.49   0.53   0.29        0.14
  H—social boldness     0.16   0.60   0.59   0.52        0.40   0.49
  L—vigilance           0.27   0.30   0.19   0.25        0.25   0.03   0.29

Source: Data from Becker (1960).
Questionnaire measurements of different traits often correlate very highly; similarly ratings of different traits often correlate very highly. This effect—called method variance—means that assessors should ideally measure every trait by two different types of measure: multi-trait multi-method measurement. This is not always easy in practice, and does not always seem to work, as research on assessment centres (Chapter 8) has found.
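Using the correlations in Table 4.5, a short sketch shows the method-variance pattern numerically: on average, the same trait measured by two different methods agrees less well than different traits measured by the same questionnaire. The figures below are taken from the table; grouping them this way is the standard multi-trait multi-method reading of it.

```python
# Correlations from Table 4.5 (Becker, 1960), grouped the multi-trait
# multi-method way: same trait/different method versus different trait/same method.

same_trait_cross_method = [0.11, 0.49, 0.59, 0.25]  # A, F, H, L: questionnaire with rating
different_trait_same_method = {
    "questionnaire": [0.44, 0.43, 0.40, 0.78, 0.79, 0.71],
    "rating":        [0.14, 0.40, 0.25, 0.49, 0.03, 0.29],
}

def mean(values):
    return sum(values) / len(values)

print(f"Average same-trait, cross-method correlation: {mean(same_trait_cross_method):.2f}")
for method, rs in different_trait_same_method.items():
    print(f"Average different-trait correlation within {method}s: {mean(rs):.2f}")
# Ideally the first figure would be the largest; here it is far smaller than the
# within-questionnaire figure, which is the method-variance problem in a nutshell.
```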
SURVEY OF (SOME) INVENTORIES Inventories are listed and reviewed in the US Mental Measurements Yearbooks (MMY). MMY reviews discuss size and adequacy of standardisation sample, reliability, validity and whether the manual is adequate. MMY reviews are very critical even of major tests. The housing sector has recently published a guide to tests suitable for selection in the UK (Howard, 2003). • 16 PF. A well-known test that assesses 16 personality factors, the 16PF does not fit within the ‘big five’ framework. The current edition of the 16PF—16PF5— has good internal and re-test reliability, as well as good US and UK normative data. 16PF5 is not all that similar to earlier versions of 16PF, and should not be regarded as interchangeable with them. • California Psychological Inventory (CPI). The CPI assesses 20 main personality traits. The CPI is the only major inventory in current use that relies primarily on empirical keying rather than factor analysis. The CPI does not fit within the ‘big five’ framework. The CPI has recent British normative data. • Occupational Personality Questionnaires (OPQ). A family of inventories, varying in length and format. The longest versions (OPQ32) measure 32 aspects of personality; the shorter Factor, Octagon and Pentagon versions measure 14, 8 and 5 aspects, respectively. OPQ uses three formats: endorsement, forced choice and five-point rating. The OPQ has good British normative data.
• NEO. The short NEO-FFI measures the ‘big five’, using 12 items per factor with a five-point rating format. The longer NEO-PI also measures 6 facets within each main factor, and contains 240 items. The NEO lacks any social desirability scale, and works on the assumption that people will tell the truth about themselves; they may if they are seeking self-enlightenment, but may not if they are seeking employment. • Hogan Personality Inventory. The HPI has 206 endorsement format items. The HPI uses an extended big-five model that divides extraversion and openness into two factors each. • Global Personality Inventory. The GPI measures the big five and 32 facets. GPI was simultaneously written in 14 countries—the USA, the UK, France, Belgium, Sweden, Germany, Spain, Norway, China, Japan, Singapore, Korea, Argentina and Colombia—to ensure item content was truly pan-cultural. The detailed development was performed in parallel in English, Spanish and Chinese. • Eysenck Personality Questionnaire. Hans Eysenck developed the Eysenck Personality Scales over 40 years. The research started in 1952 when he worked at the Maudsley Hospital and developed the Maudsley Medical Questionnaire. This was a 40-item measure of N (neuroticism or emotionality). Other scales were added over the 40 years, E (extraversion–introversion), L (the ‘lie’ scale to measure dissimulation, which we have called social desirability, in line with other instruments), P (psychoticism, more suitably renamed tough-mindedness), empathy, impulsiveness and venturesome. We have designed an interpretive proforma for occupational application, which covers all seven scales and includes the ‘lie’ scale as a measure of social desirability. • Goldberg’s item bank. Lewis Goldberg offers a bank of personality questionnaire items that can be combined in different sets to assess the big five or to provide analogues to many popular inventories. Goldberg’s item bank has an unusual feature: it is in the public domain, and can be used without restriction or payment by anyone. An item bank enables the selector to assess only those attributes identified as relevant by job analysis. Goldberg’s item bank is accessible through his IPIP website, the address of which is given on p. 95. As Goldberg notes, people should not use personality questionnaires unless they are competent to do so.
ALTERNATIVES TO THE INVENTORY The personality inventory is quick and (usually) fairly cheap, but has problems with faking, and with method variance. What other methods of assessing personality can the selector use? Other formal personality tests include projective tests, laboratory tests and archive data.
Projective Tests What are X’s motives, complexes and defences? Projective tests assume that people project their personality into what they see in drawings or inkblots, or how they complete stories. Projective tests are supposed to get around people’s defences, and
Figure 4.11 Drawing similar to those used in the Thematic Apperception Test
so prevent faking. The Thematic Apperception Test (TAT) is a set of pictures carefully chosen both for their suggestive content and their vagueness (Figure 4.11). The applicant describes ‘what led up to the event shown in the picture, what is happening, what the characters are thinking and feeling, and what the outcome will be’, and projects into the story his/her own ‘dominant drives, emotions, sentiments, complexes and conflicts’ (McClelland, 1971). The TAT is intended to assess motives, such as the need to achieve, or the need for other peoples’ approval and presence. The Rorschach test uses selected inkblots (Figure 4.12), which people can
Figure 4.12 Inkblot similar to those used in the Rorschach test
interpret in psychologically meaningful ways. Projective tests require considerable skill and experience to use and score. Neither TAT nor Rorschach is very successful as a selection test. The Defence Mechanism Test (DMT), devised by Swedish psychologists, also uses ambiguous drawings containing threatening images, and makes them more ambiguous still by showing them for only a split second. The DMT is scored for a whole range of defence mechanisms, such as denial (failing to see the threatening image) or repression (seeing the threat as something non-threatening). This test has proved quite successful in selecting military pilots, perhaps because people whose minds are full of conflict and defence cannot think fast enough to fly military aircraft.
Laboratory Tests
How does X react physically to threat? 'I shall count to ten. When I get to ten you will receive a painful electric shock.' Most people start getting physically anxious by about number four; psychopaths do not react much, if at all (Figure 4.13). These reactions are easy to measure and hard to fake, so employers could use the threatened shock test to screen out psychopaths, who are likely to steal, attack people verbally or physically, break rules etc. This test clearly is not a practical proposition because threatening applicants with electric shocks is completely unacceptable. A less obvious problem is a high false positive rate. The shock test picks out psychopaths quite well, but also wrongly identifies as psychopaths many people who are not psychopathic.

Figure 4.13 Physical reactions of psychopaths and non-psychopaths to the threat of an electric shock

Archive Data
What do the files say about X? When deciding whether it is safe to release violent offenders from prison, information from school records about disruptive behaviour
in childhood helps make accurate predictions, and is quite unfakable. Most prospective employers would probably find some useful information in previous employers' files on candidates: attendance, sickness, performance appraisals etc. However, concerns over privacy make it unlikely this sort of information could be used in selection decisions. Personality can also be assessed by selection methods reviewed in other chapters:
• Interview: how does X perform in interview? (Chapters 9 and 10).
• Peer ratings and references: what do other people think of X? (Chapter 6).
• Biodata: is X the sort of person who can sell life insurance? (Chapter 5).
• Situational/behavioural tests: can X exert influence in a group discussion? Assessment centres use group discussions and role plays to assess how people relate to each other, perform in groups etc. (Chapter 8).
USEFUL WEBSITES
http://ipip.ori.org/ipip   Goldberg's personality item bank.
www.reidlondonhouse.com   Leading US publisher of honesty tests.
www.parinc.com   Psychological Assessment Resources Inc., US publisher of personality tests.
CASE STUDY This case study considers an application of psychological testing in selection. The results of a set of personality scales are used, together with other information about the applicant from ability tests, application form, CV, interview, references and a brief presentation. The ability test data were reviewed in the case study in Chapter 3.
Background to the Selection Exercise Newpharm, the pharmaceutical company based in the south east, needs to recruit a Learning Resources Officer, to promote and deliver learning and development among the 750 employees working at the production site, which also doubles as administrative and distribution headquarters. The person specification prepared by HR Manager John White (see Chapter 3 case study) is now extended as follows.
Person specification                        Assessment method

Relations with others
  Self-awareness                            Bar-On Emotional Quotient Inventory—intrapersonal
  Empathy                                   Bar-On Emotional Quotient Inventory—interpersonal

Interests
  Outside interests involving people        Interview

Personality
  Extravert                                 Eysenck PQ—extraversion
  Sociable                                  Eysenck PQ—sociability
  Calm and self-controlled                  Eysenck PQ—impulsiveness (low)
  Confident, free of anxiety                Eysenck PQ—emotionality (low)

Circumstances
  Able to live close (20 miles or so)       Interview
  Travel in UK and abroad                   Interview
The results are as follows:
Newpharm Selection Centre for position of Learning Resources Officer
Candidate: Alec Newman

Assessment results

Person specification              Assessment method             Result

Relations with others
  Self-awareness                  Bar-On EQI—intrapersonal      50th percentile
  Empathy                         Bar-On EQI—interpersonal      60th percentile

Interests
  People                          Interview                     Wide range of social activities

Personality
  Extravert                       Eysenck PQ—extraversion       75th percentile
  Sociable                        Eysenck PQ—sociability        60th percentile
  Calm and self-controlled        Eysenck PQ—impulsiveness      15th percentile
  Free of anxiety                 Eysenck PQ—emotionality       20th percentile

Circumstances
  Able to live close              Interview                     Satisfactory
  Travel in UK and abroad         Interview                     Satisfactory
Figure 4.14 Alec Newman's personality assessment (EPQ short scale with IVE profile chart, covering psychoticism, extraversion, neuroticism, impulsiveness, venturesomeness, empathy and the lie/social desirability scale)
On the EPQ Alec Newman reports himself to be a stable-extravert. Sociable, caring for people, he should fit into a team. As an extravert, Alec will make himself known, seek challenge and enjoy change. He is not particularly impulsive but may take a risk at times. He can empathise with others, though not overly so. His social desirability score indicates an honest and open approach to this questionnaire (Figure 4.14). The assessment data for Alec Newman indicate that he is generally a good match for the post of Learning Resources Officer at Newpharm, although his numerical ability, assessed to meet the budgeting and cost part of the person specification, is rather low.
CHAPTER 5
Sifting and Screening
OVERVIEW • Sifting means sorting applications into proceed or reject; sifting is often done inefficiently or unfairly. • Conventional paper application methods can be improved. • Computers and the Internet may greatly change the application process. • There are two biographical approaches: weighted application blanks, which are scored from the application form, and biodata, which are stand-alone questionnaires. • We describe the main stages in generating a biographical assessment. • We compare different approaches to devising biographical questions. • Biographical measures can predict work performance, and also tenure, training success, promotion, absence and creativity. • Biodata can be purely empirical, or can be guided by either a theory of e.g. eliteness motivation, or a relevant construct such as stress tolerance. • Biodata may be compromised by impression management or outright faking. • Biodata rarely seem to attract litigation, but can look unfair or arbitrary to candidates. • Biodata do create some adverse impact on American minorities.
INTRODUCTION The focus of this chapter is getting information about the applicant, mostly from the applicant, via paperwork the applicant completes, rather than by interview, tests etc. This chapter covers in particular: • traditional application forms, CVs/résumés; • online application systems; • biodata.
These approaches have one feature in common—they act as a first sift, and are used to decide who to assess in more depth. As the employer may need to process thousands of applications, sifting needs to be quick—but also to have some accuracy. There are three main issues: • what information to collect; • how to collect it, on paper or online; • what to do with it, i.e. how to make the decision.
WHAT INFORMATION TO GET
Employers generally use one or more of five approaches at the first sift stage:
1 traditional application form (AF);
2 CV/résumé;
3 competence-based AFs;
4 short tests of personality or ability;
5 biodata.
The first four of these approaches are discussed in the following subsections; biodata is discussed at length later in the chapter.
Traditional Application Form
Most large employers have their own printed AF, designed to collect 'standard' information, such as address and qualifications, as well as any particular organisation-specific requirements. Anyone who responds to the advertisement might complete and return a form and enter the first sift. Except in times of full employment, employers usually get hundreds of applications, so the first stage is to select a handful for more thorough assessment. The traditional application form therefore serves several purposes:
1 to sift applicants;
2 to help interviewers prepare their interview;
3 to form the basis of personnel records.
AFs often contain questions that have been there for many years, so that no one presently knows why they are included. Some probably are not used or needed. Some may no longer be considered appropriate questions to ask.
Recommendation: Review your application form and ask why each question is there, and whether it needs to be included.
Equal Opportunities and the Application Form
The advance of equal opportunities thinking has tended to restrict the information that employers can request at the AF stage. In the USA, the Equal Employment Opportunity Commission's (EEOC) Guide to Pre-Employment Inquiries lists a wide range of AF questions that are suspect, because they may not be job related, and may lead to illegal discrimination: marital status, children, child care; hair or eye colour; gender; military service or discharge; age; availability over holidays or weekends (which may discourage some religious minorities); height and weight; arrest records etc.
Recommendation: Review your application form and ask if any question is likely to screen out protected groups: women, ethnic minorities, disabled people etc.
CV/Résumé
For some jobs, people apply by CV/résumé, rather than by AF. (CV stands for curriculum vitae, Latin for 'course of life'.) The main difference is that applicants create their own CV/résumé and can choose what information to give and how to present it. Otherwise, both CVs and AFs have the same function, to enter a first sift. The CV/résumé offers more scope for applicants to try to stand out at the first sift, which may be important if the selection ratio is low.
Competence-Based Application Form
Recently many employers have sought to improve their AFs by using their competence framework. Applicants are asked to describe things they have done which relate to key competences for the job (Figure 5.1).

Innovation. Part of the role of the funding manager is to find and develop new ways of obtaining funding, and operating the funding enterprise efficiently. Think of some occasions when you were able to find a new way of dealing with a problem that had arisen. These can be from any area of your life, not just paid employment. Explain what the problem was, what your solution was and why it was successful. Give several examples if possible. Describe each in about 150 words.

Dealing with pressure. The funding manager's job is quite pressured at times. Think of some occasions in the past when you were placed under some pressure. These can be from any area of your life, not just paid employment. Explain what the pressure was, how it affected you and what you did to cope effectively. Give several examples if possible. Describe each in about 150 words.

Figure 5.1 Competence-based application form

For example, the competence 'ability to
influence others’ is assessed by ‘describe an occasion where you had to persuade others to accept an unpopular course of action’. The applicant is expected to write 200 words or so, which makes completing the application form much more work for applicants. This method might well improve the AF as a selection assessment, but we do not yet have any research on whether it does. We need to know whether: • assessors agree how much resilience or drive an account indicates (reliability); • screening by behavioural competency application form is more accurate, i.e. selects better applicants (validity); • the method creates any gender, ethnicity or disability differences (adverse impact).
Tests
Some employers find short tests of job knowledge useful at the application stage. The US Office of Personnel Management finds that 50–60% of applicants for IT work actually know little or nothing about the subjects they claim expertise in, such as C++ programming. Some employers are replacing their conventional paper application forms with short questionnaires completed over the Internet. This represents quite a radical change in the sifting process. Formerly, HR staff inferred, for example, leadership potential from what applicants said they did in previous employment or at school or university. The new systems assess it more directly by a set of standard questions. This saves time spent reading AFs, and could ensure more standardised assessment of core competences. In effect the conventional AF or CV has been replaced by a short personality questionnaire, moved forward from its usual position at the shortlist stage. This approach might be very effective, but no research has been published on how such systems work. We need to ask the same basic vital questions (yet again!):
• Do online screening questionnaires give consistent accounts of applicants (reliability)?
• Are they accurate, i.e. do they select better applicants (validity)?
• Do they create any gender, ethnicity or disability differences (adverse impact)?
Recommendation: If you introduce a new application system, such as competency-based questions, or tests, have plans to assess its reliability, accuracy and adverse impact.
Honesty A continuing concern is whether applicants tell the truth on AFs and CVs, although there is not much good-quality research on the issue. A recent UK survey (Keenan,
1997) asked British graduates which answers on their AFs they had ‘made up . . . to please the recruiter’. Hardly any graduates admit to giving false information about their degree, but most (73%) admit they are not honest about their reasons for choosing the company they are applying to. More worrying, 40% feel no obligation to be honest about their hobbies and interests. Most students get advice from university careers offices on how best to present themselves and their achievements at this stage. One of the authors recalls a candidate who claimed to have sailed 60 miles in an afternoon across major shipping lanes, in waters noted for dangerously strong tides and currents, in a very small dinghy. He did not know that one of the panel was an experienced sailor familiar with the area who said the journey he described was quite impossible!
HOW TO GET INFORMATION Traditional selection started with paper applications. Today many employers use online application systems, which work differently. Advertising, making applications, sifting applications and even assessment can all be carried out over the Internet, which can make the whole process far quicker. People talk of making ‘same-day offers’, whereas traditional approaches took weeks or even months to fill vacancies. • More and more jobs are advertised on the Internet, through the employer’s own website or through numerous recruitment sites. • People seeking jobs can post their details on websites for potential employers to evaluate. This gives the job seeker an opportunity that did not exist before (people could make speculative applications to possible employers, but could not advertise themselves on a global scale.) • Many employers now use electronic application systems, eliminating the conventional paper AF. This makes it far easier to store, sort or search applications, and to generate shortlists. Paper applications can be scanned into the database, or else applicants can be required to apply online in the first place. Apparently President Clinton used such a system to select his aides from thousands of applicants. • Internet recruitment can greatly increase the number of candidates, which is good for the employer if it broadens the field of high-calibre applicants, but it does also make the employer’s sifting problem greater. • Aptitude tests or assessments of personality can be completed over the Internet by the applicant. This saves time and travel costs. Internet recruitment and selection has a number of potential problems, however: • Not everyone has access to the Internet. Surveys suggest gender, ethnicity and age differences, which will have possible legal implications; employers cannot afford to find they are sifting out, for example, ethnic minorities because they are less likely to have access to the Internet. Access to the Internet may also be linked to differences in income and education, so Internet recruitment may tend to
exclude the less fortunate. Social exclusion is a major concern of some European governments, and the EC. • The electronic medium does not bypass problems that arise with paper. It is just as easy to lie through a keyboard as it is on paper or in person, and just as easy to give the answer you think the employer wants to hear. • Is someone who logs on to an employer’s recruitment site or sends in an unsolicited CV a job applicant? If they are an applicant, the employer may be obliged to record their gender, ethnicity, disability etc. to be able to prove protected groups are not being sifted out at this preliminary stage. This creates two problems for employers: the sheer volume of enquiries or unsolicited applications some receive, and the absence of the information needed for equal opportunities monitoring.
Resumix There are a number of commercial Internet recruitment systems in general use. These systems can be very useful for filing and sorting applications, and for generating letters to applicants. One of the best known is Resumix, which started operations as long ago as 1988 and boasts many major employers as customers, including the American armed services. Resumix is currently called Hiring Gateway, and costs ‘from’ $200 000. In fact Resumix does more than just scan and file applications; it is also a job analysis system. Resumix has a list of 25 000 KSAs (Knowledge, Skill, Ability). Employers use this list to specify the essential and desirable skills for their particular vacancy, and Resumix searches applications for the best match. Resumix’s list of skills is protected by copyright, i.e. is not released. An article by Jean MacFarland (2000) lists some of the competences Resumix uses, and which it identified in her own CV; they included leadership, budget planning and forecasting, performance assessment, staff education, performance management, performance evaluation etc. Resumix seems to save employers time and money, but may not make life all that easy for job applicants, judging from the number of consultancies in the US offering ‘how to make Resumix applications’ services.
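Resumix's KSA list and matching rules are proprietary, so the sketch below is only a toy illustration of the general idea of ranking applications against essential and desirable skills. The skill phrases are borrowed from the competences listed above; the weights and the sample applications are invented.

```python
# Toy skills-matching sifter: rank applications by how many essential and desirable
# skills they mention. Purely illustrative; not how Resumix works internally.

essential = {"performance management", "budget planning", "staff education"}
desirable = {"leadership", "forecasting"}

applications = {
    "Applicant 1": "Ten years of performance management and staff education; "
                   "budget planning for a 40-person unit.",
    "Applicant 2": "Strong leadership and forecasting background; some budget planning.",
    "Applicant 3": "Warehouse supervision and stock control.",
}

def score(text):
    text = text.lower()
    essential_hits = sum(skill in text for skill in essential)
    desirable_hits = sum(skill in text for skill in desirable)
    return essential_hits * 2 + desirable_hits  # weight essential skills more (arbitrary)

for name, text in sorted(applications.items(), key=lambda item: score(item[1]), reverse=True):
    print(f"{name}: score {score(text)}")
```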
HOW TO USE THE INFORMATION The HR manager who has 1000 applications for a single vacancy needs some method of selecting between 5 and 50 to assess in greater depth, a process called sifting or screening. Sifting can take up a lot of time in HR departments so any way of speeding up the process will be very valuable, so long as it is accurate and fair. The higher the unemployment rate, the bigger the employer’s sifting problem. When every vacancy attracts sacks full of applications, or the electronic equivalent, the question is how to select 20 or so to assess in greater depth. Sifting is not always done very effectively. Northcote Parkinson mentions a standard sifting rule used in the 1950s: first exclude anyone over 40 or Irish (Parkinson, 1958). This sounds very crude, and would of course be illegal today. Another method sometimes used is to recruit via an agency or job centre, and to ration application
forms, so when 500 forms have all been distributed, no one else can apply. This is also very crude. Application sifting is a danger area for the employer because all sorts of biases may creep in. Some are merely ‘noise’ that probably reduce the accuracy of selection: preferring rugby players or rejecting trainspotters. Other biases are illegal, and potentially very damaging to the employer: gender, ethnicity and disability. Any sifting decisions—explicit or implicit, conscious or unconscious—that create adverse impact on these protected groups can lead to fair employment claims. In the UK, the Commission for Racial Equality recommends that application sifting should be done by two persons (which will discourage obvious bias, but may not detect subtle or shared ones).
How Managers Sift Some research has used policy capturing analyses to reconstruct how personnel managers sift applications. This approach works back from the decisions the manager makes about a set of applications, to identify how the manager makes them. Research in Germany finds that what the managers do, according to the policy capturing analysis, often differs from what they say, when asked to describe how they sift (Machwirth et al., 1996). Managers say they sift on the basis of proven ability and previously achieved position, but in practice reject applicants because the application looks untidy or badly written. A recent review of 49 studies of gender bias in simulated employment decisions finds bias against female applicants, by female as well as male sifters (Davison and Burke, 2000). The less job information is given, the greater the discrimination against females. Another recent piece of research looked at how campus recruiters use grade point average (GPA) (course marks) to select for interview. Some chose students with high marks, which is the logical use of the information, given that GPA does predict job performance to some extent, and that it is linked to mental ability, which also predicts job performance. A second large (42%) group ignored GPA altogether. A third group selected for lower GPA, screening out any applicant with high grades. This does not seem a good way to sift, given the link between work performance and mental ability. The choice of strategy seems essentially idiosyncratic, and could not be linked to type of job or employer.
Recommendation: Review your application sifting process carefully. If possible, analyse how it actually operates, as well as how it is intended to operate, or said to operate.
Training and Experience (T&E) Ratings

In the USA application sifting is assisted by training and experience (T&E) ratings, which seek to quantify applicants’ training and experience, instead of relying on
possibly arbitrary judgements by the sifter. T&E ratings have been widely used in the public sector in the USA, especially for jobs requiring specialised backgrounds such as engineering, scientific or research positions. T&E ratings are also useful for trade jobs, where written tests may unfairly weight verbal skills. Several systems are used; applications are assigned points for years of training and education, or else applicants are asked for self-ratings of the amount and quality of their experience. Research indicates T&E ratings contribute increased accuracy to the selection process (a modest validity overall of 0.13). T&E ratings are labour intensive as they require an expert to read carefully every single application. Some methods would be fairly easy to computerise. The behavioural consistency method of T&E ratings gives the best results. Areas that reveal the biggest differences between good and poor workers are identified by the critical incident technique, which collects examples of good and poor performance. Applicants describe their major achievements in these areas, and their accounts are rated by the selectors using behaviourally anchored rating scales (described in Chapter 14). Behavioural consistency T&E ratings are twice as accurate as other T&E systems, achieving a validity of 0.25. They might prove less easy to computerise.
Sifting by Software Software can scan applications and CVs to check whether they match the job’s requirements. This is very much quicker than conventional sifting of paper applications by HR staff; the Restrac system is said to be able to search 300 000 CVs in 10 seconds. Automated sifting systems can eliminate bias directly based on ethnicity, disability or gender, because they are programmed to ignore these factors. They will not necessarily ignore factors linked to ethnicity, disability or gender, such as sports and pastimes. Sifting software will do the job thoroughly, whereas the human sifter may get tired or bored and not read every application carefully. Software should achieve 100% consistency in sifting, whereas fallible human sifters are likely to be inconsistent (i.e. unreliable) in their decisions. Sifting electronically has many advantages, but is not necessarily any more accurate. Accuracy depends on the decision rules used in sifting, which in turn depends on the quality of the research the employer has done. Reports suggest that some scanning systems do nothing more sophisticated than search for key words; once applicants realise this, they will take care to include as many as possible. Resumix says that its software does not use simple word counting, nor is there a list of ‘buzzwords’ that applicants can include to improve their chances of being selected. The system is described as ‘intelligent’ and as able to recognise the contextual meaning of words. The software is copyright and no details are released. Computerised sifting seems likely to work well for fairly specific requirements, such as CIPD membership, or upper second class degree in psychology. It is less clear how well it could sift applications for broader attributes such as drive, resilience, or flexibility. There is an urgent need to know what application sifting programs actually do. The people who sell them are not always very forthcoming, which tends to make
one wonder if they are doing anything very complicated. Personnel managers should be very wary.
Application sifting is a ‘selection test’, both in practice and in law. It selects some people and rejects others. Accordingly, the usual questions should be asked:
• Does it select the right people, i.e. people who will do the job well? Accuracy or validity of electronic application sifting appears completely unresearched.
• Does it reject protected groups—women, minorities, the disabled—more often? In the case of Hightower et al. v. Disney (1997), non-white applicants claimed that Resumix looked for keywords which were more likely to be used by white applicants.
Psychologists tend to be rather sceptical of application processing software, for one fairly simple reason. If these systems are doing something tremendously subtle and complex, where did the people who wrote them acquire this wisdom?
• There is no evidence that human application sifters are doing anything highly complex that software can model.
• There is no body of research on application sifting that has described any complex, subtle relationships to put into your software.
Research on human decision-making in HR and related areas (e.g. medical diagnosis) consistently finds that people make their decisions in a fairly straightforward way, which can be characterised as additive. Additive means that the human expert reads the application form, and notes good points and bad points. The expert then reaches a decision by adding the good points and subtracting the bad points, the resulting total being their decision. What human experts do not seem to do is search for, or use, more complex patterns, such as saying exam marks are important in school leavers or graduates under the age of 22 but not in people over 25. (This conclusion derives from policy capturing research: many sifters will tell you they use complex decision rules, but analysis of the decisions they actually make does not confirm this.)
We should perhaps remember one of the things we are told about computers but tend to forget: they are very good at doing mindlessly simple tasks extremely quickly, but cannot think for themselves, and cannot do anything unless given very precise and detailed instructions. If we know how to sift applications, the computer might be able to do it very quickly, but if we do not, the computer will not find a magic way of doing it.
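To make the additive idea concrete, here is a minimal sketch in Python of what an additive sifting rule amounts to. It is purely illustrative: the criteria, weights and cut-off are invented for the example, not taken from any validated sifting study.

```python
# Illustrative only: criteria, weights and cut-off are invented, not validated.
# An additive sift adds points for 'good' features, subtracts points for
# 'bad' ones, and compares the total with a cut-off.

GOOD_POINTS = {
    "relevant_degree": 2,          # e.g. upper second in psychology
    "cipd_member": 2,
    "five_years_experience": 1,
}
BAD_POINTS = {
    "unexplained_employment_gap": 1,
    "frequent_job_changes": 1,
}

def additive_sift(applicant: dict, cut_off: int = 3) -> bool:
    """Return True if the applicant passes the sift."""
    score = sum(w for k, w in GOOD_POINTS.items() if applicant.get(k))
    score -= sum(w for k, w in BAD_POINTS.items() if applicant.get(k))
    return score >= cut_off

# Example: this applicant scores 2 + 2 - 1 = 3 and passes the sift.
print(additive_sift({"relevant_degree": True, "cipd_member": True,
                     "frequent_job_changes": True}))
```

The point of the sketch is how little it contains: a list of features and a running total. That is roughly what the policy capturing research suggests human sifters do, and it is all a computer can do unless someone supplies it with a better-researched rule.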
Recommendation: If you are thinking of buying an application sifting package, ask what it actually does, and what evidence the supplier has on validity and adverse impact. Do not expect computers to do your thinking for you.
BIODATA

Biodata represents the most ambitious transformation of the conventional application form, using statistical analysis of applications to identify details that predict success. Biodata is not new. Over 80 years ago, Dorothy Goldsmith devised an ingenious new solution to an old problem: identifying the small minority of people who could succeed at selling life insurance. She took 50 good, 50 poor and 50 middling salesmen from a larger sample of 502, and analysed their application forms. She identified factors that collectively distinguished good from average and average from bad: age, marital status, education, (current) occupation, previous experience (of selling insurance), belonging to clubs, whether the candidate was applying for full- or part-time selling and whether the candidate himself had life insurance. Some items—married/single—were scored +1/−1. Other attributes like age or education were more complicated: the best age range for selling insurance was 30–40, with both younger and older age groups scoring less well. The system proved effective. Low scorers in Goldsmith’s sample almost all failed as insurance salesmen; the small minority of high scorers formed half of a slightly larger minority who succeeded at selling life insurance (Figure 5.2). Goldsmith had turned the conventional application form into a weighted application blank (WAB).
The principle is familiar to anyone with motor insurance. The insurance company analyses its records to find what sort of person has more accidents: people who drive sports cars, people who live in big cities, people who run bars etc. Insurers do not rely on common sense, which might well suggest that younger drivers, with faster reflexes, will be safer; they rely on their records, which
Figure 5.2 Results from the first published weighted application blank (WAB), showing the number of subjects and the percentage successful at low, medium and high WAB scores. Source: Data from Goldsmith (1922).
show that young drivers are a poorer risk. If insurers can calculate premiums from occupation, age and address, perhaps personnel managers can use application forms in the same way, as a quick but very powerful way of selecting employees.
Early biographical systems collected information through the conventional application form, whereas nearly all current systems use a separate questionnaire. A separate questionnaire makes it clear to the candidate that he/she is being assessed, so loses the invisibility of AF-based formats. Traditional biodata screening was done on paper and scored by hand, but could equally be administered over the Internet, and processed by computer. Biodata has the advantage of established technology, and a body of research on how to use the information. Biodata has a number of features:
• It starts by analysing past applicants to identify facts or items that are linked to the outcome you are trying to predict.
• It needs very large numbers to work properly.
• It must be cross-validated before being used, i.e. items must be shown to work in both of two separate samples. As a consequence biodata is expensive to set up, but cheap to use once in place.
• It is backward looking; it will find ‘more of the same’, e.g. more managers like present successful managers. This may be a problem in rapidly changing industries, where ‘cloning’ can be dangerous. Even so ‘finding more of the same’ is better than ‘selecting’ at random, which is all some selection methods can achieve.
Figure 5.3 gives some typical biodata questions.
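Before turning to how biodata are constructed, it may help to see how simple the scoring side of a weighted application blank is. The sketch below, in Python, is a minimal illustration; the items, response options and weights are invented for the example and only echo the kind of items Goldsmith used, rather than reproducing any real scoring key.

```python
# Illustrative weighted application blank (WAB) scoring.
# The weights are invented for the example; a real key would come from
# analysing past applicants, as described in the text.

WEIGHTS = {
    "marital_status":          {"married": 1, "single": -1},
    "age_band":                {"under 30": 0, "30-40": 2, "over 40": 1},
    "has_own_life_insurance":  {True: 1, False: -1},
}

def wab_score(answers: dict) -> int:
    """Add up the weights attached to each of the applicant's answers."""
    return sum(WEIGHTS[item].get(answer, 0) for item, answer in answers.items())

applicant = {
    "marital_status": "married",
    "age_band": "30-40",
    "has_own_life_insurance": True,
}
print(wab_score(applicant))   # 1 + 2 + 1 = 4
```

All the real work lies in choosing the items and weights from data on past employees; once the key exists, scoring is mechanical, which is why biodata are cheap to use once in place.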
Constructing Biographical Assessments

To devise a biodata, the employer will need:
1 Two or more groups of people who differ in some key respect, e.g. good at selling/poor at selling, regarded as thieves/regarded as honest. Biographical measures need large numbers at this stage. Most experts recommend a minimum of 200 in each group, i.e. 200 good salespersons and 200 poor.
2 A large number of possible biographical questions. Biodata questions can be drawn from ready-made item banks, such as Glennon’s Catalog of Life History Items, which lists 484 biographical items. However the trend is increasingly toward generating questions that are conceptually related to what you are trying to predict.
3 Both groups (e.g. good and poor sales staff) answer the entire set of biographical questions, and the answers are analysed to look for differences that can be made into a biodata. There is a risk of failing to find any differences and so failing to generate a test at all. An attempt to write a biodata for selecting ministers of
1. How old was your father when you were born?
   1. about 20   2. about 25   3. about 30   4. about 35   5. I don’t know
2. How many hours in a typical week do you engage in physical exercise?
   1. none   2. up to 1 hour   3. 2–3 hours   4. 4–5 hours   5. over 5 hours
3. In your last year at school, how many hours in a typical week did you study outside class hours?
   1. none   2. up to 2 hours   3. 2–4 hours   4. 5–8 hours   5. over 8 hours
4. How old were you when you first kissed someone romantically?
   1. 12 or under   2. 13 or 14   3. 15 or 16   4. over 16   5. never kissed anyone romantically
5. How interested in current affairs are you?
   1. not at all   2. slightly   3. fairly   4. very   5. extremely
6. Which best describes your present height/weight ratio?
   1. definitely overweight   2. somewhat overweight   3. slightly overweight   4. just right   5. underweight
7. How often did you play truant from school?
   1. never   2. once or twice   3. 3 to 10 times   4. once a month   5. once a week or more
8. What do you think of children who play truant from school?
   1. very strongly disapprove   2. strongly disapprove   3. disapprove   4. unconcerned   5. can sympathise
9. My superiors at work would describe me as
   1. very lazy   2. fairly lazy   3. average   4. quite hard working   5. very hard working
10. How many times did you have to take your driving test?
   1. once   2. twice   3. three   4. four or more   5. never taken it

Figure 5.3 Some typical biodata questions. It is instructive to ask yourself which of these could be verified by the employer. How? At what cost?
religion failed at this first stage: no biographical differences between ‘successful’ and less successful ministers were found.
4 The essential last step in writing a biodata is cross-validation, which means doing stage 3 a second time. The original samples are used to construct the biographical measure. The biodata’s accuracy can only be calculated from a second set of samples (known as cross-validation or hold-out samples). This means the employer will need a second set of groups of 100 each. A biodata that has not been cross-validated should not be used for selection. Some biographical details will appear to distinguish the groups but have in fact got in by chance; cross-validation checks that every question consistently distinguishes good from poor
salespeople, thieves from honest employees etc. The second set of samples is used to calculate the biodata’s accuracy.
To summarise, the steps involved in writing a biodata are:
• identify two or more groups of people who differ in whatever you are trying to predict;
• generate a pool of biodata questions that relate to whatever you are trying to predict;
• analyse the data to devise a first-draft biodata;
• cross-validate on two or more further groups of people who differ in whatever you are trying to predict;
• analyse the data to devise a final draft of the biodata.
This is a substantial undertaking, and therefore costly. Once in place, however, biodata can improve the sifting stage of selection considerably for some years to come (but not forever).
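For readers who want the logic spelt out, the sketch below shows, in Python, one very simple way the item-selection and cross-validation steps could be expressed. It is a toy illustration with invented data and an arbitrary cut-off; real biodata construction uses samples of 200 or more per group and proper statistical tests.

```python
# Toy sketch of biodata item selection with cross-validation.
# Data, items and the 0.20 cut-off are invented for illustration.

def endorsement_rate(group, item):
    """Proportion of people in the group answering 'yes' (1) to the item."""
    return sum(person[item] for person in group) / len(group)

def select_items(good, poor, items, min_gap=0.20):
    """Keep items where the good and poor groups differ by at least min_gap."""
    return [i for i in items
            if abs(endorsement_rate(good, i) - endorsement_rate(poor, i)) >= min_gap]

# Development samples (invented): 1 = 'yes', 0 = 'no'.
good_dev = [{"plays_team_sport": 1, "no_middle_initial": 0}] * 3 + \
           [{"plays_team_sport": 0, "no_middle_initial": 0}]
poor_dev = [{"plays_team_sport": 0, "no_middle_initial": 0}] * 3 + \
           [{"plays_team_sport": 1, "no_middle_initial": 1}]

items = ["plays_team_sport", "no_middle_initial"]
draft_key = select_items(good_dev, poor_dev, items)

# Cross-validation: only items that also separate the hold-out groups survive.
good_holdout = [{"plays_team_sport": 1, "no_middle_initial": 1}] * 4
poor_holdout = [{"plays_team_sport": 0, "no_middle_initial": 1}] * 4
final_key = select_items(good_holdout, poor_holdout, draft_key)

print(draft_key)   # ['plays_team_sport', 'no_middle_initial']
print(final_key)   # ['plays_team_sport'] - the chance item drops out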
SOME TECHNICALITIES OF BIODATA CONSTRUCTION

Types of Biodata Question

Biodata questions can be divided into hard, which are verifiable but also often intrusive, like question 6 in Figure 5.3, and soft, which cause less offence but are easier to fake, like question 5. Some questions ask about controllable aspects, while others do not; we choose our exercise pattern (question 2) but cannot choose our parents’ ages (question 1). Some biodata systems adopt a policy of avoiding non-controllable questions, because they make biodata harder to defend against criticism. Questions asking about present behaviour prove as successful as those asking about past behaviour, and may be more acceptable, because they avoid the suggestion of rejecting people because of past behaviour they cannot now change. Many biodata contain questions about attitudes, such as question 8. Some biodata questions ask what other people think of you, e.g. question 9, which is verifiable, at least in theory. Including this type of question makes the biodata resemble a personality test.
Empirical Keying Traditional biographical methods were purely empirical. If employees who stole were underweight, or had no middle initial, or lived in Hackney, those facts entered the scoring key. Purely empirical measures worry some psychologists, who like to feel they have a theory. They are not happy knowing people with no
middle initial are more likely to steal; they want to know why. Ideally, they would like to have predicted from their theory of work behaviour that people with no middle initial will make dishonest employees. Critics of pure empiricism also argue that a measure with a foundation of theory is more likely to hold up over time and across different employers, and may be easier to defend if challenged. Trying to explain in court the connection between no middle initial and theft could be difficult!
Factorial Keying The first attempt to give biodata a more theoretical basis relied on factor analysis, a statistical technique for identifying themes in biographical information. If no middle initial proved to be linked to half a dozen other biographical facts, all to do with, for example, a sense of belonging, we begin to get some idea why it relates to honesty in the workplace, we can explain to critics why it is included, and we can perhaps search for better questions to reflect the underlying theme.
Rational Keying Modern approaches to biographical assessment generally use a more targeted approach. John Miner (1971) wanted to assess ‘eliteness motivation’, so he stated specific hypotheses about eliteness, for example that status-conscious Americans will serve (as officers, of course) in the navy or air force, rather than the army. Recent biodata sometimes use a behavioural consistency approach. If job analysis indicates that the job needs good organisational skills, questions are written that reflect this, either on the job—‘How often do you complete projects on time?’—or off the job— ‘To what extent do you prepare when going off on holiday?’ These questions are still tested, with 200 well-organised and 200 poorly organised employees, to check that they work (and still double-checked or cross-validated with two further groups). This approach has two advantages. It has a logic to it so it is easier to defend, and it is more versatile; a biodata for organisational skills can be used for any job that requires such skills.
DO BIODATA WORK?

There is an extensive body of research in the USA and the UK on the effectiveness of biodata in selection. Two recent reviews offer very similar overall estimates of biodata validity, placing it at 0.30 (Bliesener, 1996; Bobko and Roth, 1999). This indicates biodata can be a very effective way of screening applicants. Recall that most other first-sift systems either offer little or no evidence of their effectiveness, or have been shown to lack much power to identify good applicants.
Figure 5.4 Validity of biodata for six aspects of work: proficiency rating, objective performance (covering both production and sales), training success, tenure, creativity and absence. Source: Data from Bliesener (1996).
Different Outcomes

Biodata have been used more extensively than other selection measures to predict a variety of work-related outcomes (as well as some non-work outcomes such as credit rating). Figure 5.4 shows:
• biodata predict general proficiency at work, assessed by supervisor ratings;
• biodata are good at predicting objective performance indicators such as sales and production;
• biodata can predict absence and tenure, but less accurately. These are, however, notoriously hard to predict, possibly because they depend more on how the organisation is run than what sort of people it selects.
Different Types of Work Figure 5.5 shows biodata achieve good accuracy in selecting for a range of occupations: managerial, sales, clerical and science/engineering. They seem to work less well—but still better than chance—for military selection. Biodata have also proved useful for more specialised work, such as research achievement in science and technology, and success in pilot training.
Incremental Validity Biodata are often used in conjunction with other methods, which raises the question of incremental validity. Does the combination of, for example, biodata and mental ability
Figure 5.5 Validity of biodata for five areas of work: managers, sales, clerical, military and science/engineering. Source: Data from Bliesener (1996).
test improve much on the accuracy achievable by either used alone? Several studies suggest that biodata can improve on the predictions made by psychological tests, for attrition in army recruits, proficiency in clerical work and organisational citizenship (Mael and Ashworth, 1995; McManus and Kelly, 1999; Mount et al., 2000). However, it has been suggested that biodata and mental ability tests sometimes correlate quite highly, which implies that biodata may not always achieve much incremental validity.
Consortium Measures Biographical measures need large samples, which appears to rule them out for all except very large employers. However, organisations that do not employ vast numbers can join a consortium to devise a biodata. The Aptitude Index Battery (AIB) is a consortium biodata used by the North American insurance industry since the 1930s. AIB is presently called Career Profile and is part of the Exsel system. AIB data have been analysed for 12 000 insurance sales staff from 12 large insurance companies. AIB works in all 12 insurance companies, but is more successful in larger, better run companies that recruit through press adverts and agencies, than in smaller companies that rely more on personal contacts. AIB has been rewritten and re-scored many times, but has retained some continuity. Another consortium biodata is the (US) Supervisory Profile Record, which was originally written using staff from 39 separate organisations, and which works well across all 39, regardless of gender, race, supervisory experience, social class or education.
Recommendation: Consider whether you could benefit from introducing biodata to sift applicants.
PROBLEMS WITH BIODATA

Fakability

‘Hard’ biographical questions, e.g. father’s age, can be checked, at a cost. However, many biodata questions cannot be checked, e.g. age of first kiss (item 4 in Figure 5.3) or interest in current affairs. There is no sensible way of checking the information the applicant gives you about such issues. Biodata inventories may be prone to faking, because they are visible to the applicant, and because many biodata items are not objective or verifiable. The problem of biodata faking has a series of parallels with faking personality inventories, discussed in Chapter 4:
• people directed to fake good can usually improve their biodata scores;
• people directed to fake good distort their answers far more than job applicants, so directed faking research may be misleading;
• the extent of faking by real job applicants is uncertain. One study found that only 3 items of 25 were answered differently by job applicants; these items were less historical, objective and verifiable (Becker and Colquitt, 1992). On the other hand, another study compared applicants—motivated to fake—with people who already have the job—who have less need to fake—and found applicants’ answers much ‘better’ in areas like preferred working climate, work style or personal and social adjustment (Stokes et al., 1993).
As with personality tests, there seems to be a faking problem, but we are not sure how big it is. There are several possible ways of dealing with faking in biodata, again tending to repeat approaches tried with personality tests:
• More objective and verifiable items create fewer differences between applicants and people who already have the job, so may be less fakable.
• Warning people that the biodata includes a lie-detection scale (even if it does not) reduces faking.
• A trick question about experience with the ‘Sonntag connector’, a non-existent piece of electrical equipment, caught out one-third of applicants, who claimed to have used one. This third got better overall scores on the biodata, showing they were faking, but their biodata score correlated less well with a written job knowledge test, suggesting they were not better applicants. However, trick questions are unethical, and might not work for long in practice.
• Biodata measures can be provided with faking good scales, modelled on faking good scales in personality measures, e.g. ‘I have never violated the law while driving a car’. These are intended to identify applicants who ‘deny common faults’ and ‘claim uncommon virtues’. Applicants with high ‘faking good’ scores are discarded (a simple scoring sketch follows below).
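The faking good scale is easy to score. The sketch below is a minimal, hypothetical illustration in Python: apart from the driving item quoted above, the items and the cut-off of two endorsements are invented for the example, not taken from any published biodata.

```python
# Hypothetical faking-good check: items that 'deny common faults' or
# 'claim uncommon virtues'. Endorsing several of them flags the applicant.

FAKING_GOOD_ITEMS = [
    "I have never violated the law while driving a car",
    "I have never disliked anyone I worked with",       # invented item
    "I have never been late for an appointment",        # invented item
]

def flag_faking_good(endorsed: set, cut_off: int = 2) -> bool:
    """Flag applicants who endorse cut_off or more faking-good items."""
    return sum(item in endorsed for item in FAKING_GOOD_ITEMS) >= cut_off

answers = {"I have never violated the law while driving a car",
           "I have never been late for an appointment"}
print(flag_faking_good(answers))   # True - this applicant would be queried
```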
Fairness and Equal Opportunities

If biographical questions are linked to race, sex, disability or age, members of these protected groups may get lower scores on biodata measures, so giving rise to problems of
adverse impact. Most research has focused on ethnicity in the USA. An early review concluded that biodata did not by and large create adverse impact on ethnic minorities applying for work as bus drivers, clerical staff, army recruits or supervisors (Reilly and Chao, 1982). Another analysis found only fairly small ethnicity differences in biodata scores. The most recent analysis, however, is less optimistic, and concludes that differences between Whites and African Americans when completing biodata measures are large enough to create a problem. Although biodata may have the potential to cause equal opportunities problems, to date they have not figured in many court cases in the USA. Table 12.2 summarises US adverse impact evidence for various selection methods. It shows that biodata create far less adverse impact than tests of mental ability.
Individual biodata questions might discriminate against protected groups in subtle ways. For example, questions about participation in sport could discriminate against disabled people. American experts say it is unclear whether biodata could be legally challenged question by question, so a few questions creating adverse impact may not matter. In fact, it is extremely unlikely that no gender, age, ethnicity or disability differences would be found in any of 50–100 biographical questions. However, if many questions show, for example, ethnicity differences, it tends to follow that total scores will also differ. It is important, therefore, to review all biodata measures thoroughly, question by question, and ask: might any of these questions generate differences between groups? Then when you have used the biodata, ask: do any questions generate differences between men and women, majority and minority?

Recommendation: Before using a biodata, review the items very carefully; ask equal opportunities experts if any items are likely to find differences.
Privacy

As Figure 5.3 shows, some biodata questions can be intrusive. In the Soroka v. Dayton Hudson case, a department store was sued by an applicant who claimed he had been asked intrusive questions about politics and religion, contrary to the Constitution of the State of California. The case was settled out of court, so no definitive ruling emerged. Use of biodata in the USA is complicated by the fact that the 50 states all have their own, differing laws about privacy, so questions about, for example, credit rating are legal in some states but not in others.
Public Relations Some psychologists have proposed the Washington Post test for biodata plausibility. Imagine two headlines in the Washington Post. One headline reads ‘Psychologists reject people for management training because they don’t like the colour blue’, which sounds arbitrary and unfair. Another headline reads ‘Psychologists reject people for management training because they held no positions of responsibility at school’, which sounds much more reasonable.
Recommendation: Before using a biodata, review the items very carefully and ask yourself if you would feel uneasy about answering any of the questions. After using a biodata, check the results for adverse impact.
The Need for Secrecy? Early studies published their biodata in full, confident that cannery workers or shop assistants did not read the Journal of Applied Psychology and could not discover the right answers to give. If the scoring system becomes known, a biodata measure can lose its effectiveness. A new form of the insurance industry’s AIB worked well while it was still experimental, but lost its validity as soon as it was used for actual hiring (Figure 5.6). Field managers scored the forms and were supposed to use them to reject unsuitable applicants; instead they guided applicants they favoured into giving the right answers. When scoring was moved back to head office, the AIB regained its validity. It is doubtful whether any selection method can be kept completely secret these days because all are likely to come under intense legal scrutiny. Biodata have proven effectiveness in screening applicants for a variety of HR purposes. They are easy to include at the screening stage in paper form, and would
Figure 5.6 Success rates for three groups of insurance sales staff, identified by biodata as likely to be good (grade A), average (grade C) or ineffective (grade E), over 5 successive years. In year 3 scoring was devolved to branch office; in year 5 it was moved back to head office. Source: Data from Hughes et al. (1956).
be equally easy to include in online recruitment systems. They are specifically designed to be easy to administer and easy to score, requiring no human judgement. They do, however, require detailed analysis of data from large, carefully selected groups of people. The same is likely to be true of any other applicant screening system, whether computerised or paper based.
CASE STUDY

Scale 101

We have mentioned several times the problem of screening people who work with children, to keep out those who have the wrong motives. There have been a number of cases where people have got work in child care facilities or schools, and used their position to abuse children. We need a way of detecting such people and excluding them from working with or near children. Several approaches have been suggested, and some implemented:
• People applying to work with children cannot withhold information about criminal convictions, even when they are ‘spent’ under the Rehabilitation of Offenders Act.
• The Criminal Records Bureau has been set up to allow employers to check criminal records (not always successfully, as the ‘Background Checks’ case study in Chapter 11 shows).
• The Waterhouse and Warner Reports have suggested personality testing.
• Some paedophiles are convinced their behaviour is acceptable, and will try to argue that sexual contact does not harm children, that children want it, that it is good for their development etc. It would be unwise to rely on paedophiles stating such ‘self-justifying beliefs’ when applying for jobs.
• Only employ women in child care work, on the basis that 99% of people who abuse children sexually are male. This would, of course, be illegal, and also unfair on men who want to work with children for the right reasons.
The problem with screening is that no one is going to admit to paedophile tendencies, if they are trying to get a job working with children. Most methods of assessing applicants rely on what the applicant tells you, on the application form, at the interview and in self-report tests. We can expect paedophiles to fake good on personality tests, lie in interviews and conceal past behaviour, including convictions.
One possible way forward might be biographical assessment. If we could find biographical pointers that distinguish paedophiles from other people, we could use a biographical screen to identify at least some dubious applicants. The traditional approach would be to assemble a long list of questions, taken from perhaps the Glennon Catalog of biographical questions, find a group of 200 paedophiles and a group of 200 people matched for age, social background etc., and hope to find enough biographical differences to constitute a workable screening test. We would then need a second pair of groups, 100 of each, to weed out biographical
details that have got in by chance, and do not really distinguish paedophiles from non-paedophiles.
Scale 101 takes the more focused approach favoured in recent biodata construction. There is an extensive research literature on the nature and origin of paedophile tendencies, and on differences between paedophiles and non-paedophiles, in areas such as their early life, and their (non-sexual) interests and preferences. We combed through this literature to try to pick out anything that could be used as a biographical item. Our first draft contains 101 items, hence the title Scale 101. We now need to go through the traditional biodata generation stages described above. We need to:
1 get Scale 101 completed by 200 paedophiles;
2 get Scale 101 completed by 200 control persons;
3 produce a first-draft biodata;
4 repeat steps 1 to 3 with two more groups;
5 produce a working draft biodata.
Stage 1 is quite difficult. There are over 110 000 convicted paedophiles in England and Wales. But finding some when you want them is not easy.
We have addressed the problem of faking in two ways. The first is to try to devise subtle items that will not suggest a ‘right’ answer. The second is to incorporate a faking good scale, to pick out people who are faking good. Further stages of the research programme will assess the success of these approaches:
6 get Scale 101 completed twice by the same group, first honestly, then under directed faking good instructions;
7 get Scale 101 completed twice by the same group, first as part of the selection process for child care work; second, later on, when they are in post. People who have the job have less need to fake than people trying to get the job.
The final stage will be a follow-up study, where people applying for child care work complete Scale 101 as part of the selection procedure, and we subsequently compare those who have sexually abused children with those who have not. This will take a very long time, because there will not be many abusers. It is, however, the ultimate test of whether Scale 101 works.
We have included two sample items from Scale 101. We cannot include any more, partly on grounds of space, but principally so as not to compromise the measure by letting the people it is intended to detect find out what answers to give.
1. Which period of your life do you consider to have been the happiest?
   when I was aged less than 10
   when I was aged 10 to 14
   when I was aged 15 to 19
   when I was aged 20 to 29
   when I was aged over 30
   none of these
   cannot say
(This item is based on the assumption that many paedophiles were happiest as children, and have not achieved a satisfactory adjustment to adult life.)
2. Which statement best describes your relations with your parents?
   I got on very well with my parents
   I got on fairly well with my parents
   I got on well with my parents some of the time
   my relations with my parents were fairly poor
   none of these
   cannot say
(This item is based on the assumption that many paedophiles had poor relations with their parents. This question alone would not of course distinguish paedophiles from other people. Biodata items are intended to build a cumulative picture.)
CHAPTER 6
References and Ratings
OVERVIEW
• The opinions of others about the applicant can be useful.
• Others’ opinions are collected through references and peer ratings.
• References are used to check facts, to check for problem behaviour and to assess quality of work.
• Reference requests take various forms, including unstructured, rating and competence-based formats.
• References may not be very reliable.
• References do not appear to predict work performance very well.
• References are inaccurate through leniency and idiosyncrasy.
• The legal position of the reference has changed, making many employers reluctant to provide them.
• We review possible ways of making the reference more useful.
• Peer ratings can be collected in various forms.
• Peer ratings are fairly reliable.
• Peer ratings can predict work performance quite well.
• There are major limitations on the possible use of peer ratings.
INTRODUCTION

Some people know your applicants very well, far better than you, the prospective employer. Former employers, school teachers, colleagues or fellow trainees have seen the applicant all day, every day, perhaps for years. They can report how he/she usually behaves, and perhaps what he/she is like on ‘off days’. They know about applicants’ typical behaviour as well as their best behaviour, so are a potentially very useful source of assessment. The traditional reference relies on this source of information; so does the peer rating, when the applicant’s colleagues or fellow applicants are asked to describe him/her.
REFERENCES

The most recent survey by the CIPD (2002) shows that most British employers still use references as a standard part of their selection systems. Three-quarters of all employers use references, but with a definite divide between the public and the private sectors. Nine out of ten public sector employers use references, but only seven out of ten in the private sector. Another recent UK survey by IRS Employment Review reports that some employers will refuse to employ on the strength of a bad reference, and that many use the reference as a final check, rather than to screen earlier in the selection process. The Cranfield Price Waterhouse survey in Figure 6.1 shows that references are widely used throughout Western Europe. Similar surveys show that most American employers also take up references on new employees.
References can be used for several different purposes:
• to check facts;
• to check if the applicant poses any risk;
• to get an opinion on the quality of the applicant’s work.
Checking Facts If an applicant claims a medical degree from Boringold University, a letter will check if he/she really has, and whether it really was a first class degree. Hospitals have occasionally been very embarrassed to find ‘Doctor’ Smith who has been operating on patients for several years does not in fact possess any medical qualifications.
Figure 6.1 The percentage of employers using references in 12 European countries (Britain, Ireland, France, Portugal, Spain, Germany, Netherlands, Denmark, Finland, Norway, Sweden and Turkey). Source: Data from Dany and Torchy (1994).
Checking for Risk In the USA the law has imposed on employers a duty to check employees carefully for any tendency to harm customers. If an employee attacks a customer, and the employer could have found out that the employee posed a risk, the employer can be sued for negligent hiring. Consequently, all employees who meet the public should be assessed for violent or criminal tendencies. An American employer who fails to take up references with previous employers will find it difficult to defend a negligent hiring claim. Whether the reference will reveal a problem is another matter; taking up references shows the employer has at least tried. Checking facts and risk can also be done by background checks (Chapter 11).
Checking Work Performance References can be used to assess the applicant’s work performance. Survey data indicate that many UK employers hope to learn about absence, punctuality and disciplinary record. In fact nine out of ten reference requests want to know about absence. The request in Figure 6.2 hopes to learn about the applicant’s teaching ability, research record and administrative abilities.
Recommendation: Ask yourself what you hope to learn about applicants from references.
Structure

Requests for references can be structured or unstructured. The traditional unstructured reference in effect says ‘Tell me what you think of John Smith in your own words’. Unstructured references are still widely used on both sides of the Atlantic. Structured references use closed questions, checklists or ratings. The IRS survey (2002) finds most reference requests in the UK try to impose some structure on the information the referee provides. The US civil service has used a structured reference request form, the Employment Recommendation Questionnaire (ERQ), which covers:
• Occupational ability: skill, carefulness, industry, efficiency, recorded through rating scales.
• Character and reputation.
• Is the applicant specially qualified in any particular branch of trade in which he seeks employment?
• Would you employ him in a position of the kind he seeks?
• Has the applicant ever been discharged from any employment to your knowledge? If yes, why?
University of East Glamorgan
Department of Human Resource Management

29 February 2004

Dr Alfred J. Smithson
Department of Psychology
University of Upper Rhondda

Dear Dr Smithson

Dr H. Burke

The above named person has applied for the post of Professor of Psychology at this University, and given your name as a referee. We would be most grateful for your opinion of this candidate’s suitability. It may be useful to note that the main duties of the post divide into the areas of teaching, research and administration.

Teaching. The post involves teaching by large formal lecture, and in smaller groups, running laboratory classes, and also one-to-one supervision of research at undergraduate and postgraduate levels.

Research. We are looking for a candidate with a proven record of obtaining research funding, and of publishing research in good-quality refereed journals.

Administration. The post has a range of administrative responsibilities, including attending committee meetings, completing and maintaining student records, as well as general administrative tasks, e.g. health and safety reviews.

The candidate appointed will be eligible to serve as Head of Department for five years, so could usefully have some competence in a managerial or leadership role.

Yours faithfully,
Alex Jones
Director of Human Resources
Figure 6.2 A typical reference request of the traditional open ended or unstructured type
Competence-Based Reference Formats Some reference requests use the organisation’s competence framework to give a detailed view of work performance, and may ask for behavioural evidence of, for example, resilience or time management, as well as an opinion; Figure 6.3 shows part of a typical request of this type. This format needs to be used with caution; being asked for information you do not have, and are not likely to have, can be annoying to the referee.
Recommendation: Consider using some structure in your reference requests, either ratings or competences or both. But ask yourself whether the referee can supply the information you request.
Nadirco plc
Nadir House
Lower Bevindon

Dr J.H. Hancock
Department of Mathematics
University of Upper Rhondda

31 June 2004

Dear Dr Hancock

Leslie Hare

The above named person has applied for the vacancy of trainee HR associate at Nadirco, and has given your name as a referee. Nadirco uses a competence system for all vacancies. We would therefore like your view of Leslie Hare under the nine headings listed below.

1. interpersonal sensitivity
2. drive
3. innovation
4. influence
5. intellectual ability
6. flexibility
7. probity
8. technical knowledge
9. time management

For each competence, please give your overall assessment of the candidate on the scale:
5. very good
4. good
3. average
2. poor
1. very poor

Then please give examples of the candidate’s behaviour or achievements which form the basis of your assessment.

1. Interpersonal sensitivity. Listens well, encourages and elicits contributions from others and responds constructively. Can establish cooperative relationships with others and defuse potential conflicts. Is able to build useful relationships and alliances with others. Shows awareness of diversity and is sensitive to issues of gender, ethnicity, disability and social exclusion.
rating:
evidence:

2. Drive. Able to set well-defined and challenging goals and milestones for his/her work. Shows unflagging energy and enthusiasm and success, across a wide range of varied academic, employment or extracurricular activities. Shows determination in overcoming obstacles. Always meets deadlines.
rating:
evidence:

...

Thanking you in advance for your assistance

Yours sincerely
D.B. Jones
Director of Human Resource Management

Figure 6.3 A reference request seeking behavioural evidence of competences
ACCURACY OF REFERENCES

Considering how many employers use references, and how long they have been around, there is surprisingly little research on their accuracy. Researchers draw their usual distinction between reliability, meaning agreement between referees, and validity, meaning whether referees convey accurate information.
Reliability Research shows referees seem to agree among themselves fairly poorly about applicants. References given by supervisors bear no relation to references given by acquaintances, while references by supervisors and co-workers (who both see the applicant at work) agree only very moderately. Agreement between referees can be compared with a larger body of research on agreement in performance appraisal and 360° feedback (where people are rated by supervisors, peers and subordinates). Agreement between raters is always fairly low—0.60 at best, often much less. Murphy and DeShon (2000) make the point that different people see different sides of the target person, so would not be expected all to say the same about him/her. And if they did all say the same, what would be the point of asking more than one person? Some very early British research approached the problem from a different slant, and found references more useful. The Civil Service Selection Board (CSSB) in the 1940s collected references from school, college, armed services and former employers. CSSB staff achieved moderately good agreement in assessments of candidates based on references alone. CSSB used five or six references, not two or three, which may increase reliability. Note that this research is addressing a slightly different issue: whether the panel agree about what the five or six referees collectively are saying about the candidate, not whether the five or six referees agree with each other. The CSSB research suggests that references can provide useful information, but that some time and care may be needed to extract it.
Validity Validity or accuracy of references is analysed in the usual way, by comparing the reference with an index of work performance. This is straightforward with structured references, which require referees to express their opinions in numerical form. The researcher calculates correlations between the reference rating and subsequent performance appraisal ratings. The earliest research, on American public sector references, gave very disappointing results, as Figure 6.4 shows. For some occupations, ERQ had zero validity, while for the other occupations, ERQ achieved only very limited validity. In the 1980s, reviews of all available US data concluded that reference checks give poor predictions of supervisor ratings (r = 0.26). European research gives slightly better results. References for candidates for UK naval officer training by head teachers correlate moderately well with training grade
Figure 6.4 Validity of references for seven jobs in the US public and military sectors (machinist, radio mechanic, machine operator, forklift operator, car mechanic, printer and ordnanceman). Source: Data from Mosel and Goheen (1958).
in naval college (Jones and Harrison, 1982). Head teachers may be more likely (than, say, former employers) to write careful and critical references, because they know they will be writing naval college references for future pupils, and because their own credibility is at stake. The most recent research, on telephone references for German sales staff, finds positive but low validity (Moser and Rhyssen, 2001). Assessments with low validity can be useful, especially if they are cheap to make, which references are.
REASONS FOR POOR ACCURACY

Anyone who has either written references, or used them to try to evaluate applicants, can list some of the reasons why they often are not very accurate:
• referees may lack the time or motivation to write careful references;
• employers ask referees for information they do not have;
• referees may follow a hidden agenda;
• referees are too idiosyncratic;
• references are consistently favourable (leniency).
Writing a careful reference takes time and effort; referees are not directly paid for their time, so may not make much effort. From a long-term point of view, writing a reference is ‘paid for’ by getting references back from other employers when you need them. Sometimes referees follow hidden agendas, to retain good staff and ‘release’ poor staff, by writing deliberately untrue references. Sometimes this agenda is not even hidden: employees may agree to leave if the employer writes a ‘good’ reference. (If they are leaving because they have done something wrong, writing a good reference may create problems for the employer.)
Many employers ask for information the referee does not have: university staff for example are often asked about students’ health, honesty, ability to relate to children or likely management potential—aspects of the student unlikely to be evident in seminars or lectures.
Idiosyncrasy One researcher searched medical school files to find 20 cases where the same two referees had written references for the same two applicants (Figure 6.5). If references are useful, what referee A says about applicant X ought to resemble what referee B says about applicant X. Analysis of the qualities listed in the letters revealed a quite different pattern. What referee A said about applicant X did not resemble what referee B said about applicant X, but did resemble what referee A said about applicant Y. Each referee had his/her own idiosyncratic way of describing people, which came through no matter who he/she was describing. The unstructured reference apparently tells you more about its author than about its subject! Differences in reference writing may also reflect personality (but of the author, not the applicant); a recent study confirmed that happier people write more favourable references.
Leniency Numerous studies report that most references are positive, nicknamed the ‘Pollyanna effect’ after the little girl who wanted to be nice to everyone. Analysis of US public service references found ‘outstanding’ or ‘good’ ratings greatly outnumbered ‘satisfactory’ or ‘poor’. Candidates were hardly ever rated ‘poor’. More recently, two parallel surveys of US psychologists were reported (Grote et al., 2001); the first set of psychologists say they will disclose negative information in the references they write, but the second set complain they are rarely given any negative information in the references they receive. One suspects similar results would emerge from surveys of many other areas of employment.
Figure 6.5 Schematic representation of idiosyncrasy in references: what each referee writes reflects his or her own idiosyncratic way of describing people more than any agreement with other referees about the same applicant. Source: Data from Baxter et al. (1981).
It is not hard to list reasons why references are so frequently favourable:
• Referees are usually nominated by the candidate, who will obviously choose someone likely to give them a good reference. There is no reason why employers should not ask other sources, i.e. ones the applicant has not nominated.
• The expectation in many organisations is that managers will give people good references to help them get promotion or a better job. A reference that is less than 100% favourable, that mentions, however fleetingly, any faults, will hinder this, and may be seen as unfair by staff.
• These days many employers fear an unfavourable reference may result in a libel case.
But if referees are reluctant to say anything negative, references must remain a poor source of information. Murphy and DeShon (2000) note that performance appraisal has a similar problem—pervasive leniency—and suggest some reasons why. Managers can observe employees’ shortcomings but have no incentive to communicate them to others, and many reasons not to: fear of creating ill feeling, fear of recrimination or retaliation etc. Murphy’s argument implies that references could be an excellent source of information, if only referees could be motivated to part with it.
Recommendation: Do not expect too much from references. In particular, do not assume that a ‘good’ reference necessarily means the applicant is good.
Unstructured References
Structured references use ratings, so they are easy to quantify and analyse statistically. The traditional open-ended reference is much more difficult to analyse. Analysis of 532 letters describing 169 applicants for psychologist posts at a US university gave discouraging results: the letters proved to have near zero inter-referee agreement, and zero accuracy when correlated with number of publications. However, the analysis was very crude; researchers simply made global ratings of the favourability of each letter. There is obviously much more to reference letters than global favourability. All HR managers are familiar with the complexities of hinting at weaknesses in an ostensibly favourable reference; the case study at the end of the chapter gives some examples. The problem with such coded hints, in effect a private language, is that they are easily misunderstood. Using a private language is sometimes seen as a defence against legal consequences. This may not work if the applicant collects enough references to establish a pattern, and shows that 'moderately competent' is the least favourable judgement the referee ever expresses, and that 95% of candidates get something more positive.
Recommendation: Be wary of private languages in references.
IMPROVING THE REFERENCE
Various attempts have been made to improve the reference, with mixed results.
Forced-Choice Format
The referee is presented with a checklist of pairs of statements and asked which of each pair better describes the applicant (Figure 6.6). The statements are equally favourable (or sometimes equally unfavourable), so the referee cannot be consistently lenient, i.e. only express favourable opinions of the applicant. Ideally this format will generate a picture of the applicant's approach to work which can be checked against the job's requirements, but does not suggest to the referee what answers to give to create a 'good reference'. Forced-choice format has one major drawback: it irritates people who have to use it, and who argue—correctly—that both statements apply to Smith, or that neither does. The only published research on this forced-choice format found that it predicted performance ratings four months after hire quite well in university clerical workers (Carroll and Nash, 1972). However, we need more than one small-scale study.
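To make the mechanics concrete, the sketch below shows one way forced-choice responses could be scored against a job's requirements. It is written in Python; the statement pairings, competence labels and scoring rule are our own illustrative assumptions, not taken from Carroll and Nash (1972) or any published instrument.

# Illustrative scoring of a forced-choice reference checklist.
# Pairings and competence tags are hypothetical.
FORCED_CHOICE_PAIRS = [
    # (statement A, competence A, statement B, competence B)
    ("has many worthwhile ideas", "creativity",
     "completes all assignments", "dependability"),
    ("always works fast", "vigour",
     "requires little supervision", "self-management"),
]

JOB_PROFILE = {"dependability", "self-management"}  # competences this job needs


def score_reference(choices):
    """choices[i] is 'A' or 'B' for pair i; the score is the number of chosen
    statements whose competence matches one the job actually requires."""
    score = 0
    for (stmt_a, comp_a, stmt_b, comp_b), choice in zip(FORCED_CHOICE_PAIRS, choices):
        chosen = comp_a if choice == "A" else comp_b
        if chosen in JOB_PROFILE:
            score += 1
    return score


print(score_reference(["B", "B"]))  # 2: both chosen statements match the job profile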
Key Word Counting
Researchers analysed 625 reference letters for engineering applicants, and found five themes that distinguished good from poor candidates (Peres and Garcia, 1962) (see Table 6.1). Michael Aamodt and colleagues (1993) found that these themes could be used to score references in a new, more effective way to select trainee teachers. The more mental agility and urbanity key words the reference contains, the better the person did in teacher training. Mental agility word count predicted mental ability, while urbanity word count predicted actual teaching performance ratings. Key word counting may partially solve the leniency problem, by allowing a referee who wants to be really positive to say someone is intelligent not just once, but four times, using different words. Key word counting only works with free-form references, and has difficulties with the documented idiosyncrasy of reference writers. Some referees tend to describe people in terms of conscientiousness more than others, so the employer needs a baseline for each referee.
Tick one (and one only) statement on each line to describe [J. Smith]:
• has many worthwhile ideas
• completes all assignments
• always works fast
• requires little supervision
• talks too much at meetings
• often misses deadlines
• resistant to new ideas
• frequently unpunctual
Figure 6.6 A forced-choice format reference request
Table 6.1 Five themes used in word counting in letters of reference
Cooperation: good-natured, accommodating, congenial, likeable, cooperative
Mental agility: imaginative, ingenious, insightful, knowledgeable, intelligent
Urbanity: talkative, chatty, forward, bold, sparkling
Vigour: hustling, active, energetic, self-driving, vigorous
Dependability: precise, persistent, methodical, tenacious, determined
Source: Data from Peres and Garcia (1962).
Word counting by hand is extremely laborious, but scanning software is now widely available, and is already used to process application forms. The same software would make key-word counting techniques much more feasible for reference letters.
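As an illustration only, here is a short Python sketch of key-word counting using the Table 6.1 vocabulary. The theme word lists are from Peres and Garcia (1962); the per-referee baseline adjustment follows the point made above, but its exact form here is our own assumption rather than a published scoring method.

import re
from collections import Counter

# Theme word lists from Table 6.1 (Peres and Garcia, 1962).
THEMES = {
    "cooperation":    {"good-natured", "accommodating", "congenial", "likeable", "cooperative"},
    "mental agility": {"imaginative", "ingenious", "insightful", "knowledgeable", "intelligent"},
    "urbanity":       {"talkative", "chatty", "forward", "bold", "sparkling"},
    "vigour":         {"hustling", "active", "energetic", "self-driving", "vigorous"},
    "dependability":  {"precise", "persistent", "methodical", "tenacious", "determined"},
}

def theme_counts(letter):
    """Count how often each theme's key words appear in one reference letter."""
    words = re.findall(r"[a-z\-]+", letter.lower())
    return Counter({theme: sum(w in vocab for w in words) for theme, vocab in THEMES.items()})

def adjusted_counts(letter, referee_baseline):
    """Subtract the referee's average counts (their personal baseline), so that an
    idiosyncratically gushing writer does not inflate a candidate's score."""
    raw = theme_counts(letter)
    return {theme: raw[theme] - referee_baseline.get(theme, 0.0) for theme in THEMES}

letter = "An imaginative, insightful and determined engineer; energetic and likeable."
print(theme_counts(letter))  # mental agility 2, dependability 1, vigour 1, cooperation 1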
Relative Percentile Method
The Canadian armed forces have recently devised a new reference format that may help deal with the leniency problem—the Relative Percentile Method (RPM). The RPM is a 100-point scale, where the referee says what percentage of young persons aged 16–25 score lower than the applicant on, for example, responsibility. Preliminary research suggests the RPM technique predicts success in the army quite well. The technique may work by allowing people to be lenient—the average percentage given was 80—but also to differentiate at the top end of the scale, giving someone they consider really responsible 95 rather than 85. Traditional thinking dismisses a 100-point scale as far too long, but the extra length may allow raters to combine leniency with differentiation.
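One possible way of handling this leniency at the scoring stage is to re-express each referee's RPM ratings relative to that referee's own average, so that a 95 from a generally generous referee still stands out. The Python sketch below is our own illustration and is not part of the published RPM procedure; the names and numbers are invented.

from statistics import mean, stdev

def standardise_rpm(ratings_by_referee):
    """Convert raw RPM ratings (0-100) into z-scores within each referee, then
    average the standardised ratings each applicant received."""
    pooled = {}
    for referee, ratings in ratings_by_referee.items():
        values = list(ratings.values())
        m = mean(values)
        s = stdev(values) if len(values) > 1 else 1.0
        for applicant, rating in ratings.items():
            pooled.setdefault(applicant, []).append((rating - m) / (s or 1.0))
    return {applicant: mean(zs) for applicant, zs in pooled.items()}

ratings = {
    "Referee 1": {"Khan": 95, "Lewis": 80, "Moore": 78},  # lenient but differentiates
    "Referee 2": {"Khan": 70, "Lewis": 60, "Moore": 55},
}
print(standardise_rpm(ratings))  # Khan clearly ahead once leniency is removed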
What Is the Purpose of the Reference? We have been assuming that employers use references to evaluate potential employees. But perhaps the reference really serves another purpose. The reference gives the employer control over the employee, even after he/she has left. The employer can refuse a reference or give a bad one, and so block the employee’s chance of getting another job. To be certain of getting a good reference, the employee must avoid doing anything that might offend the employer, even after leaving. On this argument, employers use references to control existing staff, not to assess new staff. The reference tells the new employer that Smith was not a problem, but is not meant to give a detailed picture of Smith’s performance at work.
REFERENCES AND THE LAW
Once upon a time, one employer could tell another prospective employer 'We dismissed John Smith because we suspected him of improper behaviour with children.'
Smith’s career in teaching would be at an end. If Smith really had behaved improperly with children, he should be excluded from work with children, and other employers should be warned about him. But suppose the employer’s suspicions were unfounded? And Smith was perfectly safe with children? In former times Smith might never find out what his employer had said about him, because he had no legal right of access to the files. In the USA specialised agencies (one called BadReferences.com!) exist that enable applicants to check their own references—very useful if you think a former employer is blocking your job applications by giving you a bad reference. If Smith had succeeded in getting a copy of the damning reference, he would have found he could not sue for the loss of his job and career, because employers were permitted to communicate honestly held suspicions about employees to other prospective employers. Suppose the employer simply disliked Smith and made up the story about improper behaviour to justify dismissing him, and to blight his future career. If Smith could prove the accusation was false, and that the employer knew it was false, he might have a case for libel. Proving malice, that the employer knew the accusation was false, is very difficult, so libel cases involving references were rare. That was how things used to be. Today our school teacher would find it easier to regain his good reputation, because changes to the law in the UK mean that employers have less protection, and applicants have more chance to sue over a ‘bad’ reference: • The (UK) Data Protection Acts have given employees some rights of access to personal data held on them by employers. At the time of writing no one knows for certain how this will affect reference letters; it has been suggested that the employer who writes the reference is not obliged to show it to the person described, but that prospective employers who receive it are (although they can remove the author’s name). From the applicant’s point of view it makes little difference who is obliged to show him/her the reference, so long as someone is. • The case of Spring v. Guardian Assurance (1994) imposed on employers a ‘duty of care’ when writing references. The employee can sue on the basis of statements he/she considers to be false, and which show the employer did not take sufficient care when preparing the reference. The employee can avoid the near impossible task of proving malice.
Recommendation: Be careful what you say in references.
Minimal References
In both the UK and the USA more and more employers are providing only minimal or 'name, rank and number' references. The employer provides only verifiable factual information, on dates of employment, job title and possibly salary, and declines to express any opinion about performance at work. Employers typically decline even
to give information about sickness and absence (despite this being factual and verifiable). This renders the reference next to valueless as a selection assessment. The effective collapse of the reference system will deprive good employees of an opportunity to let other employers know about their virtues. It will allow people who lose their jobs for sexual abuse of children or violent behaviour at work to get another job, and do the same thing again, because the next employer is not warned about them. To try to preserve the reference system, many states in the USA have passed immunity laws that restore ‘privilege’ to reference letters, meaning that employees cannot sue if the information is given in good faith, even if it proves not entirely correct. These laws require the employer to show the reference to the employee and to allow him/her to correct it, or to include his/her version of events. To date these laws have not had much impact. It appears American employers do not want to be sure of winning defamation cases involving references—they do not want to get involved in litigation at all. American courts have also introduced the concept of negligent referral—failing to disclose information to another employer about an employee’s misconduct at work. This means employers risk being sued by the employee if they write a bad reference, and being sued by the next employer if they do not!
Telephone References
A spoken reference is often favoured on the grounds that it leaves no written record for lawyers to 'discover', and so is legally safer. It may not be. You can still be sued for what you have said, and people can tape-record your telephone reference. Lack of a record of the conversation also means you cannot prove what you said and what you did not say. Hence lawyers recommend employers not to use spoken references, and some organisations forbid them.
CONCLUSIONS
Can anything be done to make the reference more useful as an assessment? Previous employers should have access to invaluable information, about work performance, attendance, punctuality, 'citizenship' etc., collected over long periods of time. How can they be persuaded to communicate it? We can offer a few suggestions based on past experience.
Multiple References
Send for as many references as possible; approach all past employers, not just the ones the applicant nominates. If the applicant has bad habits, one of the employers may both recall this and feel motivated to tell you.
Multiple Assessors
Use a panel of assessors to read all the references, assess them separately, then discuss their assessments. The British Civil Service found this a good way of extracting information from references.
Structure the Reference
Give the referee some indication of what you want to know, and possibly include a rating scale or other numerical estimate. But also allow the referee to use his/her own words. If you are using a competence framework, acknowledge that the referee may have nothing to tell you about, for example, behaviour under pressure, by including a 'no information' option.
Personnel Records
Ask for details of attendance, sickness, punctuality records and performance appraisals. Most employers will be unable to give these because their rules will not allow it, or because they have agreed with staff not to. Ask the referee to consult these sources of information before writing the reference.
It is probably premature to recommend forced-choice format or the Canadian Relative Percentile Method until we have more extensive evidence that they work in practice. The reference is certainly one of the most under-researched areas in personnel selection. There is barely enough research to justify a summarising review of validity. Little research has been attempted on the widely used free-form reference. Promising leads, such as forced-choice format, are not followed up. Sample sizes are often barely adequate. Yet research is urgently needed, because most employers still use references. This includes psychologists, both in the UK and the USA, who continue relying on references to select students and each other.
PEER ASSESSMENTS
The other way of getting information from people who know the applicant well is peer assessment. Peer means someone doing the same job, or applying for the same job, as the person being assessed, i.e. someone on the same level as the applicant. Peer assessment is very cheap. The selectors get between 10 and 100 expert opinions about each applicant, entirely free. Information can be collected in various formats:
• Peer nomination, where each person nominates the best or worst performer in the group, or the most and least likely to crack under pressure, or the best and worst salesperson.
• Peer ranking, where each person in the group rank orders the whole group from best to worst, usually excluding themselves.
• Paired comparisons, where people are asked to consider others in pairs, and say whether Smith or Jones is the better candidate for officer training. This is an easier decision to make, but can get laborious if the group is large.
• Rating scales, where each person is separately rated for managerial ability, or ability to get on with others, or resilience under pressure. This does not involve comparing people with each other.
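As a simple illustration of how such judgements might be pooled, the Python sketch below combines peer rankings into an average rank per person. The names, group and scoring rule are hypothetical and not drawn from the military studies discussed next.

from collections import defaultdict

def average_peer_rank(rankings):
    """Each ranking runs from best (position 1) to worst and excludes the rater.
    Returns the mean rank each person received; lower means stronger peer standing."""
    positions = defaultdict(list)
    for ranking in rankings:
        for position, person in enumerate(ranking, start=1):
            positions[person].append(position)
    return {person: sum(ranks) / len(ranks) for person, ranks in positions.items()}

# A four-person group: each member ranks the other three.
rankings = [
    ["Ali", "Beth", "Carl"],   # Dee's ranking
    ["Ali", "Carl", "Dee"],    # Beth's ranking
    ["Beth", "Ali", "Dee"],    # Carl's ranking
    ["Beth", "Carl", "Dee"],   # Ali's ranking
]
print(average_peer_rank(rankings))  # Ali and Beth come out ahead of Carl and Dee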
Military Research The US and Israeli armed services have researched peer assessments extensively. Early research found peer ratings of US Marine officers predicted both success in officer training and combat performance better than several objective tests (Williams and Leavitt, 1947). Israeli military research has reported very high correlations between peer evaluations and admission to officer school, in large male and female samples (Tziner and Dolan, 1982). Peer assessments gave better predictions than almost any other test, including mental ability tests, interviews and rating by commanding officer. Reviews of US army and navy research find correlations with work performance centring in the 0.20s and 0.30s (Kane and Lawler, 1978; Lewin and Zwany, 1976). The most recent research reports that peer ratings predict success in Special Forces training in the US Army quite well, whereas ratings by army staff assessors did not predict success at all (Zazanis et al., 2001).
Civilian Research The earliest review from the late 1970s analysed 19 studies and concluded that peer nominations achieved good validity (Lewin and Zwany, 1976). Peer nominations predicted objective outcomes—graduation, promotion, survival—better than they predicted supervisor ratings. Nominations for specific criteria—‘will make a good manager’—were more accurate than nominations for vague criteria—‘extravert’, ‘emotional’. Peer nominations were best used for predicting leadership. Other reviews summarised research in a range of occupations—MBA graduates, managers, life insurance agents, sales staff, pharmaceutical scientists and secretaries. Figure 6.7 shows that peer ratings predicted three aspects of job performance very well. League tables of assessment methods find peer ratings do better than many other methods: biodata, reference checks, college grades or interview. Why are peer assessments such good predictors? There are several theories: • Consensus: traditional references rely on two or three opinions, where peer assessments use half a dozen or more. Multiple assessors cancel out each other’s errors. • Best versus usual performance: when supervisors are present, people do their best. Peers can often observe what people do when the supervisor is not present.
[Figure 6.7 Validity of peer ratings for predicting three aspects of work performance: promotion, training grades and supervisor rating (correlations plotted on a scale from 0 to 0.6). Source: Data from Hunter and Hunter (1984).]
• No place to hide: in the military, the group is together, 24 hours a day, faced with all sorts of challenges—physical, mental, emotional—so they get to know each other very well, and cannot keep anything hidden.
• Motivation: peer assessments may work well in the army, because soldiers know what is needed in an officer, and because they know their own survival may one day depend on being led by the right person.
• Popularity: popular people get good peer evaluations and also get promotion. This is not necessarily mere bias, if making oneself liked is part of the job.
Peer assessment has two big disadvantages:
• It can be unpopular. Peer ratings are unpopular if they are used to make decisions about promotion etc. Peer ratings are more acceptable if they are used to develop staff rather than to select them. This limits their value as a selection tool. Peer assessment has been most popular in the armed services, where people are less likely (or less able) to complain.
• It presupposes that applicants spend long enough together to get to know each other really well.
These two features of peer assessment tend to limit it to predicting promotability in uniformed, disciplined services, such as police, army or fire fighting. In practice, peer assessments are not used very widely, if at all, to make routine selection or promotion decisions. Probably no one really believes peer assessment will work if it is used, and known to be used, within an organisation year in, year out.
Recommendation: Think very carefully before using peer ratings to make selection or promotion decisions.
CASE STUDY
The open-ended reference, which many employers still use, offers some scope for communicating meaning indirectly. Table 6.2 lists some examples, collected through colleagues, from a range of sectors. Imagine that you are writing a reference and wish to indicate a possible problem without being explicit. This has arisen because:
• you have agreed to write a 'good reference', or because
• the individual in question has a reputation for being rather litigious, and you do not feel you have sufficiently good evidence to back up an openly unfavourable reference, or because
• the expectation in your organisation is that references should not be openly negative.
Here is your list of problem persons for whom you have to write a 'good reference' while trying to indicate that there is a potential problem. What words or phrases or other means could you use to do this?
• A, sales assistant: suspected of stealing petty cash
• B, office worker: habitually late or absent altogether
• C, technical expert: not very bright
• D, office manager: very argumentative and difficult to get on with
• E, middle manager: drinks too much, sometimes during work hours
• F, primary school teacher: suspected of improper behaviour with pupils.
Now show your letters to a colleague and ask them whether they would employ that person, and if not, why not? This will show whether your coded message has been understood.
Table 6.2 Some typical reference code words and phrases (coded reference: real meaning)
Sets very high standards: difficult to work with
Laid back: lazy
Convivial: drinks too much
Popular: lets people get away with things
Unusually innovative: keeps coming up with silly ideas
Moderately good performer: abysmal
Fairly punctual: constantly late
Hasn't let personal problems interfere with work: has let personal problems interfere with work
Has reviewed his personal priorities: has given up altogether
Does his best work in the morning: gets drunk most lunchtimes
Very thorough: obsessive–compulsive
CHAPTER 7
Competence Analysis
OVERVIEW
• It is absolutely vital to decide what you are looking for before starting any selection programme. If you fail to do this, you will be unlikely to make good selection decisions, and you will be unable to justify your selection methods if they are questioned or become the subject of legal dispute.
• Conventional job descriptions and person specifications are better than nothing.
• Competence analysis can identify the main themes in a specific job, or whole sets of jobs, or work in general.
• Quantitative or statistical analysis is often useful to make sense of large sets of data about work.
• Competence analysis methods include some that are fairly open ended (e.g. critical incident technique) and some that are more structured (e.g. Position Analysis Questionnaire).
• Competence analysis can improve selection systems.
• Competence analysis has many other uses besides guiding selection.
• Competence analysis needs to look forwards as well as backwards.
• Writing your own competence framework using Personal Construct Psychology and the Repertory Grid Technique.
INTRODUCTION
In workplace assessment, HR starts with a competence analysis, which identifies the main dimensions of effective job performance. Competence analysis is the core of HR. It shapes selection, promotion and performance appraisal, as well as training and succession planning.
JOB DESCRIPTION AND PERSON SPECIFICATION Traditional British practice recommended selectors to write a job description and a person specification. As the names imply, the job description focuses on the role, while the person specification describes what sort of person is needed. Job descriptions start with the job’s official title—Director of Civil Nuclear Security—and explain, if necessary, what the organisation does—‘ensure nuclear materials at licensed civil nuclear sites, and in transit are protected against criminal or malevolent acts’— before listing the job’s main duties: • lead 30 specialist staff assessing intelligence reports, setting security standards and undertaking compliance inspections; • advise government departments; • represent the UK internationally; • provide assistance to certain other countries. The classic triad in selection (Figure 7.1) is: • Job analysis and description of abilities, skills, personality traits and competencies required in the job; what the job demands of a person. • Person specification drawn from the job analysis to match those attributes above. • Selection to choose the person most closely matching the job analysis and demands of the job. In terms of selection, this model is still favoured by many organisations because of its systematic and time-served approach. As work and jobs have become more complex, however, this classic model has moved forward. Critics have noted that it ignores two main developments in selection and assessment practice: • the rise of psychometric testing; • an increasing understanding that behaviour in the workplace is very different from behaviour as measured by selection and assessment centres.
[Figure 7.1 The classic triad in selection: job analysis, person specification, selection.]
[Figure 7.2 The competence triad: define competency in the job, assess candidates for competence, evaluate job performance in terms of competence.]
We can therefore set up another triad, a competency triad (Figure 7.2), in order to apply competencies to selection and assessment. Let us take the first element of this triad, define competency in the job, and apply the competency model to a typical selection exercise, selecting a Business Development Executive. Chapter 8 will consider the second element in the triad, how to assess candidates for competence, and Chapter 14 will discuss the third element, evaluation of job performance in terms of competence.
How to Define Competency in the Job
Job analysis will produce a job description like the one in Figure 7.3. In this example a training organisation, Newtrain, operating in the HR sector, is looking to recruit a Business Development Executive.
Extracting Competencies from the Job Description Scanning the job description in Figure 7.3 for competencies will produce a list for further analysis, shown in Table 7.1. Identifying competence clusters can reduce and focus this list to six essential or core competencies, shown in Table 7.2. We now have a manageable total of six core competencies. These competencies need to be interpreted internally to take account of company products and culture. For example, with regard to service calls, Newtrain will have established ethical procedures for contacting existing clients so as not to appear too pushy, leading to over-communication and potential annoyance. Job descriptions and person specifications can sometimes prove less useful in planning assessment, for several reasons. • They list every duty—important or unimportant, frequent or infrequent, routinely easy or very difficult—without indicating which is which. • They often lapse into a vague jargon of ‘liaising’, ‘resourcing’, ‘monitoring’ etc., instead of explaining precisely what the job consists of.
• They aim to list anything the person might ever be asked to do, so employees cannot subsequently say 'that's not part of my job'. • Many person specifications waste time saying the applicant must be keen, well-motivated and energetic. Are there many employers who want idle, unmotivated employees? • Some person specifications seem rather obscure to those not in the know, mentioning things like 'hard-edged decision maker', 'charismatic ambassador' or 'inspirational retailer'. How would we set about deciding whether Mr Smith or Ms Jones fits any of these descriptions?
Newtrain Business Development Executive
Title: Business Development Executive
Reporting to: Director of Sales
Context: Newtrain offers public and in-house training, learning and development courses to HR professionals. Turnover in 2004 was £3 million. The post holder will be expected to add £2 million to this figure by April 2006.
Direct reports:
• Business Development Manager (1)
• Key Accounts Manager (1)
• Administrators (2)
Duties:
• To open up and develop new business.
• To maintain and extend where possible Key Account Training, Learning and Development contracts.
Job content:
• Research and contact new business via HR Managers in specified sectors, such as agriculture.
• Identify potential training, learning and development needs in each industry sector.
• Target marketing to each chosen industry sector.
• Establish reference points and contact names with new prospects.
• Contact new prospects to identify training, learning and development needs.
• Manage the Business Manager, set targets, follow up and appraise.
• Manage the Key Accounts Manager, set targets, follow up and appraise.
• Report weekly to Director of Sales.
Budgetary:
• To prepare annual budget for business development.
• To monitor and manage the budget.
Performance standards:
• Ten new businesses to be contacted per month.
• Two visits to new businesses per week.
• Two visits to existing accounts with Key Accounts Manager per week.
• Attendance at one training course per month.
Figure 7.3 Job description for Newtrain Business Development Executive
Table 7.1 Competencies emerging from the job description of Business Development Executive
Key competencies: managing others; opening up accounts; developing business; prospecting; customer contact; servicing client needs; researching new clients; marketing services; selling services; budgeting; business planning.
Source: Data from Sparrow et al. (1982).
Table 7.2 Six key competencies for Newtrain business development post
Business development: researching, planning, prospecting, opening new accounts, contacting prospects, meeting prospects, developing business
Marketing services: targeting sectors, companies and people, identifying products, identifying areas and regions, pricing, producing marketing materials, distributing materials to target sectors
Selling services: creating leads from prospect lists, visiting companies, selling to HR managers
Servicing: following up after training, learning and development events and courses, establishing further needs
Managing others: managing three reports, checking, setting targets, appraising, motivating and leading, managing upwards to sales director
Budgeting: creating business plans, allocating financial priorities, budgeting, monitoring expenses, credit control activities, producing monthly financial reports
Recommendation: If you use job descriptions and person specifications, ask yourself how useful they are in planning selection assessments, and how they could be made more useful.
COMPETENCE ANALYSIS
The traditional job description/person specification system needs to be supplemented by something more detailed and systematic. This can be crucial for selection because:
• if the organisation is not clear what it is looking for in new staff, then it has no idea what assessments to use;
• without competence analysis the organisation may be unable to justify use of selection methods if challenged.
The competence movement began in the 1980s when government policy in education and training evolved a competency-based framework of National Vocational Qualifications (NVQs). Standards to be achieved are set by 'lead bodies', such as the CIPD or the NHS. The National Health Service lists 35 separate main competences for health care assistants, such as 'assist clients to access and use toilet facilities'. The Management Charter Initiative (MCI) has defined detailed competence standards for managers. Figure 7.4 presents an extract.
In Figure 7.4 the overall competence 'identify training and development needs' is divided into two: 'identify organisational requirements' and 'identify the learning needs of individuals and groups', which are further subdivided into five more units of competence. These five will in turn be split into individual standards of competence required by the organisation in question. In this way a national framework can be adapted by an organisation to suit its own business objectives. Competencies are generated, and standards are derived from those competencies, setting out the workplace behaviours necessary to meet performance requirements. Typical definitions of competence include:
• An observable skill or ability to complete a managerial task successfully.
• An underlying characteristic of a person which results in effective and/or superior performance in a job.
Identify training and development (T&D) needs
• Identify organisational requirements for T&D
  - Agree and obtain support for contribution of T&D to organisational strategy
  - Identify organisational T&D needs
  - Agree priorities for developing T&D function
• Identify learning needs of individuals and groups
  - Identify the current competence of individuals
  - Agree individuals' and groups' priorities for learning
Figure 7.4 Extract from Management Charter Initiative competences for management
In the selection situation competences could include: • knowledge or specific skills employees will acquire as part of their training but not necessarily possess beforehand (e.g. knowing how to change a car tyre and balance the wheels); • generalized skills required in the organisation; for example, school teachers need competences of communicating well with children, people skills, emotional intelligence; • underlying abilities that make it easier for a person to acquire more specific skills (e.g. openness to change, listening ability or flexibility); • personality characteristics (patience, tolerance); • emotional intelligence (self-regard, emotional self-awareness, assertiveness, empathy, reality testing, interpersonal relationships).
Recommendation: If you use competencies, ask yourself what sort of competences they are, in terms of the list above.
Competence or Competency?
In the UK people talk of competences, meaning achieving an acceptable standard for which an NVQ might be awarded; the model covers all jobs. In the USA people talk of competencies, meaning achieving a very high standard, and are more likely to be talking about managers and professionals.
COLLECTING INFORMATION FOR COMPETENCE ANALYSIS
Competence analysis can use a wide range of inputs, listed in Table 7.3.
Recommendation: If you have a competency framework, do you know how it was drawn up? If not, make enquiries.
Critical Incident Technique Critical incident technique (CIT) is the oldest job analysis technique, devised to analyse failure in military pilot training during World War II. Simply asking the trainers to give reasons for failure got answers that were (a) too vague to be helpful—‘poor judgement’—or (b) completely circular and uninformative—‘lack of inherent flying ability’. Researchers looked for a better way of identifying flying’s critical requirements. They came up with the idea of collecting accounts of critical incidents which caused recruits to be rejected: what led up to the incident, what the recruit did, what
Table 7.3 Ways of collecting information for competence analysis
Observation: watching the job being performed
Video recording: accurate but time consuming
Structured questionnaires: completed by employers, supervisors or workers; list the job's activities or tasks
Diaries or logs: listing the day's activities; suitable for jobs with very little structure
Interviews: open-ended; can take account of things that observation misses: plans, intentions, meaning and satisfaction
Group interviews: can collect a lot of information in a short time
Participation: taking part in the work, or spending time alongside someone doing it
Written records: sales, accidents etc.
Critical incident technique: see p. 145
Repertory grid: see p. 146
the consequences were, and whether the recruit was responsible for them. Typical incidents included:
• trying to land on the wrong runway;
• coming in to land too high;
• neglecting to lower the undercarriage;
• coming in to land too fast.
In most versions of CIT, examples of particularly good, as well as notably poor, performance are collected. Hundreds, or even thousands, of incidents are collected, then sorted by similarity to identify the main themes in effective performance—in other words, the competences needed for the job. CIT is the basis of behaviourally anchored rating scales (BARS) and behavioural observation scales (BOS), both used in performance appraisal (Chapter 14). CIT is also the first stage in devising some structured interviewing systems (Chapter 10). CIT is open-ended and flexible but can be time consuming.
Repertory Grid Technique
Repertory grid technique (RGT) is based on George Kelly's personal construct psychology. Kelly (1955) argued that we interpret our world by constructing alternative ways of looking at it. These 'ways' become constructs, bipolar in nature, e.g. 'competent–incompetent'. Kelly suggested that a good way to elicit these constructs, and so see how people interpret their world, is to ask them to compare people (or jobs and component parts of jobs).
RGT is similar to CIT, in being open-ended and versatile, but can help structure the information, and delve deeper into its meaning. It proceeds in a series of linked stages. The informant is asked to:
• think of a good, an average and a poor worker (actual people, but not named);
• say which two differ from the third;
• say how the two differ from the third;
• give an example of the difference.
Two detailed examples are described in full in the case study at the end of the chapter.
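Once a grid has been completed, a selector may want to check whether two elicited constructs really are distinct. Below is a minimal Python sketch of one such check, simple percentage agreement across the grid's elements; the tick patterns are invented for illustration, and real grid analysis can be considerably more elaborate.

def construct_overlap(ticks_a, ticks_b):
    """Fraction of grid elements on which two constructs agree (both ticked or both
    not ticked). A rough check on whether two constructs duplicate each other."""
    matches = sum(a == b for a, b in zip(ticks_a, ticks_b))
    return matches / len(ticks_a)

# Nine role figures: good/average/poor worker, supervisor and manager (invented ticks).
committed = [True, False, False, True, True, False, True, False, True]
fair      = [True, True, False, True, False, False, True, True, False]
print(round(construct_overlap(committed, fair), 2))  # 0.56: largely independent constructs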
An Example of Large-Scale CA
An American power generating plant employs nearly 1900 individuals in 814 different jobs. All employees completed a 754-item questionnaire indicating how often they performed each of nearly 600 tasks. This produced a vast data set, with 754 × 1900 cells—far too large to understand just by looking at it. Two types of statistical analysis were used to extract information. The first (called factor analysis) looks for themes in the varied work of the 1900 employees. It identified 60 themes. For example, the profile for the company's Administrator of Equal Employment Opportunity showed that his/her work had six themes:
• personnel administration;
• legal, commissions, agencies and hearings;
• staff management;
• training;
• managerial supervision and decision making;
• non-line management.
Similar profiles were drawn up for every employee. Knowing that a particular job has six main themes gives the selector a much clearer idea how to recruit and select for it. If HR could find a good test for each of the 60 factors, they would have a perfect all-purpose test battery for every job in the plant. Another form of statistical analysis (called cluster analysis) was used to sort employees into groups whose jobs were similar. One cluster comprised:
• Rate Analyst III
• Statistical Assistant
• Research Assistant
• Affirmative Action Staff Assistant
• Coordinator, Distribution Service
• Environmental Coordinator
• Statistician
• Power Production Statistician.
These eight jobs had quite a lot in common but they all came from different departments, so their similarity might easily have been overlooked. Knowing which posts have a lot in common helps plan training, staff succession, cover for illness etc.
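For readers who want to see what this two-step analysis might look like in code, here is a minimal Python sketch using randomly generated stand-in data and standard scikit-learn routines (FactorAnalysis and AgglomerativeClustering). The matrix sizes, number of themes and number of clusters are illustrative assumptions, not the figures from the power plant study.

import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import AgglomerativeClustering

# Stand-in data: 200 employees x 50 task-frequency ratings (the real study had
# roughly 1,900 employees and nearly 600 tasks).
rng = np.random.default_rng(0)
task_frequencies = rng.integers(0, 5, size=(200, 50)).astype(float)

# Step 1, 'factor analysis': extract broad themes running through the tasks.
themes = FactorAnalysis(n_components=6, random_state=0).fit_transform(task_frequencies)

# Step 2, 'cluster analysis': group employees whose theme profiles are similar,
# i.e. whose jobs have a lot in common.
job_families = AgglomerativeClustering(n_clusters=8).fit_predict(themes)

print(themes.shape)               # (200, 6): six theme scores per employee
print(np.bincount(job_families))  # how many employees fall into each job family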
Recommendation: If you are devising a competency framework for a large and complex organisation, consider using statistical analysis.
Ready-Made Competence Frameworks
As is usually the case, a ready-made product is quicker and cheaper to use than a custom-made approach such as CIT or RGT, but may not fit so well.
Dictionary of Occupational Titles/O*NET
The (US) Dictionary of Occupational Titles (DOT) provides detailed descriptions of thousands of jobs, e.g.:
'Collects, interprets, and applies scientific data to human and animal behavior and mental processes, formulates hypotheses and experimental designs, analyzes results using statistics; writes papers describing research; provides therapy and counseling for groups or individuals. Investigates processes of learning and growth, and human interrelationships. Applies psychological techniques to personnel administration and management. May teach college courses.'
DOT’s account of a psychologist’s work is actually a composite of several types of psychologist: academic, clinical and occupational; few psychologists do everything listed in DOT’s description. DOT is being developed into a database called O*NET.
Fleishman Job Analysis Survey
The Fleishman Job Analysis Survey lists 52 abilities, e.g. oral comprehension, each rated on a seven-point scale, with explanations of how each differs from related but separate abilities, e.g. written comprehension and oral expression. Fleishman's system includes the Physical Abilities Analysis, described in greater detail in Chapter 11.
Personality-Related Position Requirement Form (PPRF)
Traditional competence analysis systems tended to emphasise abilities needed for the job; with the growing popularity of personality testing in selection, new systems have appeared that also assess personality requirements. The PPRF contains items in the format:
Effective performance in this position requires the person to:
• take control in group situations (not required / helpful / essential)
PPRF confirms that leadership is needed in management but not in cashiers, while a friendly disposition is needed in sales assistants but not in caretakers. Conscientiousness, by contrast, seems needed in every job. These findings may seem ‘obvious’ but PPRF gives selectors the information to justify assessing aspects of personality if challenged.
Job Components Inventory
The Job Components Inventory was developed in the UK for jobs requiring limited skill, and has five principal sections: tools and equipment, perceptual and physical requirements, maths, communication, and decision making and responsibility.
Henley Managerial Competences
Victor Dulewicz at Henley Management College has drawn up a list of 40 competencies for managers in general, including development of subordinates, extra-organisational awareness and business sense. The 40 managerial competences group into 11 'supra-competences'.
Position Analysis Questionnaire (PAQ)
PAQ is probably the most widely used job analysis technique in the USA. PAQ is completed by a trained analyst who collects information from workers and supervisors. However, the analyst does not simply record what the informant says but forms his/her own judgement about the job. The information PAQ collects covers nearly 200 elements, divided into six main areas (Table 7.4). Elements are rated for importance to the job, and for time spent doing each, amount of training required etc. The completed PAQ is analysed by comparing it with a very large American database. The analysis proceeds by a series of linked stages:
1 Profile of 32 job elements, which underlie all forms of work, for example watching things from a distance, being aware of bodily movement and balance, making decisions, dealing with the public.
2 Profile of 76 attributes the person needs to perform the job elements. Aptitude attributes include selective attention—being able to perform a task in the presence of distracting stimuli. Temperament attributes include empathy and influencing people. The attribute profile provides a detailed person specification.
Table 7.4 PAQ's six main divisions and illustrative job elements
1. Information input: use of written materials; near visual differentiation (i.e. good eyesight, at short range)
2. Mental processes: level of reasoning in problem solving; coding/decoding
3. Work output: use of keyboard devices; assembling/disassembling
4. Relationships with other people: instructing; contacts with public or customers
5. Job context: high temperature; interpersonal conflict
6. Other: specified work space; amount of job structure
3 Recommended psychological tests. The attribute profile leads logically to suggestions for tests to assess the attributes. If the job needs manual dexterity, the PAQ output suggests using the General Aptitude Test Battery's test of gross dexterity.
4 Comparable jobs. The job element profile is compared with PAQ's extensive database to identify jobs that require similar competences. American researchers analysed the role of 'home-maker' or housewife with PAQ, and discovered the most similar jobs in PAQ's database were all troubleshooting, emergency-handling jobs like police officer, airport maintenance chief or fire fighter.
5 Remuneration. The PAQ database was used to calculate the average salary paid for the 10 jobs most like housewife, and found it was $740 a month, at 1968 prices.
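Stage 3 is essentially a lookup from the attribute profile to a test battery. A minimal Python sketch of that idea follows; the attribute names echo the discussion above, but the mapping, the importance cut-off and the suggested tests are our own assumptions, not PAQ's actual output tables.

# Hypothetical mapping from PAQ-style attributes to possible tests.
SUGGESTED_TESTS = {
    "manual dexterity": "General Aptitude Test Battery gross dexterity test",
    "selective attention": "clerical checking test with distraction",
    "spatial ability": "General Aptitude Test Battery spatial aptitude test",
}

def recommend_tests(attribute_profile, cutoff=3.5):
    """Suggest a test for every attribute rated as important enough for the job."""
    return [SUGGESTED_TESTS[attribute]
            for attribute, importance in attribute_profile.items()
            if importance >= cutoff and attribute in SUGGESTED_TESTS]

profile = {"manual dexterity": 4.2, "selective attention": 2.1, "spatial ability": 3.8}
print(recommend_tests(profile))
# ['General Aptitude Test Battery gross dexterity test',
#  'General Aptitude Test Battery spatial aptitude test']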
All-Purpose Management Competences
Most management-level positions contain fairly similar competencies, so a generic framework will probably match most management positions reasonably well. Good time management, for instance, is likely to be needed in most management roles. To construct your own list of competencies, we recommend starting from a generic list and then adapting it to suit the required post. A generic list will probably look something like this:
• time management, personal organisation;
• communication, verbal, written, with all levels of customers and employees;
• decision making, problem solving and finding solutions;
• writing reports, both for internal purposes and external to customers and clients;
• making presentations, to internal and external audiences;
• motivating and influencing others;
• information technology skills;
• budgeting and financial control;
• conducting meetings;
• planning, setting goals, for self and others.
USES OF COMPETENCE ANALYSIS
Competence analysis has many uses in assessment in particular, and HR work in general. Uses directly connected with selection include:
• Writing job descriptions which help recruit the right applicants, and help interviewers ask the right questions.
• Select or train? Some competences can be trained for in most people; others are more likely to need to be selected for.
• Choosing selection tests. Knowing the job's competences makes it easy to choose the right tests.
• Defending selection tests. Why is the employer assessing spatial ability? Because the job analysis has shown that it is essential. Why have you assessed anxiety? Because the competence analysis shows the job creates more stress than some people can cope with. Competence analysis is legally required by the US Equal Employment Opportunities Commission if the employer wants to use selection methods that create adverse impact.
• Transferring selection tests. Jobs can be grouped into families, for which the same selection procedure can be used.
Competence analysis allows more elaborate selection methods to be devised:
• Structured interviews. Structured interviews (described in Chapter 10) are much more accurate than conventional interviews, but require a detailed competence analysis.
• Content-valid tests. Careful competence analysis allows selectors to write a test whose content so closely matches the content of the job that it is content valid (Chapter 12), which means it can be used without further demonstration of its validity.
Besides helping improve selection, competence analyses are useful in other areas of human resource management:
• Vocational guidance. Someone interested in job X, where there are presently no vacancies, can be recommended to try jobs Y and Z, which need similar competences.
• Learning and development programmes. Competence analysis identifies jobs with a lot in common, which enables employers to rationalise training provision.
• Succession planning. Competence analysis can be used to plan promotions, and to find avenues to promote under-represented minorities.
• Plan performance appraisal. Competence analysis identifies the dimensions to be rated in performance appraisal.
• Pay reviews and grading. Discussions or disputes about pay can be informed by information about pay for other jobs with similar competence requirements.
• Future direction. Carrying out a competence analysis focuses the organisation's mind on what it is doing, how and why.
Recommendation: List the uses you make of your competence framework.
USING CA IN SELECTION
Sparrow et al. (1982) used PAQ to identify seven competences in the job of plastics injection-moulding setter in a British plant (Table 7.5), and then found a suitable test for each attribute: Raven's Standard Progressive Matrices for intelligence, an optician's eye chart for visual acuity etc. This may seem 'obvious', but it is surprising how many employers even now do not do this, which places them in a very dangerous position if someone complains about their selection methods. If the employer cannot say why they are using Raven's Progressive Matrices, they will find it difficult to justify themselves if this selection method creates adverse impact on a protected minority.
More ambitiously, job analysis systems can be linked to aptitude batteries, such as the General Aptitude Test Battery (GATB) or the Differential Aptitude Tests (DAT), which measure six or eight separate aptitudes. Comparing PAQ competence profiles with aptitude profiles addresses two questions:
1 Does the competence profile for a job correlate with the aptitude profile for the same job? If PAQ says the job needs spatial ability, do people doing the job tend to have high spatial ability scores on GATB?
2 Does the competence profile for a job correlate with aptitude profile validity for the same job? If PAQ says the job needs spatial ability, do people with high spatial ability scores on GATB perform the job better?
Table 7.5 Competence analysis by Position Analysis Questionnaire for the job of plastics injection moulding setter (competence: test)
Long-term memory: Wechsler Memory Scale
Intelligence: Standard Progressive Matrices
Short-term memory: Wechsler Memory Scale
Near visual acuity: eye chart at 30 cm
Perceptual speed: Thurstone Perceptual Speed Test
Convergent thinking: Standard Progressive Matrices
Mechanical ability: Birkbeck Mechanical Comprehension Test
The answer to both questions is yes. This confirms that each job needs a particular set of competences that can be identified by PAQ, and then assessed by a multiple aptitude battery like DAT or GATB. (Some American psychologists would question this, and have argued—see Chapter 3—that general mental ability predicts work performance for all jobs, and that differences in the profile of abilities add little or no predictive power.)
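The first question amounts to correlating two profiles over the same set of aptitudes. The Python sketch below shows the calculation with invented numbers; the aptitude names, ratings and scores are illustrative assumptions, not data from PAQ or GATB (statistics.correlation requires Python 3.10 or later).

from statistics import correlation  # Python 3.10+

# Hypothetical profiles for one job over six aptitudes.
paq_importance = {"verbal": 3.1, "numerical": 2.4, "spatial": 4.5,
                  "perceptual speed": 3.8, "dexterity": 4.1, "memory": 2.9}
gatb_mean_scores = {"verbal": 101, "numerical": 98, "spatial": 112,
                    "perceptual speed": 107, "dexterity": 110, "memory": 99}

aptitudes = list(paq_importance)
r = correlation([paq_importance[a] for a in aptitudes],
                [gatb_mean_scores[a] for a in aptitudes])
print(round(r, 2))  # a strongly positive r: the abilities PAQ says the job needs
                    # are the ones its incumbents actually score highly on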
Improving Selection Accuracy
From the selector's viewpoint, competence analysis is worth using if it results in more accurate selection decisions. Three summarising reviews have shown that competence analysis does result in more accurate selection when using personality questionnaires (Tett et al., 1991), structured interviews (Wiesner and Cronshaw, 1988) and situational judgement tests (McDaniel et al., 2001).
LIMITATIONS OF CA FRAMEWORKS
Clive Fletcher (1997) said that competence 'must stand a good chance of winning any competition for the most over-worked concept in HR management'. We can list some criticisms of the competency model:
• Some competence systems, notably the NVQs, are primarily geared to training and to awarding qualifications, not to selection.
• In many cases, assessment appears to be more important than the learning undertaken by people. This is a potential trap, and quite understandable, as it is always more difficult to facilitate learning than to assess it. Organisations therefore need to find time for assessment of competence as well as for training for competence.
• Lists of competences are often very long, giving rise to the suspicion that statistical analysis would show that many are highly correlated. This could trap us into assessing more competences than we really need to, and so making our selection assessment longer and more expensive than necessary.
• Some systems suffer from bureaucracy, wordiness and over-complication. The language surrounding the competence movement has not helped in encouraging development. Also, the number of different standards of performance that have to be demonstrated is thought by many to be over-complicated. For example, the British Psychological Society Level A Certificate of Competence in Occupational Testing requires competence in no fewer than 98 standards.
• Variance in quality of assessment: although assessor training is usually rigorous and thorough, all assessment is subjective and open to human error.
• The tension between education (teachers and educators based in Higher or Further Education) and training (trainers and instructors based in industry) early on set up a certain amount of resistance in academic establishments. Such resistance is now breaking down and acceptance of the competence model is much more widespread.
Level of Descriptions
From the selector's point of view, competences are often a very mixed bag:
• specific skills or knowledge that workers will acquire as part of their training but would not possess beforehand, e.g. knowing how to serve meals and drinks on an aircraft;
• more generalised skills or knowledge that organisations might wish to select for, e.g. communicating well in writing;
• aptitudes that would make it easier for a person to acquire more specific competences, e.g. flexibility or ability to learn quickly;
• personality characteristics, e.g. resilience, tolerance.
Of Dulewicz's list of 40 managerial competences, no fewer than 10 look suspiciously like personality traits.
A Hammer in Search of a Nail Sometimes competence analysis identifies skills or attributes for which no test is readily available. ‘Awareness of others’ is a necessary competence in many jobs, but is difficult to assess successfully (Chapter 3). Some selectors tend to reverse their perspective in an unhelpful way, and do not ask ‘What does the job need and how can we assess it?’ but ‘What assessments have we got and how can we fit them to the job?’.
Recommendation: Draw up a list of any competences you feel you have difficulty assessing well.
Reliability of Competency Models
Reliability means that the chosen competencies, and the methods of assessment, will remain constant, and will continue to select people who are indeed competent. But the organisation must allow for change, and not keep the same competence framework for too long.
Validity of a Competency Model
To establish whether a competence model is correct is difficult. The organisation should set up an evaluation process that follows newly appointed personnel for several years in order to monitor performance against the competence criteria. Here are some questions we can ask:
• Have the competencies been carefully derived, to include the whole range of competencies actually needed on the job?
• Will the selection assessment accurately assess those competencies?
• Will the chosen competencies ensure effective performance behaviours on the job?
• Will the people selected do the job well, over time?
THE FUTURE OF CA
Is Competence Analysis Essential?
Pearlman and colleagues (1980) suggest that analysis of jobs might not need to be very detailed, for selection purposes. They argue that deciding a job is 'clerical' is all the competence analysis needed to choose the right tests to predict productivity. (We should bear in mind that Pearlman's group thinks that all jobs in the USA could be selected for by a combination of general mental ability and dexterity, which clearly does not leave much scope for detailed competence analysis in guiding choice of test.) In the UK Mike Smith (1994) proposes that mental ability may be a universal of selection, something that's needed, or useful, in every job. Smith suggests two other universals—energy and work importance. If Smith is right, every selection should automatically include assessments of mental ability, energy and work importance. Even if Pearlman's conclusions are correct, it might be difficult to act on them at present. Deciding a job is clerical, and using a clerical test for selection, may satisfy common sense, but it may not satisfy equal opportunities agencies, if there is a problem with the composition of the work force. The full detail and complexity of competence analysis may be needed to prove a clerical job really is clerical.
Is Competence Analysis Becoming Obsolete? In 1981 Latham and Wexley described a long rating schedule for janitors in which item 132 read:
Places a deodorant block in urinal    almost never 1 2 3 4 5 almost always
The rest of the janitor’s day was documented in similarly exhaustive detail. Management and HR have begun to question this type of approach. Rapid change implies less need for task-specific skills and more need for general abilities, to adapt, solve problems, define one’s own direction and work in teams. Competence analysis may be backward looking and may encourage cloning, whereas organisations need to be forward looking.
Current management trends also suggest a shift to assessing broad personal characteristics, rather than long lists of specific skills or competences. • We assume the job has a separate existence, apart from the employee who holds it, whereas organisations are being ‘de-jobbed’, meaning employees work on a fluid set of activities that change rapidly, so no fixed job descriptions exist. • Total Quality Management (TQM) emphasises customer service skills, self-direction, self-development and team development skills. • The quest for High Performance Organisations—a buzz phrase in the USA in the 1990s—lists the qualities employees need: teamwork, customer service and leadership. • Wood and Payne (1998) suggest the key dimensions selectors will be assessing in future will be integrity, learning ability, creativity and resilience (because more and more employees are claiming compensation for the effect of work stress).
Recommendation: Ask yourself if your competence system is backward looking, and whether it will help you cope with rapid change or not.
CASE STUDY Establishing a Competency Framework Repertory Grid Technique (RGT) is open-ended and versatile. We will describe two examples, one for analysing the work of an ambulance crew, and the other for a recruitment consultant. In the first case, the ambulance crew, the elements are people: good, average and poor workers. In the second case, the recruitment consultant, the elements are parts of the job rather than people.
Case 1: Ambulance Crew In stage 1, the informant is asked to think of three actual ambulance workers, one good, one average and one poor (but not to name them), then three supervisors and three managers. These nine roles are at the top of the grid in Figure 7.5. In stage 2, the informant is asked to: • take the triad good ambulance worker, average ambulance worker and poor ambulance worker; • say which two differ from the third; • put a name to the difference; • give an example.
Figure 7.5 RGT used in job analysis. The elements are various role figures, e.g. good ambulance supervisor: the nine columns of the grid are good, average and poor ambulance worker, ambulance supervisor and ambulance manager, and the rows are the elicited constructs committed–lacks commitment, fair–has favourites and incisive–slow to see problems. [ ] indicates which three elements are used to start each set. ✓ indicates which roles possess the trait
Our informant says: • good differs from average and poor; • good is committed; average and poor show lack of commitment; • example of ‘being committed’ is being willing to stay on after the end of her shift if needed. We have now obtained our first construct: committed–lacks commitment. Now the informant is asked to apply this construct to the other people in the grid, and tick all those that are committed. The ‘triad’ stage—good/average/bad—can be repeated a number of times. In the second row in Figure 7.5, we see that a good ambulance supervisor can be distinguished from average and poor by ‘fairness’. The opposite of fairness is ‘has favourites’. The informant is asked to apply each distinction to every person in the grid, so it is possible to calculate the overlap of the constructs ‘commitment’, ‘fairness’ and ‘incisiveness’. In the example in Figure 7.5 the three concepts seem independent, i.e. potentially separate competences.
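The overlap between constructs can be checked with a simple count. Below is a minimal sketch, in Python, of one way to do it; the tick values are invented for illustration, not taken from Figure 7.5, and a 1 simply means the informant ticked that role figure for that construct.

# Minimal sketch: measuring overlap between repertory grid constructs.
# The grid data below are invented for illustration only.
grid = {
    # one value per role figure: 1 = ticked as possessing the construct, 0 = not
    "committed": [1, 0, 0, 1, 1, 0, 1, 0, 0],
    "fair":      [1, 0, 0, 1, 0, 0, 1, 1, 0],
    "incisive":  [0, 1, 0, 1, 0, 0, 1, 0, 1],
}

def overlap(a, b):
    """Proportion of role figures on which two constructs agree."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

names = list(grid)
for i, first in enumerate(names):
    for second in names[i + 1:]:
        print(f"{first} vs {second}: overlap = {overlap(grid[first], grid[second]):.2f}")

A very high overlap between two constructs suggests they may be describing the same competence and could be merged; low overlap supports treating them as separate competences, as in the example above.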
Case 2: Recruitment Consultant In this case, the elements are component parts of jobs. We have a list of 11 main activities in the recruitment consultant job:
• planning the day;
• research for clients;
• giving advice;
• contacting potential clients;
• writing reports;
• routine administration;
• writing and editing CVs for clients;
• psychometric testing of clients;
• coaching clients;
• meeting prospective employers;
• selling services.
Several people who know the recruitment consultant job well are interviewed to elicit their constructs for the job. The informant is asked to consider three parts of the recruitment consultant’s job—giving advice, contacting potential clients and writing reports—and to say which two differ from the third, and how. A typical response might be: ‘Giving advice and contacting potential clients means working with others, whereas writing reports means working alone’. We have identified the personal construct: working with others–working alone. The emergent pole of the construct is working with others and the contrast pole is working alone. We then take a different set of three job activities, and ask the same question again. We get the reply ‘Planning the day and routine administration are concerned with “things and detail”, whereas research for clients is more to do with people’. The second construct then becomes: things and detail–working with people. This response is useful as it clearly indicates that there are two competencies involved. The emergent pole could be summed up as a competence involving administration and the contrast pole a competence around people skills. The construct elicitation process continues when the participant is asked to compare the next triad of job elements. The process continues until all elements have been compared with each other. This process can be simply organised by putting all the elements onto cards and presenting them, three at a time, to the participant. After a little while you can become quite skilled in operating this elicitation process. The important point to remember is that you, the interviewer, must not influence the participant’s responses in any way. If a participant is stuck, which is quite common, or begins to repeat construct ideas, then say quite simply, ‘Can you think of another way they are different?’. The process needs around five or ten people who know the job function well and what competencies are required. Once, say, ten sets of personal constructs have been elicited, you will see that a pattern of what might be called core constructs begins to emerge. These become the basis of the competence framework defining the job of recruitment consultant.
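Listing every possible triad by hand is tedious; a few lines of code can do it. The sketch below, in Python, simply enumerates all combinations of three elements from the activity list above; in practice you would usually present only a subset of these triads.

# Minimal sketch: generating triads of job elements for construct elicitation.
from itertools import combinations

elements = [
    "planning the day", "research for clients", "giving advice",
    "contacting potential clients", "writing reports",
    "routine administration", "writing and editing CVs for clients",
    "psychometric testing of clients", "coaching clients",
    "meeting prospective employers", "selling services",
]

triads = list(combinations(elements, 3))
print(f"{len(triads)} possible triads")   # 165 triads for 11 elements
for triad in triads[:3]:                  # show the first few
    print(" / ".join(triad))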
CHAPTER 8
Assessment and Development Centres
OVERVIEW • Assessment centres (ACs) are tailor made to assess a range of competences by a range of methods. • ACs have an underlying conceptual matrix linking competences to methods. • We outline the process of planning an AC. • ACs have good validity, although validity may decay with careless usage. • AC validity may have an element of circularity, because validation studies tend to compare what management thinks of candidates during the AC with what management thinks of them subsequently ‘on the job’. • ACs may assess global proficiency in a series of exercises rather than separate competences exhibited in a number of exercises. • Research indicates ACs correlate quite strongly with mental ability. • ACs create little adverse impact and attract fewer legal problems than many other selection methods.
INTRODUCTION The assessment centre was invented during World War II, more or less simultaneously in the UK and the USA. The British army had traditionally selected officers by Commanding Officer’s recommendation and interview, but by 1942 this system was getting overwhelmed by increasing numbers, and candidates from unfamiliar backgrounds. Psychologists devised the War Office Selection Board (WOSB), which contained a range of assessments: • leaderless group discussions; • command tasks, such as building a bridge across a wide gap with short lengths of timber;
• ‘lecturettes’; • interviews; • mental ability tests. The British Civil Service adopted the army model for selecting its most senior ranks, but without the ‘outdoor’ elements of WOSB. The Civil Service Selection Board (CSSB) was a two-day AC, whose elements included group discussions, written exercises, interviews and mental ability tests. In the USA, psychologists were advising the Office of Strategic Services (OSS), forerunner of the CIA, how to select spies. The team identified nine competences in effective spying, including practical intelligence, emotional stability and maintenance of cover. Maintenance of cover required each applicant to pretend to be someone else throughout the assessment; the OSS programme must be unique in regarding systematic lying as a virtue. The OSS selection programme included: • a command task, like WOSB’s but more aggressive: planning how to assassinate the mayor of the nearby village; • extracting and evaluating information from a set of documents; • memorising a detailed map; • a role play where the candidate tries to explain why he has been found in government offices late at night searching secret files, and is aggressively cross-examined by a trial lawyer and police detective. The AC as we know it today took shape in the early 1950s. AT&T’s Management Progress Study (MPS) was intended to assess the long-term potential of existing employees. It included a business game, leaderless group discussion, in-basket test, two-hour interview, autobiographical essay and personal history questionnaire, projective tests, personality inventories and a high-level mental ability test. The MPS assessed over 400 candidates, who were followed up five to seven years later. ACs can be used for selection, promotion or development (assessing the individual’s profile of strengths and weaknesses and planning what further training he/she needs). ACs are sometimes described as ‘extended interviews’, which must prove something of a shock to candidates who find themselves confronted with a whole day of tests and group exercises.
Recommendation: If you are inviting applicants to an assessment centre, tell them how long it will take, and what they will be asked to do.
THE LOGIC OF THE AC ACs work on the principle of multi-trait multi-method assessment. Any single assessment method may give misleading results—some people interview well, while others are good at tests—whereas a person who shows persuasiveness in both
interview and inventory is more likely really to be persuasive. The key feature of the true AC is the competence × assessment method matrix (Figure 8.1). The AC planners first decide what competences to assess, which in this example are:
• Influence: ability to influence and persuade others.
• Analysis: ability to analyse information quickly and accurately.
• Empathy: ability to understand others’ thoughts and feelings.
• Innovation: ability to generate new approaches and solutions.
The planners select at least two qualitatively different ways to assess each competence. In Figure 8.1, influence is assessed by group exercises, role play and interview, while analytical ability is assessed by in-tray and mental ability test. Two different group exercises are included: • Group exercise ‘Staff’ uses the statements about staff relations listed in Figure 8.2, and is intended to assess in part how well candidates can see the perspectives of different groups of people. • Group exercise ‘Future’ asks applicants to describe the future of human resource management, under two very different scenarios: full employment in an economic boom and high unemployment in deep recession. ‘Future’ is geared to assessing innovation. An assessment centre that does not have a matrix plan like that in Figure 8.1 is not a real AC, just a superstitious imitation of one. In the past some people ‘planned’ ACs using tests and exercises—begged, borrowed or stolen—included because they were available, not because they were accurate measures of important competences.
Figure 8.1 The competence × predictor conceptual matrix underlying an assessment centre. The rows are the predictors (group exercise ‘Staff’, group exercise ‘Future’, in-tray, role play (Sales), innovation exercise, interview and mental ability test); the columns are the competences (influence, analysis, empathy and innovation); × indicates that combination is not assessed
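A matrix plan of this kind is easy to keep honest with a small check. The sketch below, in Python, represents a competence × predictor matrix as a table and confirms that every competence is covered by at least two assessments; the mapping shown is illustrative, based loosely on the plan described for Figure 8.1, not an exact copy of it.

# Minimal sketch: checking a competence x predictor matrix for coverage.
# The mapping below is illustrative only.
from collections import Counter

matrix = {
    "group exercise 'Staff'":  ["influence", "empathy"],
    "group exercise 'Future'": ["influence", "innovation"],
    "in-tray":                 ["analysis"],
    "role play (Sales)":       ["influence", "empathy"],
    "innovation exercise":     ["innovation"],
    "interview":               ["influence"],
    "mental ability test":     ["analysis"],
}

coverage = Counter(c for competences in matrix.values() for c in competences)
for competence in ("influence", "analysis", "empathy", "innovation"):
    n = coverage[competence]
    flag = "OK" if n >= 2 else "NEEDS ANOTHER ASSESSMENT"
    print(f"{competence}: assessed {n} times - {flag}")

Running a check like this whenever the design changes helps ensure that no competence quietly ends up assessed only once.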
Recommendation: Always have an explicit conceptual plan for your assessment centre that links competences to exercises.
COMPONENTS OF THE AC An AC includes whatever exercises are needed to assess each competence twice. Sometimes the exercises can be taken off the shelf; sometimes they are devised specially. ACs use group, individual and written exercises. Group exercises include: • Leaderless group discussions, where applicants achieve consensus on an issue, without voting or appointing a chair. • Revealed difference technique discussions, where applicants first record their own priorities for e.g. improving staff morale, then engage in discussion to agree the group’s priorities. Figure 8.2 shows part of a revealed difference technique used by the authors. • Assigned role exercises, in which each applicant competes for a share of a budget, or pushes his/her candidate for a job. • Command exercises, often used in military ACs, in which applicants lead a group solving a practical problem. In the OSS ‘Buster’ and ‘Kippy’ task the applicant tries to erect a prefabricated structure, using two ‘assistants’, playing contrasting roles: one sluggish and incompetent, the other aggressive and critical. • Business simulations, usually computer based, in which decisions must be made rapidly, with incomplete information, under constantly changing conditions.
Group Exercise ‘Staff Relations’
Place the ten statements below in rank order of importance, for achieving a smooth-running organisation. Write 1 against the statement you think most important, 2 against the next most important, and so on, entering the order of importance in the brackets.
A. A well-run company has no need of trades unions. [ ]
B. The manager’s job is to manage. [ ]
C. Many employees are lazy and will only work if closely supervised. [ ]
D. An organisation needs to give its employees a clear mission to follow. [ ]
...
J. Employees need the protection of a strong union. [ ]
Figure 8.2 (Part of) a revealed difference technique group exercise that uses statements about staff–management relations. After the applicants have completed their individual rank orders, they are asked to achieve a group consensus on the order of importance
• Team exercises, in which half the group collectively advocates one side of a case, and the other half takes the opposing viewpoint.
Individual Exercises Individual exercises divide into: • Role play, where the applicant handles a dissatisfied customer or an employee with a grievance. • Sales presentation, in which the applicant tries to sell goods or services to an assessor briefed to be challenging, sceptical or unsure the product is useful. • Presentation, where the candidate makes a short presentation. Sometimes the topic is assigned well in advance, sometimes at very short notice. • Interview. Some ACs contain one or more interviews.
Written Exercises Written exercises divide into: • Biographies, in which applicants outline their motivation, or explain major life decisions. • Psychological tests of mental ability or personality. • In-basket (or in-tray) exercise, in which the candidate deals with a set of letters, memos etc. • Case studies, designed to assess ability to analyse information, understand staff problems, or to present a balanced and persuasive case. The most recent survey of US practice (Spychalski et al., 1997) finds:
• leaderless group discussions and in-tray exercises used in most ACs;
• presentations and interviews used in about half;
• tests of skill or ability used in one in three;
• peer assessments used in only one in five.
Another survey finds that ACs try on average to assess 10 or 11 dimensions— probably too many for the assessors to cope with (Woehr and Arthur, 2003). Most ACs use line or general managers as assessors (rather than HR specialists or psychologists). For selection and promotion ACs, managers should have a clearer, first-hand view of what the organisation needs. In development ACs, line managers’ involvement is often vital, to ensure that the training and development recommended by the AC actually happens.
Recommendation: Ensure you have a pool of suitable line managers who are willing to act as AC assessors, and who have been trained for it.
ASSESSORS’ CONFERENCE The final stage of the AC is the assessors’ conference, when all the information collected about each applicant is collated. The assessors’ conference can take a third of the total time of the AC, for example the whole of the third day in a three-day AC. At the conference, the assessors:
• resolve disagreements in ratings of group exercises or written work;
• review all the ratings in the matrix;
• agree a final set of ratings for each candidate;
• decide who gets the job, in a selection AC;
• decide who gives feedback to candidates.
Figure 8.3 shows a typical summary sheet used during an assessors’ conference. The assessors first agree on a rating for each exercise × competence pairing, e.g. influence × group exercise ‘Staff’. This is usually done by discussion and comparison of evidence, and is usually recorded as a whole number. They then discuss all the AC elements assessing e.g. influence, and achieve an overall rating for that competence. Again, this is usually a whole number, and should not be arrived at by simply calculating an average. In a selection AC, the assessors usually also arrive at an overall rating, which determines whether the applicant is accepted or not. Sometimes the organisation will stipulate that no one can be given an overall ‘acceptable’ rating, if for example their intellectual ability is inadequate (no matter how good their other ratings are). In a developmental AC, the final ratings, in the bottom row of the summary sheet, define the candidate’s development needs. American ACs sometimes use the AT&T model, in which assessors only observe and record behaviour during the AC, but do not make any evaluations until after all exercises are complete. Recommendation: Always allow enough time for the assessors’ conference.
The assessors’ conference may fail to make the best use of the wealth of information collected. ACs used for selecting senior police officers in the UK contain 13 components, including leaderless group discussion, committee exercise, written ‘appreciation’, drafting a letter, peer nomination, mental ability tests and a panel interview. The AC ratings were used to predict training grades and promotion. The assessors’ conference used the 13 components to generate an overall assessment rating (OAR), by discussion, in the usual way. Psychologists also did a statistical analysis of the AC data, and made two interesting discoveries: • only 4 of the 13 AC components were needed to predict the training and promotion outcome, which suggests much of the AC was redundant; • a combination of the 4 successful components predicted the outcome better than the OAR from the assessors’ conference.
Figure 8.3 Typical summary sheet used at assessors’ conference. The columns are the competences (influence, analysis, empathy and innovation) plus an overall rating; the rows are the predictors (group exercise ‘Staff’, group exercise ‘Future’, in-tray, role play (Sales), innovation exercise, interview and mental ability test), each with space for ratings from assessors A1, A2 and A3 and an agreed overall rating. A1–A3 are three assessors, who have all observed the group exercises and assessed the written exercises. The interview is one-to-one, so there is only one rating
This clearly implies the assessors’ conference was not doing an efficient job of processing the information the AC generates, and that their discussion might usefully be complemented by a statistical analysis of the data. However, the assessors’ conference could not be replaced by a simple numerical model, because it is unlikely that anyone would agree to act as assessor if the AC made its decisions that way. Without line managers to act as assessors, the AC risks losing much of its credibility. When ACs are analysed in this way, it often emerges that some exercises either do not predict the outcome at all, or do not add to the prediction made by other exercises, so are redundant. This may be one reason why ACs have tended to get shorter. Two- or three-day ACs, like WOSB or the MPS, have tended to shrink to one day or even half a day.
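Where enough follow-up data exist, the kind of statistical check described above can be run in a few lines of code. The sketch below, in Python, uses randomly generated numbers in place of real AC scores purely to show the shape of the analysis: it regresses an outcome (say, a training grade) on all the component ratings and reports how much each component contributes. All variable names and figures here are hypothetical.

# Minimal sketch: checking which AC components actually predict the outcome.
# Random data are used purely for illustration; substitute your own ratings.
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_components = 200, 13
components = rng.normal(size=(n_candidates, n_components))    # component ratings
outcome = components[:, :4] @ np.array([0.5, 0.4, 0.3, 0.3]) \
          + rng.normal(size=n_candidates)                      # e.g. training grade

# Ordinary least squares: outcome regressed on all 13 components
X = np.column_stack([np.ones(n_candidates), components])
coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)

# Simple correlation of each component with the outcome, taken singly
for i in range(n_components):
    r = np.corrcoef(components[:, i], outcome)[0, 1]
    print(f"component {i + 1:2d}: r = {r:5.2f}   regression weight = {coefs[i + 1]:5.2f}")

Components whose regression weights are near zero add little once the others are taken into account; that is how the redundancy described above shows up in the numbers.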
Recommendation: If you have sufficient data, analyse how your assessment centre is working, and how the decisions are being made.
RELIABILITY IN ACS Like all assessment measures, ACs must achieve an acceptable reliability, or they will be unable to predict anything. Reliability of ACs gets quite complex, because we can examine it at several levels. We can calculate the reliability of: • the assessors’ ratings; • the component exercises; • the entire process. Most elements of the AC—group exercises, role plays, in-tray exercises, simulation—are rated by assessors. Ratings of group discussions achieve fair to good inter-rater reliability; an early review found generally good agreement in the short term. Research on the overall reliability of the whole process is mostly British, and is fairly old (Morris, 1949; Wilson, 1948). • CSSB reported fairly good re-test reliability for the AC as a whole, based on applicants who exercised their right to try CSSB twice. • Where two batches of applicants attended both of two parallel WOSBs, there were ‘major disagreements’ over 25% of candidates. • When two parallel WOSBs simultaneously but independently observed and evaluated the same 200 candidates, the parallel WOSBs agreed very well overall; so did their respective presidents, psychiatrists, psychologists and military testing officers. • Later American research compared 85 candidates who attended long and short ACs and were evaluated by different staff. Overall ratings from the two ACs correlated well; so did ratings on specific competences. Done carefully, the AC seems to be a reliable process.
Recommendation: If you have sufficient data, check how well your assessors agree with each other.
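Checking assessor agreement needs nothing more elaborate than a correlation between two assessors’ ratings of the same candidates. A minimal sketch in Python, using invented ratings on a 1–5 scale:

# Minimal sketch: inter-rater agreement between two assessors.
# Ratings are invented for illustration (one rating per candidate, 1-5 scale).
assessor_1 = [4, 3, 5, 2, 4, 3, 1, 5, 2, 4]
assessor_2 = [4, 2, 5, 3, 4, 3, 2, 4, 2, 5]

def pearson(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

print(f"Inter-rater correlation: {pearson(assessor_1, assessor_2):.2f}")

With more than two assessors an intraclass correlation is usually preferred, but even a simple pairwise correlation like this is often enough to show up an assessor who is out of line.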
VALIDITY OF ACS Like the interview, the AC is a method of assessment that can in theory assess whatever competences the organisation is interested in. Hence we should logically ask about the validity of ACs for assessing, for example, delegation, not about AC validity in general. In practice the AC, like the interview, is often used to assess general suitability for the job, and its validity computed against management’s estimates, also of general suitability. Moreover, when the AC tries to assess specific competences, there is some question whether it succeeds in doing so.
Management Progress Study (AT&T) AT&T’s pioneering Management Progress Study (MPS) achieved impressively good validity (Table 8.1). The candidates were originally assessed in 1957; on follow-up, eight years later, the MPS had identified half of those who reached middle management. The MPS AC also identified nearly 90% of those who did not reach middle manager level.
Civil Service Selection Board (UK) Early CSSB intakes were followed up twice. After the first few years, CSSB had achieved good validity, predicting work performance ratings. Figure 8.4 shows that the various components of CSSB also predicted the two-year follow-up— although we must remember these are not independent. The assessors who rated the second discussion also rated the first, so knew how applicants did in that too. The second follow-up was many years later, in the mid-1970s, when those selected in the late 1940s were nearing retirement. All but 21 of 301 CSSB graduates achieved Assistant Secretary rank, showing they had made the grade as senior civil servants; only three had left because of ‘definite inefficiency’. The validity correlation was an impressive 0.66.
Table 8.1 The AT&T Management Progress Study

AC rating in 1957                 Achieved rank in 1965
                                  1st line (%)   2nd line (%)   Middle manager (%)
Potential middle manager             4              53              46
Not potential middle manager        68              86              12

Source: Bray and Grant (1966).
Figure 8.4 Follow-up of CSSB after two years: validity correlations (on a scale from 0 to 0.6) for the first discussion, appreciation, committee, individual problem, short talk, interview, second discussion and the final mark. Source: Data from Vernon (1950).
Senior Police Officers ACs are used for selecting senior officers in the UK, but with disappointing results; the assessors’ overall rating predicted supervisor rating and training outcomes very poorly, although re-analysis suggested that more efficient use of the information could have achieved better results. American research using ACs for entry-level police officer selection has also reported poor results, suggesting selection for police work may present unusual difficulties. We now have enough AC research to calculate an average validity, and to distinguish between different indices of success at work. Overall average validity, based on 50 separate studies, is quite good, a correlation of 0.37. Figure 8.5 (a and b) shows how well ACs predict various outcomes: performance ratings, promotion, rated potential for further promotion, achievement/grades, status change, wages. For most the value is around 0.35–0.40, but where the outcome is rated potential, validity is noticeably higher. Detailed analysis of AC validity research uncovered several factors that contribute to higher AC validity, as well as finding that several other factors that might have been thought to affect validity did not in fact do so. AC validity is higher when:
• more assessments are included;
• psychologists rather than managers are used as assessors;
• peer evaluations are used;
• more candidates are female.
Figure 8.5 AC validity for (a) different outcomes (performance, promotion, potential, achievement, wages and training) and (b) different competences (communication, consideration, drive, influence, organising/planning, problem solving and stress tolerance). Source: Data from Gaugler et al. (1987), Arthur et al. (2003).
AC validity is not affected by: • ratio of candidates to assessors; • amount of assessor training; • how long the assessors spend integrating the information. The first two findings are a little surprising. They should not be taken as an invitation to use one assessor to ten candidates, or to dispense with training altogether. They probably show that most of the ACs that were studied were using adequate training and assessor/candidate ratios, and that improving on an adequate level does not make the process work better.
The most recent review (Arthur et al., 2003) shows how well ACs predict different competences. Most—communication, drive, influence, planning and problem solving—are predicted about as well as performance overall; the exception seems to be tolerance for stress, which is predicted much less well.
Recommendation: If you have sufficient data, consider a follow-up analysis, to find out how well your assessment centre is working.
MAINTAINING AND IMPROVING AC VALIDITY The nature of the AC method makes it easier to lose validity by careless practice. A programme of ACs for school administrators in the USA (Schmitt et al., 1990), centrally planned but locally implemented in 16 separate sites, showed validity is: • higher in ACs that serve several school districts rather than just one; • lower in ACs where assessors have worked with the candidates. Both results suggest that failing to ensure assessors are impartial may compromise validity. The (UK) Admiralty Interview Board used several ways of improving its AC’s validity: • Reduce eight competences to four. Other research has confirmed that using fewer competences gives more accurate ratings. • Require assessors to announce ratings for specific dimensions before announcing their overall suitability rating. • Use nine-point ratings with an indication of what percentage of ratings should fall in each category. Use of video recording should in theory make assessors’ ratings more reliable and accurate, because they can watch key events again to be sure who said what and who reacted how. The only published research finds video recording results in more accurate observations of group discussions, but does not improve the accuracy of ratings (Ryan et al., 1995). Watching video recordings is also very time consuming, and time tends to be short in most ACs.
Recommendation: Resist pressure to increase the number of competences your assessment centre tries to assess.
DOUBTS ABOUT THE AC METHOD Despite the AC method’s success, some doubts about its effectiveness have been raised.
‘Ipsativity’ One person’s performance in a group exercise depends on how the others in the group behave. A fairly dominant person in a group of extremely dominant people may look passive and ineffective by comparison. Research has confirmed that applicants who perform poorly in an otherwise good group get lower ratings than a poor applicant in a generally poor group. Research also finds assessors’ ratings more accurate when applicants differ a lot, suggesting assessors compare applicants with each other, not with an external standard. The ipsativity problem can be reduced to some extent by rotating applicants between different groups, and by introducing normative data from psychological tests.
Self-Fulfilling Prophecy Smith goes off to an AC, returns with a good rating (predictor), and gets promoted (criterion): a high correlation between AC rating and promotion results automatically. But does it prove the AC is valid? Or simply that AC ratings determine who gets promoted? We have described a blatant self-fulfilling prophecy, which only a very uncritical observer would accept as proof of validity. The self-fulfilling prophecy can be much subtler: people who have done well at the AC are deemed suitable for more challenging tasks, develop greater self-confidence, acquire more skills and consequently get promoted. Many promotion and development ACs suffer from an element of self-fulfilling prophecy, because employers naturally want to act on the results of the assessment. The AT&T Management Progress Study is one of the very few studies that kept AC ratings secret until calculating validity, and so avoided creating a self-fulfilling prophecy.
‘Face Fits’ Most ACs are validated against salary growth, promotions, management level achieved or supervisor’s ratings of potential. Critics have suggested these outcomes ‘may have less to do with managerial effectiveness than managerial adaptation and survival’. On this argument ACs answer the question ‘Does his/her face fit?’, not the question ‘Can he/she do the job well?’. Few studies of AC validity use less suspect outcomes: 12 studies, based on over 14 000 people, used status change and wages criteria, whereas only 6 studies, based on a mere 400 persons, used performance ratings (Schmitt et al., 1984). In a sense, however, even performance ratings are suspect,
because they are still management’s opinion of the candidate. The supervisor rating criterion can be viewed as answering the question: ‘Does Smith make a good impression on management?’. The AC—more than other selection tests—addresses the same question, which may make its high validity less impressive. We need to validate ACs against objective criteria, such as sales or output, not against different wordings (potential/performance) of the general favourable-impression-on-management criterion. Two studies have used peer and subordinate ratings as criteria, and found these agree with AC ratings as well as conventional ratings from above (Atkins and Wood, 2002; McEvoy and Beatty, 1989). It is still possible that peer and subordinate ratings also assess image rather than substance. Only one study has used a truly objective criterion: net store profit, for retail store managers. No amount of impressing higher management will get customers into the shop and buying goods. AC rating predicted profit as accurately (0.32) as it predicted supervisor rating (0.28). This single study suggests the ‘face fits’ criticism is unfounded, but we urgently need more research on AC validity using objective outcomes. The problem is that ACs are most commonly used to assess managers, who often do not do anything or produce anything that can be counted.
THE EXERCISE × COMPETENCE PROBLEM The logic of the AC method implies assessors should rate candidates on competences; research suggests very strongly, however, that assessors often assess candidates on exercises. In Figure 8.6 there are three types of correlations:
• aaa correlations are for the same competence rated in different exercises, which ‘ought’ to be high. The technical term for this is convergent validity, because different assessments of the same competence ought to converge. This is what we ‘ought’ to find: applicants show they can be influential in each of several different assessments.
• bbb correlations are for different competences rated in the same exercise, which ‘ought’ to be lower. The technical term for this is discriminant validity, because the assessment ought to discriminate between different competences. But remember that many AC competences are conceptually related and so likely to be positively correlated; someone who is influential may well also be empathic. Zero correlations are not necessarily expected at bbb.
• ccc correlations are for different competences rated in different exercises, which ‘ought’ to be very low or zero.
Figure 8.6 Three types of correlation in an AC with three competences (influence, empathy and innovation) rated in each of three exercises (group discussion, role play and in-tray)
However, in real ACs what ‘ought’ to happen rarely does. We very consistently find that ratings of different competences in the same exercise (the bbb correlations in Figure 8.6) correlate very highly, showing a lack of discriminant validity, while ratings of the same competence in different exercises (the aaa correlations) hardly correlate at all, showing a lack of convergent validity. ACs are not measuring general ability to influence across a range of management tasks; they are measuring general performance on each of a series of tasks. But if influence in the group discussion does not
generalise to influence in the in-tray, how can we be sure it will generalise to ability to influence on the job? ACs work in the sense of achieving good overall validity, but are not working in the way they are meant to work, which is a source of concern.
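The pattern shown in Figure 8.6 can be checked directly from AC ratings. The sketch below, in Python with randomly generated data standing in for real ratings, averages the correlations for same-competence/different-exercise pairs (convergent) and for different-competence/same-exercise pairs (discriminant); the competence and exercise names follow the example in the figure and are otherwise arbitrary.

# Minimal sketch: convergent vs discriminant correlations in AC ratings.
# Random data are used for illustration; substitute real ratings.
import numpy as np

rng = np.random.default_rng(1)
competences = ["influence", "empathy", "innovation"]
exercises = ["group discussion", "role play", "in-tray"]

# ratings[candidate, exercise, competence]
ratings = rng.normal(size=(100, len(exercises), len(competences)))

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

convergent, discriminant = [], []
for c in range(len(competences)):                 # same competence, different exercises
    for e1 in range(len(exercises)):
        for e2 in range(e1 + 1, len(exercises)):
            convergent.append(corr(ratings[:, e1, c], ratings[:, e2, c]))
for e in range(len(exercises)):                   # different competences, same exercise
    for c1 in range(len(competences)):
        for c2 in range(c1 + 1, len(competences)):
            discriminant.append(corr(ratings[:, e, c1], ratings[:, e, c2]))

print(f"mean convergent correlation:   {np.mean(convergent):.2f}")
print(f"mean discriminant correlation: {np.mean(discriminant):.2f}")

In a well-behaved AC the convergent figure should be clearly higher than the discriminant figure; the research described above usually finds the reverse.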
Explaining the Exercise Effect Various explanations for the exercise effect have been suggested. Some imply that the problem can be fairly easily solved by changing the way the AC is run.
Overload When assessors are given too many people to watch or too many competences to rate, information overload forces them to simplify the task by rating overall performance rather than separate competences. Recall that the average American AC seeks to assess ten or eleven competences, which may encourage assessors to make global ratings.
Rating the Unratable Asking assessors to rate a competence that is not reflected in an exercise, e.g. innovation in a task that gives no scope for it, encourages them to make global ratings: ‘Smith did well in that so rate him/her high for innovation too’.
Transparency Transparency means telling applicants what competences are being assessed. If they know they are being assessed for persuasiveness in the group discussion, they will try to be as persuasive as possible, which may make it easier for the assessors to rate them accurately. This in turn may mean that ratings will organise themselves around competences more and around exercises less.
Mis-specification ACs do not assess the competences they were designed to assess, but something else, which fortunately is related to job performance. Using a method that works, but not in the way you intend, is unsatisfactory, and could create problems justifying its use when challenged. There might be a quicker, cheaper, better way of assessing whatever ACs assess. Candidates for what ACs ‘really’ assess include mental ability and the ability to present oneself well.
Competences Do Not Exist Some psychologists do not believe in broad traits or abilities; they believe that behaviour is shaped by the situation people find themselves in, i.e. by the particular demands of the exercises. These psychologists would expect to find exercise effects, and would not be surprised by the absence of competence effects. Some people are good at persuading others in writing, some are good at persuading others in a group discussion, some are good at persuading one person face to face. But people are not necessarily, or even usually, good at all three, so the AC is looking for a consistency—in persuasiveness—that is not there. This argument tends, of course, to undermine the whole notion of competences, and throws doubt on the whole enterprise of trying to select people who possess them.
Solving the Exercise Effect Problem Several recent analyses assess the effectiveness of various attempted solutions to the exercise effect (Born et al., 2000; Lievens and Conway, 2001; Woehr and Arthur, 2003): • The exercise effect is lessened when assessors have fewer dimensions to rate, consistent with the overload hypothesis. • The exercise effect is lessened when the assessors are psychologists and HR specialists, and increased when the assessors are line managers or students. • Research on length of training yields inconsistent results, so we cannot recommend any particular duration of training that will lessen the exercise effect. • Type of training is important: frame of reference training seeks to give all assessors the same view of what behaviour counts as e.g. decision making. Frame of reference training lessens the exercise effect more than training in observation skills. • Where assessors wait until the end of the AC to make ratings, the exercise effect is lessened, compared with the more usual practice of rating after each exercise. In a long AC this might not be practical; assessors might find it difficult to rate behaviour they saw two days and six exercises ago. • Giving assessors behavioural checklists decreases the exercise effect, but not significantly. • Ensuring the behaviour to be rated will be visible in the exercise decreases the exercise effect. Other factors, however, made no difference: • The purpose of the AC—selection or development—makes no difference, although it is more important to achieve good differentiation between competences in a developmental AC, where the profile of ratings is used to plan each person’s training and development programme.
• The ratio of candidates to assessors makes no difference; however, the studies reviewed all had fairly low ratios (two or three candidates per assessor), so probably did not suffer overload. • Transparency—telling candidates what competences are being assessed so that they can exhibit them more clearly—does not decrease the exercise effect. Recently two experts have offered radically different perspectives on the exercise effect problem. The first (Lance et al., 2000) argues that ACs have never worked the way they were meant: ‘assessors form overall judgements of performance in each exercise . . . [then] produce separate post-exercise trait ratings because they are required to do so’. If assessors are ‘working backwards’ in this way, it is not surprising that competences within exercises correlate highly. ACs work if the right exercises have been included, i.e. ones that ‘replicate important job behaviors’. On this argument, the AC becomes a collection of work sample tests, and we should abandon the competence framework, which just creates needless confusion for both assessor and researcher. The second (Lievens, 2001a), by contrast, suggests there may not really be an exercise/competence problem. All previous research has used real ACs with real candidates, whose real behaviour is not known. Perhaps they do behave in the way assessors describe, i.e. consistent within exercises but not across them. This research prepared video recordings in which candidates show high levels of some competences over all exercises, and low levels of other competences, also consistently across all exercises, i.e. the candidates behave in the way the logic of the AC expects. Psychologists and managers who rate these tapes correctly report what they see, so generate competence effects in their ratings. This proves observers can make ratings that are organised around competences, but does not explain why they do not make them in most real ACs.
WHAT ACS REALLY ASSESS Recent reviews (Collins et al., 2001; Scholz and Schuler, 1993) compare AC overall assessment ratings with ability and personality tests, with interesting results, shown in Figure 8.7:
• Fairly high correlation with mental ability, confirming the suggestion that ACs may be an elaborate and very expensive way of assessing intellectual ability.
• In most ACs the assessors know the ability test results before they make their ratings, so ratings could be influenced by the test data. However, there were enough ACs where assessors did not see the test data, to show that this does not reduce the correlation between AC and ability.
• Mental ability correlates equally highly with both in-tray and group discussion. This is surprising, because the group exercise performance seems more likely to reflect personality than mental ability. It tends to confirm the ‘elaborate intelligence test’ view of ACs.
• Better AC ratings are linked to extraversion and (low) neuroticism.
• Better AC ratings are also linked to some more specific traits, e.g. self-confidence.
Figure 8.7 Correlations between AC overall ratings, and ability and personality (mental ability, (low) neuroticism, extraversion, openness and agreeableness). Source: Data from Collins et al. (2001).
Incremental Validity The high correlation between AC and mental ability suggests that ACs may not improve much on predictions made by mental ability tests. However, several analyses have shown that ACs do achieve better validity than psychological tests alone. Studies on the CSSB and the Admiralty Interview Board found tests alone gave poorer predictions than the AC. Data for the Israeli police force find incremental validity of a very thorough AC over mental ability tests; this AC focuses on dealing with people under very difficult circumstances.
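Incremental validity can be examined by asking whether adding the AC rating to a mental ability score improves prediction of the outcome. A minimal sketch in Python, with randomly generated numbers standing in for real scores; the coefficients used to generate the data are arbitrary.

# Minimal sketch: incremental validity of the AC over a mental ability test.
# All data are randomly generated for illustration.
import numpy as np

rng = np.random.default_rng(2)
n = 300
ability = rng.normal(size=n)
ac_rating = 0.6 * ability + 0.8 * rng.normal(size=n)        # AC overlaps with ability
performance = 0.4 * ability + 0.2 * ac_rating + rng.normal(size=n)

def r_squared(X, y):
    """Proportion of variance in y explained by ordinary least squares on X."""
    X = np.column_stack([np.ones(len(y)), X])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ coefs
    return 1 - residuals.var() / y.var()

r2_ability = r_squared(ability.reshape(-1, 1), performance)
r2_both = r_squared(np.column_stack([ability, ac_rating]), performance)
print(f"ability alone: R2 = {r2_ability:.3f}")
print(f"ability + AC:  R2 = {r2_both:.3f}")
print(f"incremental:   R2 = {r2_both - r2_ability:.3f}")

If the incremental figure is close to zero, the AC is telling you little that a much cheaper mental ability test would not.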
FAIRNESS AND THE AC The AC is generally regarded as fair, meaning it creates no adverse impact on women or minorities. The AC has high face validity, which probably accounts for much of its popularity, and also probably gives it a measure of protection against claims of unfairness. Fighting one’s case in a committee, chairing a meeting, answering an in-tray or leading a team all have obvious and easily defended job relevance. Critics note that ACs are most often used for management, at which level adverse impact on minorities may be less of an issue.
Gender ACs do not seem to create adverse impact—as many women as men get good ratings: • In a follow-up of 1600 female entry-level managers in Bell Telephone (Ritchie and Moses, 1983), AC ratings predicted achieved rank seven years later as well as in AT&T’s study of male managers. • In 1035 applicants for financial services sales posts, women got better assessments, but only from all-male assessor panels, which suggests an inverted bias at work. • Women get better ratings in an AC, but on follow-up five to ten years later had not advanced any further up the organisation than men, which suggested a bias at work, but not during the AC itself. • Dutch and German research (Lievens, 2001b) also finds either no gender differences in AC ratings, or a slight tendency for women to get better ratings.
Race White/minority differences are sometimes found in US research, but not always. It probably depends on the mix of components in the AC. AC exercises that correlate more with mental ability create larger white/African American differences; partialling out mental ability removed the ethnicity differences without reducing the AC’s validity. This implies that AC exercises that are not too permeated with mental ability may solve the adverse impact problem.
Age AC ratings sometimes show small negative correlations with age, which may create problems in countries with age discrimination laws, although no such problems have come to light as yet.
UK Data Sue Scott (1997) has gathered extensive data on adverse impact in graduate recruitment in the UK for 14 major employers in manufacturing, retail, finance, transport and law enforcement. Following pre-selection or sifting in, the final hurdle is usually an assessment centre (see Table 8.2). Overall, minority applicants are less successful, showing adverse impact. However, the minorities vary considerably, with (Asian) Indian applicants being as successful as white applicants, and Chinese applicants considerably more successful than white. Black applicants experience adverse impact at the pre-selection stage, being less likely to get a first interview.
Table 8.2 Success of white and minority British applicants to graduate recruitment schemes

                              Sample size   Sifted in (%)   Successful (%)
White                             49370           23              3.4
All minorities                     6462           18              1.9
Black African                       648           13              1.7
Black Caribbean                     162           19              1.9
Black other                          88           18              2.3
All black                          1001           13              1.6
Bangladeshi                         142           21              1.4
Indian                             1706           28              3.2
Pakistani                           530           20              2.1
All Indian sub-continent           2378           26              2.8
Chinese                             457           26              5.4

Source: Scott (1997). Note: The ‘all minorities’ figures are higher than the total for the separate groups because some employers do not differentiate between minorities.
Note also that the three Indian sub-continent groups differ considerably; parallel differences are found in average education and income levels for the three groups.
Recommendation: If you have sufficient data, analyse your assessment centre results for gender and ethnicity differences.
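Monitoring for adverse impact usually starts with comparing success rates between groups, for example using the ‘four-fifths’ rule of thumb from US practice (a group’s success rate below 80% of the most successful group’s rate is treated as a warning sign). The sketch below, in Python, applies that check to the white and all-minorities figures from Table 8.2; the 0.8 threshold is a US convention, not a UK legal requirement.

# Minimal sketch: comparing success rates between groups (adverse impact check).
# Success rates (% of applicants appointed) taken from Table 8.2.
success_rates = {
    "white": 3.4,
    "all minorities": 1.9,
}

reference = max(success_rates.values())
for group, rate in success_rates.items():
    ratio = rate / reference
    flag = "possible adverse impact" if ratio < 0.8 else "within the four-fifths rule"
    print(f"{group:15s} success {rate:.1f}%  ratio {ratio:.2f}  ({flag})")

Here 1.9/3.4 is about 0.56, well below 0.8, which is consistent with the adverse impact described in the text.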
CASE STUDY An Assessment Centre for Assessing NHS Hospital Managers
Competences A set of seven competences arrived as a given, which had previously been drawn up by a committee. They were clear and easily assessable, and the list was of manageable length (which is not always the case with sets of competences generated by committees!).
1 Analytical abilities
a Understanding information: ability to absorb information easily, grasp its implications and draw correct conclusions.
b Understanding figures: ability to analyse numerical information and extract correct conclusions.
c Strategic vision: ability to plan far ahead and anticipate likely problems; ability to see fresh approaches to old problems.
2 Social and personal skills
a Influencing skills: ability to persuade and influence others, on a one-to-one basis, in meetings and to audiences large and small.
b Awareness: ability to listen to people and understand others’ points of view, both at an individual level and at a more general ‘stakeholder’ level.
3 Individual abilities
a Energy: ability to deal with a great volume of work, while remaining calm, collected and efficient.
b Time management: ability to deal with a variety of tasks and problems simultaneously, to decide priorities and to delegate where appropriate.
The next stage was to select at least two assessments for each competence. The two or more assessments should include qualitatively different methods, e.g. a group exercise and a written test. With the exception of one group exercise and the reasoning test, all of the components were written specially. Off-the-shelf exercises are quicker and cheaper, but one cannot be certain that some applicants have not done them before.
Group Exercise: ‘Lost in the Desert’ This exercise uses the revealed difference technique. Candidates individually rank 12 items in order of usefulness in surviving, when their plane has crashed in the Mojave desert, then achieve a group consensus. Some items look useful but are not—a pistol; some look useless but are actually valuable—an overcoat (to keep the sun off during the day and keep warm at night). This exercise is placed first on the timetable, and used partly as an ice breaker, to allow the candidates to settle in. It is only assessed for energy.
Group Exercise: ‘Future’ Each applicant writes his/her thoughts on three headings: 1 The shape of the NHS in 10 years time if the present government remains in power. 2 The shape of the NHS in 10 years time if the main opposition party wins an election in two years time. 3 The shape of the NHS in 10 years time as you would like to see it. The group then draws up a consensus view for each heading that can be written on one sheet of a flip chart. This exercise is especially geared to assessing strategic vision.
Group Exercise: ‘Home’ The group designs a residential facility that will accommodate as many people as possible, consistent with meeting quality standards. This exercise assesses analytical and
planning abilities. The task also requires good time management and some delegation to complete it within the allotted time. This exercise is ‘hands on’ in the sense that an actual model of the home is constructed, using Lego bricks. Figure 8.8 presents part of the instructions for ‘Home’.
Group Exercise ‘Home’
Your group’s task is to design a residential home for the elderly. Your aim is to design a home that will at least cover its costs, but better still one that will make 10% profit. You have a schematic model to aid your planning, and to represent its final shape to the assessors. The home is being built on the SPINE system, in which all services run under the corridors. The ‘spine’ of corridors has already been laid out, and cannot be altered. The home is single storey. The ‘spine’ is represented by white on the model. You have freedom to plan the rest of the layout, within certain restrictions, listed below. Note that the components of the model have raised lugs on one surface, each of which counts as one unit.
• Residents’ rooms. You have a choice of single person rooms, double rooms and four person rooms. You may use any mixture of these. Single rooms are black, double rooms are red and four person rooms are grey. Every room must have an outside window. The SPINE system means that residents’ rooms may not extend more than two units away from the corridor, i.e. that the oblong 2 and 4 person rooms must have their longer side parallel to the corridor.
• Lavatories. For each five residents, one lavatory must be provided. Lavatories are yellow; each unit represents one lavatory. Lavatories must open off the corridor; they do not need outside walls, and they may be more than one unit deep. But see also Poor Design Quality.
• Dining room. A dining room must be provided, large enough for the number of intended residents. For every two residents, one unit of dining room area must be provided. Dining room units are blue. See also Poor Design Quality.
• Sitting room. A sitting room must also be provided, large enough for the number of intended residents. For every two residents, one unit of sitting room area must be provided. Sitting room units are white. The sitting room may incorporate corridor area, but only if no access to space beyond the sitting room is required. See also Poor Design Quality. The SPINE system means the dining and sitting rooms may not extend more than six units from the corridor.
• Kitchen. A kitchen must be provided, of a fixed size and shape. It must be adjacent to the dining room, must open off the corridor, and must have one outside wall to allow access. The kitchen unit is transparent.
Poor Design Quality Your design will have to be submitted to the local authority planning department, and to the DHSS. They tend to object to certain features in residential home plans; while they cannot actually stop you following your plans, they can cause you delay, and consequently extra expense, ranging from 1% of the total construction cost to 20%. Features they have been known to object to in the past include: residents being too far from a lavatory, rooms with limited or very unattractive views, ‘unsightly’ projections beyond the building line, and sitting or dining rooms that are ‘strange shapes’. [Details of costs and income follow.] Your Task You have 2 hours to plan your residential home, and estimate its costs and income. At the end of this time you should have a completed model, and a completed financial summary sheet.
Figure 8.8 Part of the instructions for group exercise ‘Home’
In-Tray This exercise requires candidates to deal with a set of 12 items, which present a mixture of budgetary, planning and staff management problems.
Graduate and Managerial Assessment—Numerical This assesses the ability to extract conclusions from numerical information in a variety of forms. It is not a test of computation, rather an assessment of whether the candidate can work out what computation to make.
Role Play Each candidate plays the role of a fairly junior hospital manager who is trying to persuade a very senior, and not very amiable, consultant to release four hospital beds for a new development. The person playing the role of the consultant is briefed to make concessions if the candidate makes a good case, and remains calm and reasonable. This assesses interpersonal skills.
Presentation Each candidate makes a presentation to the assessors and the rest of the candidates, lasting 10 minutes, on a topic chosen from a list generated by the assessors. This assesses ability to deliver arguments, gauge audience reaction and keep to time.
Interview The interview, with two assessors, focuses on the ability to think ahead, and on evidence of energy.
Matrix
Figure 8.9 shows the competence × exercise matrix of this AC. Note that some competences are assessed by four exercises, while others are assessed by only the required minimum of two. Influence is assessed by four exercises, partly because there are three facets—one-to-one, group discussion and audience—that merit separate assessment. More cells in the matrix could be 'filled', i.e. assessed; for example, the first group exercise could be assessed for influence or awareness, but it is sometimes better not to overload the assessors by asking them to make every possible assessment.
Figure 8.9 Matrix of NHS hospital manager assessment centre; ×, not assessed
Rows (exercises): group exercise 'Lost in the desert'; group exercise 'Future'; group exercise 'Home'; in-tray; GMA—Numerical; role play; presentation; interview.
Notes: UI, understanding information; UF, understanding figures; SV, strategic vision; IS, influencing skills; A, awareness; E, energy; TM, time management.
Implementation
The exercises were shown to hospital managers and HR staff, who commented on wording and time limits. A trial run with existing NHS managers was carried out, which indicated that the 'Home' exercise was too difficult, so the time limit was extended.
CHAPTER 9
The Interview
OVERVIEW
• Interviews vary widely in what they seek to assess, and how they are done.
• Conventional unstructured interviews have poor reliability; interviewers do not agree all that well.
• Conventional unstructured interviews have poor validity; they do not predict work performance accurately.
• Interviewees try to present themselves in a good light, and may sometimes even fail to tell the truth about themselves.
• Interviews can be improved by training interviewers, using ratings, taking notes and using panels.
• Interviewers do not always reach decisions very efficiently or rationally.
• Interviewers may be biased by gender, ethnicity, age, appearance, weight, accent or liking.
• Unstructured interviews have been the subject of many fair employment claims, many of which have been successful.
• A case study showing some typical mistakes interviewers make.
INTRODUCTION Interviews have been used to assess people for a long time. In fact, it is hard to think of a situation in selection and assessment where the interview is not used at some stage or other. The interview plays a very important part in any psychological assessment. Interviews vary very widely. They can be as short as 3 minutes, or as long as 2 hours. There may be one interviewer, or several. If there are several interviewers, they may interview together, as a panel or board, or separately. The public sector in the UK favours panel interviews, with five, ten or even twenty interviewers. The Cranfield
Figure 9.1 The percentage of employers using the interview to select, in 12 European countries. Source: Data from Dany and Torchy (1994).
Price Waterhouse survey of 12 countries (Dany and Torchy, 1994) finds that 70–100% of European employers interview applicants, the exception being Turkey, where only 64% of employers interview (Figure 9.1). Interviews are equally standard in North America. In campus recruitment, applicants often go through a series of interviews. Some organisations use brief telephone interviews to screen job applicants, and select those for closer assessment. Others include interviews as part of an assessment centre taking as long as two days. In the past interviews were often conducted very casually. Interviewers had no job descriptions or person specifications, but were looking for ‘the right sort of person’ or ‘someone who will fit in here’. They had no prepared questions, took no notes and made no ratings of applicants. Some employers may still do interviews this way, but most have been forced to ask themselves if their selection methods are reliable, valid and fair. The traditional interview was often none of these, and the need to select efficient staff and to avoid unfair employment claims has caused many employers to do their interviewing more systematically. Like any assessment method, the interview should be linked to the organisation’s competence framework. A recent US survey (Huffcutt et al., 2001) looks at what employers are trying to assess in interviews, with slightly surprising results, shown in Table 9.1. It is surprising that interviews are so often used to assess personality and mental ability, because tests of both are widely available, possibly more accurate, and certainly more economical to use. On the other hand, the interview may be particularly well suited to assess social skills, since it is itself a social encounter.
Recommendation: Be clear what the interview is being used to assess, and ensure that the interviewers are clear.
Table 9.1 American survey of what interviews are used to assess

What is assessed              Percentage of interviews
Personality                   35
Applied social skills         28
Mental ability                16
Knowledge and skills          10
Interests and preferences      4
Organisational fit             3
Physical attributes            4

Source: Data from Huffcutt et al. (2001).
RELIABILITY AND VALIDITY In order to check whether the interview is serving the organisation well, we need to explore the two vital concepts introduced earlier in the context of testing: namely, reliability and validity. The interview is a test or assessment, so we can ask how reliable and valid it is, and compute correlations between interviewer ratings and work performance to find out.
Interviewer Reliability Reliability is usually measured by the correlation between two sets of measures. If two interviewers each rate 50 applicants, the correlation between their ratings estimates inter-rater reliability, also known as inter-observer or inter-judge reliability.
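For the technically minded reader, the short Python sketch below (our own illustration, not taken from any of the studies discussed here, and using invented ratings) shows how an inter-rater reliability estimate of this kind might be computed: it is simply the Pearson correlation between two interviewers' ratings of the same applicants.

    from statistics import mean

    def pearson_r(xs, ys):
        """Pearson correlation between two equal-length lists of ratings."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x ** 0.5 * var_y ** 0.5)

    # Hypothetical five-point ratings given to ten applicants by two interviewers.
    interviewer_a = [4, 3, 5, 2, 4, 3, 5, 1, 2, 4]
    interviewer_b = [5, 3, 4, 2, 4, 2, 5, 2, 3, 4]

    print(f"Inter-rater reliability: {pearson_r(interviewer_a, interviewer_b):.2f}")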
Reliability
A recent analysis of 160 separate interview studies reaches some reasonably reassuring conclusions (Conway et al., 1995):
• interviewers agree fairly well (correlation of 0.77) when they see the same interview;
• interviewers still agree, but rather less well (correlation of 0.53) when they see different interviews with the same applicant.
The reason for the difference is probably that applicants behave differently at different interviews. If this is true, the lower estimate (0.53) is probably the better estimate of interview reliability in practice, because inconsistency of applicant behaviour is an inherent limitation of the interview. Interviews are more reliable if based on a competence analysis, and if the interviewers are trained. Table 9.2 lists sources of unreliability in the interview, and possible ways to improve reliability.
Table 9.2 Ways to improve the interview's reliability
1. Structure the interview questions around the job analysis. (Comment: Resilience: 'How do you handle rejection by prospects or customers?' Time management: 'How do you respond to deadlines?')
2. Ask every applicant the same questions. (Comment: Prepare a list of questions.)
3. Make every interview the same length.
4. Use more than one interviewer. (Comment: e.g. HR manager and line manager.)
5. Use the same interviewers every time. (Comment: If possible.)
6. Use a rating system for interviewees' answers. (Comment: Five-point rating usual, e.g. very good, good, acceptable, poor, very poor.)
7. Train interviewers. (Comment: Train interviewers using role-play.)
8. Interview panel take brief open-ended notes. (Comment: e.g. 'Q8—A seemed indecisive and unsure'; 'Q3—A's response very positive and confident'.)
9. Give each applicant two or more interviews. (Comment: If possible, to allow for inconsistency in what the applicant tells each interviewer.)
Interviewers should always be trained. Apart from any other benefits, using untrained interviewers makes it very difficult to defend the assessment if challenged. Interview training often uses role plays, and may include practice in rating pre-scripted, model answers. Following training, new interviewers can serve an induction period, where they act as observers alongside experienced interviewers. Recommendation: Always ensure interviewers are trained.
The interview panel should agree their decision criteria beforehand, but should make an allowance for error. Suppose they are dealing with 10 five-point ratings, giving a maximum score of 50, and set a cut-off score of 35. They should view this cut-off as plus or minus 5, especially when comparing applicants, and not make the mistake of thinking that 38 is definitely better than 36.
Recommendation: Always ensure interviewers are aware of the error of measurement.
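As a concrete illustration of the point above, the sketch below (our own, using an assumed cut-off and error band) treats the cut-off of 35 as a band of plus or minus 5 rather than a hard line, so that scores of 36 and 38 are treated as equivalent borderline results.

    CUT_OFF = 35     # out of a maximum of 50 (10 five-point ratings)
    ERROR_BAND = 5   # allowance for error of measurement

    def decision(total_score: int) -> str:
        """Classify a total interview score against a cut-off with an error band."""
        if total_score >= CUT_OFF + ERROR_BAND:
            return "clearly above the cut-off"
        if total_score <= CUT_OFF - ERROR_BAND:
            return "clearly below the cut-off"
        return "borderline: treat as equivalent to other borderline candidates"

    for score in (28, 36, 38, 42):
        print(score, decision(score))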
Validity The concept of validity crops up continually in all processes of psychological assessment. In the context of the selection interview, validity means:
• Is the person in front of the interview panel the person who has actually applied for the job? • Does the interview help us make decisions about whether this candidate will be able to do the job? Clearly, the second question depends very much on the job in question. An interview for the job of HGV driver will not tell us much about how well the candidate can drive. The interview is more likely to tell us about the candidate’s social skills because the interview is a social interaction. If social skills are needed in the job, as in teaching, then more importance may be put on structure and length of interview. For an HGV driver an interview will be structured but probably much shorter than that for a teacher, and will focus on checking facts with respect to experience and qualifications. In practice, the interview is often used to make a simple decision, whether to hire or promote, or not. Such key decisions will be taken many times by organisations. The interview plays such an important part that it is most important to ‘get it right’. We tend to talk about the ‘validity of the interview’ when perhaps we should really talk about the validity of the interview for a particular purpose, e.g. assessing sales potential. The interview is used to assess a wide variety of skills, abilities, personality, attitudes etc. The very popularity of the interview may derive from its versatility. The interview has this feature in common with some other selection methods, such as assessment centres or biodata. A mental ability test, on the other hand, can only assess mental ability. In practice selection interviews are usually used to make a simple decision—to hire or not—and are often validated against a global assessment of how well the person does the job. Also, the interviewer may be asked to make 10 ratings of diverse characteristics, but it is frequently the case that all 10 ratings will be highly correlated. In this context, perhaps we can talk about the ‘validity of the interview’. There is no shortage of research on the interview. The first study of selecting sales staff by interview appeared in 1915 (Scott, 1915); the first summarising review appeared in 1972 (Dunnette, 1972). All of these studies pointed to one clear conclusion: the conventional interview is not a good way of selecting applicants. The first summarising review of interview validity came from the American petroleum industry, and found the selection interview had very low validity. A second broader review in 1984, of US public and private sector interviewing, also generated low average validity (Hunter and Hunter, 1984). A subsequent larger scale review (Wiesner and Cronshaw, 1988) includes research done in Germany, France and Israel as well as the USA (Figure 9.2), and reaches some interesting conclusions: • The validity of all interviews overall is quite high: the interview may not be quite such a poor predictor as many psychologists have supposed. However the ‘all interviews’ validity in Figure 9.2 is probably an overestimate of how well the typical interview does, because it includes structured interview systems. As we shall see in the next chapter, structured interviews are so unlike the traditional, or unstructured, interview as to be a different form of assessment altogether. • Unstructured interviews, the type of interview generally practised in the UK, are less accurate.
Figure 9.2 Interview validity, overall and distinguishing structured from unstructured and one-to-one from panel or board (comparisons shown: all interviews; all unstructured interviews; one-to-one unstructured interviews; board/panel unstructured interviews; all structured interviews; one-to-one structured interviews; board structured interviews). Source: Data from Wiesner and Cronshaw (1988).
• The review also distinguishes one-to-one and board or panel interviews, and finds that one-to-one unstructured interviews have lower validity than board interviews. • Unstructured one-to-one interviews are very inaccurate. Recommendation: Do not use one-to-one unstructured interviews.
Validity for Different Characteristics
We have a few insights into what interviews assess more successfully, and where they do less well:
• Interviews that try to assess personality are less successful than interviews that assess training, experience and interests.
• Interviews are often used to assess organisational fit, 'chemistry' or 'the right type'. Sometimes this may just be a code word for the interviewer's prejudices or reluctance to explain him/herself, but it could refer to legitimate organisation-specific requirements which the interview could be used to assess. Interviewers from the same organisation agree about candidates' fit, showing the concept is not entirely idiosyncratic. However, fit does not seem to be related to objective data such as grade point average, but is related to appearance, which suggests an irrational element.
• Interviews can assess values: whether the applicant has the same ideas about what matters in work as the organisation. However, interviewers are not very accurate judges of applicants' values, which suggests it might be better to assess applicants' values more directly by paper and pencil tests.
• Interviews assess organisational citizenship fairly well. (Organisational citizenship means volunteering to do things not in the job description, helping others,
following rules willingly and publicly supporting the organisation—all highly desirable behaviours in employees.)
Recommendation: Be careful when using interviews to assess ‘fit’ or ‘values’.
Incremental Validity The interview may not be very accurate in itself, but can it improve the prediction made by other methods, perhaps by covering aspects of work performance that other selection methods fail to cover? Conventional interviews are surprisingly highly correlated with mental ability tests, which implies they will provide little or no incremental validity over mental ability tests. We have empirical confirmation that an unstructured interview adds little to tests of conscientiousness and mental ability. What happens if the interviewer knows the test scores while giving the interview? This is sometimes done in the hope of using the interview to confirm or clarify the test data. In fact, knowledge of test data seems to result in much lower interview validity; perhaps knowing the test data means the interviewer does not try.
REASONS FOR POOR VALIDITY Why is the conventional unstructured interview apparently such a poor predictor of work performance? Validity subsumes reliability and both are key concepts in ‘getting it right’ in interviewing. Much can go wrong or influence the interview. Table 9.3 outlines some of the reasons for poor interview validity.
Interviewee’s Impression Management Interviewees generally want the job, so they will try hard to manage the impression they create on the interviewer(s). This is a perfectly natural process: we all try to manage the impression we give to others. Interviewers themselves often try to manage the impression of the organisation during an interview, to make it an attractive one for the interviewee. Applicants are expected to present themselves well at interview; one often hears interviewers complaining that an applicant ‘didn’t make an effort’. Research confirms that people who seek to ingratiate and self-promote in the interview do succeed in getting better ratings. This may be a source of error, but not necessarily. Ingratiators and self-promoters may be better performers in the workplace. The type of explanation people offer for past failures affects the impression they create. Admitting the failure is your fault—‘I didn’t revise hard enough’—is better received than blaming other people.
Table 9.3 Reasons for poor validity of interviews, and possible solutions

In interviewee behaviour
• Impression management: cross-check in interview; cross-check by other assessment.
• Coaching: cross-check in interview; provide your own coaching.
• Lying: cross-check in interview; try to detect.

In analysing the data
• Criterion reliability: correct statistically; find better measure of work performance.
• Restriction of range: correct statistically.

In interviewer behaviour
• Interviewers differ: train; select the selectors; provide questions or script (i.e. structure the interview).
• Interviewer motivation: provide breaks.
• Rating error: train; analyse; improve rating system.
• Interviewer inconsistency: provide questions or script (i.e. structure the interview).
• Interviewer bias: train; monitor; use more than one interviewer.
• Interviewer thought processes: train; monitor; use more than one interviewer.
Interviewee Coaching Many applicants receive coaching and practice in interviewing technique. Popular management literature on being interviewed is available in most bookshops and high street stationers. Coaching is routinely offered to students at British universities. American research confirms that people who have been coached do better in subsequent interviews. Alternatively, the employer could offer some interview preparation and practice to all applicants. This would help the applicants, and would also ensure they had all been told the same, and all had the same help, before the real interview.
Interviewee Lying One step on from presenting oneself well is outright lying. This unethical practice is more likely to occur in high-stakes situations. It is difficult to detect and is the main
reason for having a range of measures in order to gain confirmation of important facts such as qualifications. Paul Ekman’s research studied five sets of experts who ought to be good at detecting lies—Secret Service, CIA, FBI, National Security Agency and Drug Enforcement Agency (Ekman and O’Sullivan, 1991). Only one of the five groups did better than chance.
Recommendation: Always seek confirmation of what people tell you at interview.
Interviewee Inconsistency As we have noted, the interviewee adds to the uncertainty of the interview process by telling different interviewers different parts of his/her story.
Criterion Reliability Validity coefficients are low because the interview is trying to predict the unpredictable. The interviewer’s judgement is compared with a criterion of good work performance. Supervisor ratings, the most commonly used criterion, have limited reliability (0.60 at best); one supervisor agrees only moderately well with another supervisor. Validity coefficients can be corrected for this, which increases validity.
Range Restriction As only the top performers in the interview are employed, any subsequent correlation between interview and job performance will be reduced. This is illustrated graphically in Figure 3.4. Range is almost always restricted in selection research because few organisations can afford to employ people judged unsuitable simply to allow psychologists to calculate better estimates of validity. Validity coefficients can be corrected for restricted range, which further increases validity.
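For readers who want to see how these statistical corrections work, the sketch below is our own illustration: it applies the standard correction for attenuation (for criterion unreliability) and the usual range-restriction correction to an observed validity coefficient. The numbers are invented.

    def correct_for_criterion_unreliability(r_observed, criterion_reliability):
        """Correction for attenuation: divide by the square root of the criterion's reliability."""
        return r_observed / criterion_reliability ** 0.5

    def correct_for_range_restriction(r_restricted, sd_ratio):
        """Standard range-restriction correction; sd_ratio is applicant-pool SD over hired-group SD."""
        u = sd_ratio
        return (r_restricted * u) / (1 + r_restricted ** 2 * (u ** 2 - 1)) ** 0.5

    r = 0.20                                            # observed interview validity
    r = correct_for_criterion_unreliability(r, 0.60)    # supervisor ratings about 0.60 reliable
    r = correct_for_range_restriction(r, 1.5)           # applicants more variable than those hired
    print(f"Corrected validity: {r:.2f}")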
Interviewers Differ Most research on interview validity pools data from a number of interviewers, thereby mixing good, bad and indifferent interviewers. Perhaps interview validity would be better estimated from how well good interviewers do.
Interviewer Motivation Anyone who has spent a day interviewing knows how one’s attention can start to wander by the late afternoon. People watching videotaped interviews can make
more accurate ratings if they try; being told they would later have to explain their ratings makes them pay more attention to the interview, which results in more accurate ratings.
Interviewer Rating Error People use numerical rating scales in different ways. Suppose two interviewers listen to the same account of 'your greatest achievement' and both think it 'above average'. So far they agree; but one interviewer thinks it is worth a 'five' on the seven-point scale, while the other thinks it is a 'six'. This disagreement will reduce the correlation when we calculate the validity of the interview. Training can help interviewers agree better what 'above average' looks like. More sophisticated rating systems can help too; behaviourally anchored rating scales (BARS) are described in Chapter 14.
Interviewer Inconsistency Being inconsistent, asking different applicants different questions, will tend to make interviews less accurate.
The Interviewer’s Thought Processes Ideally, the interviewer will listen carefully to everything the candidate says, and reach a wise decision based on all the information available. Research has documented a number of ways in which interviewers fall short of this ideal.
Interviewers Make their Minds Up before the Interview Interviewers usually have some information about candidates before the interview starts, from CV, application form etc. This plays a large role in selection decisions; it accounts for a third of the decision.
Interviewers Make Up their Minds Quickly Interviewers often reach decisions early in the interview. A frequently cited study in Canada concluded that interviewers make up their minds after only 4 minutes of a 15-minute interview (Springbett, 1958). An interviewer who makes his/her mind up so soon does not seem very likely to have made a good decision.
The Interviewer Forms a First Impression Male interviewers react against scent or aftershave, regardless of the sex of the applicant; female interviewers favour it, also regardless of the sex of the applicant. Female applicants for management positions create the best impression by being conventionally but not severely dressed.
The Interviewer Looks for Reasons to Reject One study showed that just one bad rating was sufficient to reject 90% of candidates (Springbett, 1958). Looking for reasons to reject is a rational strategy if the organisation has plenty of good applicants.
The Interviewer Relies on an Implicit Personality Theory One interviewer hired a salesperson who proved a disaster, and for ever after would not employ anyone who had ever sold knitting machines. Why not? Because the disastrous salesperson had previously sold knitting machines. The interviewer reasoned: • people who sell knitting machines are poor salespersons; • this applicant formerly sold knitting machines; • therefore this applicant will be a poor salesperson. The interviewer’s reasoning was obviously faulty: someone must be good at selling knitting machines. Recommendation: Ensure interviewers are warned of common errors they can fall into. Ensure every applicant is seen by more than one interviewer.
Interviewer Bias
Bias in the interview can take two broad forms:
1 The interviewer may discriminate, more or less openly. The interviewer may:
• mark down female applicants because they are female;
• think women are less suitable for the job;
• ask women questions that men are not asked;
• think suitable candidates have characteristics that men are more likely than women to have (e.g. an interest in football).
2 The interview may create adverse impact, in the absence of any intended or unintended bias by the interviewer; when the year’s intake is complete, it turns out that, for whatever reason, fewer women than men are successful. It can be difficult to distinguish deliberate bias from adverse impact, and it may not be worth trying, in the sense that both create legal problems. The same issues arise with ethnic minority and disabled applicants. In fact, the interview provides an ideal opportunity for the exercise of whatever bias(ses) the interviewer has, because the interviewer cannot help knowing every applicant’s gender, ethnicity, disability, age, social background, physical attractiveness etc., and because the interviewer often is not required to explain his/her thought processes or justify his/her decisions (whereas selectors can use psychological tests or biographical methods, without seeing the applicants, or knowing their gender, ethnicity etc.).
Are Interviewers Biased Against Women? The most recent review (Huffcutt et al., 2001) finds unstructured interviews do create some adverse impact on females (whereas structured interviews do not). The research reviewed is nearly all North American, so we cannot safely assume similar results will be found in other countries, given how widely attitudes to gender vary. We have no research on gender bias in interviewing in Europe.
Are Interviewers Biased by Race? The same 2001 review shows that unstructured interviews do create some adverse impact on non-white Americans, especially interviews that assess intellectual ability and experience. Interviews may also show own-race bias, where whites favour whites, blacks favour blacks, Hispanic Americans favour other Hispanic Americans etc. No research on this important issue has been reported for Europe.
Are Interviewers Biased Against Older Applicants? A number of studies show younger raters rate older applicants less favourably, if not provided with job-relevant information.
Recommendation: Collect information on gender, ethnicity, age and success at interview, both overall, and by interviewer, if numbers are large enough. Ensure every applicant is seen by more than one interviewer.
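The monitoring suggested here need not be elaborate. The sketch below (our own, with invented records) shows the basic idea: compare interview success rates across groups, which can be done overall or separately for each interviewer.

    from collections import defaultdict

    records = [
        # (group, job offer made)
        ("female", True), ("female", False), ("female", False), ("female", True),
        ("male", True), ("male", True), ("male", False), ("male", True),
    ]

    counts = defaultdict(lambda: [0, 0])   # group -> [offers, interviews]
    for group, offered in records:
        counts[group][1] += 1
        if offered:
            counts[group][0] += 1

    for group, (offers, total) in counts.items():
        print(f"{group}: {offers}/{total} successful ({offers / total:.0%})")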
Are Interviewers Biased by Accent? George Bernard Shaw once remarked that no Englishman can open his mouth without making other Englishmen despise him. The only research on accent comes from Canada and found that candidates with foreign accents got less favourable ratings (Kalin and Rayko, 1978).
Are Interviewers Biased by Appearance? Most people can agree whether someone is conventionally ‘good looking’ or not. A number of studies have shown that this has quite a strong effect on interview ratings. The most recent found managers prefer highly attractive candidates, and concludes that ‘less attractive candidates, especially women, would have little chance of securing the job’ (Marlowe et al., 1996).
Are Interviewers Biased by Weight? Interviewers are biased against overweight applicants. One study used actors, whose apparent body weight in the overweight condition was increased by 20% by make-up and padding. The bias was stronger where body shape and size were important to the interviewer’s own self-image. We also have extensive evidence of consistent negative stereotypes of overweight people as lacking in self-discipline, lazy, having emotional problems, or less able to get on with others. Overweight people are discriminated against at ‘virtually every stage of the employment cycle, including selection’ (Roehling, 1999).
Are Interviewers Biased by Liking? The more the interviewer likes the candidate, the more likely the interviewer is to make an offer. Liking may of course be based on job-related competence, but may equally well arise from irrational biases.
Recommendation: Ensure every applicant is seen by more than one interviewer.
IMPROVING THE INTERVIEW The traditional interview turns out to be very inaccurate, so HR departments should be looking for ways of improving it. Quite a few possibilities exist.
Select Interviewers
Interviewing is shared between all senior staff in many organisations. Research confirms that some interviewers are better than others. Edwin Ghiselli (1966) found one interviewer—himself—whose accuracy in selecting stockbrokers over 17 years yielded a very high personal validity coefficient. However, we must sound two notes of caution:
• An organisation that wants to select good interviewers will need a large sample of interviews to base its decisions on. Estimates based on a couple of dozen interviews are unlikely to be accurate enough to conclude that Smith is definitely a good interviewer, and Jones definitely poor.
• Telling some managers they cannot give interviews because they are not very good at it could create problems in many organisations.
Use More than One Interviewer Research has compared one-to-one interviews with panel or board interviews; two or more interviewers clearly get far better results than just one. Perhaps two or more have more time to make notes, listen to what the candidate is saying, plan their next question etc. Perhaps two or more interviewers are less likely to be swayed by idiosyncratic biases. Many employers insist on panel interviews, and equal opportunities agencies also recommend their use.
Use the Same Interviewers Throughout Sharing interviewing means different applicants, even for the same job, are interviewed by different interviewers. Common sense suggests using the same interviewer(s) throughout should give better results; research confirms that it significantly improves interview validity.
Train Interviewers Training makes interviewing more reliable, and significantly improves interview validity. Training changes interviewer behaviour; improvements include asking more open-ended questions, more ‘performance differentiating’ questions, and being less likely to stray from the point. Using untrained interviewers will make it much more difficult to defend selection methods if they are challenged.
Take Notes Some interviewers refrain from taking notes on the argument that it distracts the candidate. On the other hand, an increasing number of organisations require
interviewers to make notes, which the organisation keeps in case of subsequent dispute. Comparing 55 studies where interviewers did not take notes with 68 where they did showed that taking notes significantly improves interview validity (Huffcutt and Woehr, 1999).
Make Ratings Interviews that use descriptively anchored rating scales achieved higher average validity than ones that did not. Chapter 14 describes anchored rating systems.
Recommendation: Ensure interviewers have a rating scheme, and use it.
WHAT THE INTERVIEW ASSESSES
Recent research has compared interview ratings with other assessments, usually psychological tests. This gives some indication of what the interview is actually assessing (which is not necessarily what it is intended to assess). Figure 9.3 summarises some interesting results:
• The interview turns out to make a moderately good disguised mental ability test.
• The less structured the interview, the more it tends to measure mental ability.
• The lower level the job, the more the interview tends to measure mental ability.
• The more the interview tends to measure mental ability, the more it also predicts job performance, i.e. the more valid it is.
Figure 9.3 What the interview assesses (correlations of interview ratings with mental ability, grade point average, social skill, (low) neuroticism, extraversion, openness, agreeableness and conscientiousness). Source: Data from Salgado and Moscoso (2002).
It is quite surprising that the conventional interview turns out to be assessing mental ability to such an extent, because only 16% of interviews are intended to assess mental ability. Nor does it seem likely that most applicants, interviewers, HR managers or lay people see interviews as mental ability tests. Interview ratings do not, however, reflect applicants' educational level to any extent.
• More extravert, open, agreeable and conscientious applicants get better interview ratings, as do less anxious applicants. Interview ratings are based partly on applicants' personality, whether or not the interview is intended to assess these attributes.
• More socially skilled applicants get better interview ratings, which is hardly surprising since the interview is a social encounter.
LAW AND FAIRNESS
Interviews can get in trouble with the law in two quite different ways:
• Interviews can create adverse impact: when the year's selection decisions are reviewed, it may emerge that more men than women are made job offers after being interviewed (or more whites than minorities etc).
• Interviews can show direct discrimination, when the interviewer asks questions that indicate women are being treated differently to men (or that minority or disabled applicants are being treated differently).
It is very easy for inexperienced or ill-informed interviewers to ask unacceptable questions in a conventional interview. It is essential, therefore, for all interviewers to be trained, and for the training to make clear what sort of questions should not be asked. A structured interview, which has a script to follow, helps keep the interviewer out of trouble. In the UK, the Equal Opportunities Commission's Code of Practice says 'questions posed during interviews [should] relate only to the requirements of the job. Where it is necessary to discuss personal circumstances and their effect upon ability to do the job, this should be done in a neutral manner, equally applicable to all applicants'. These days most interviewers know that you should not ask questions that imply that the job is better suited to male applicants. The UK Disability Discrimination Act means interviewers should avoid questions about health or disability, unless they are very definitely job related. The interviewer can ask the applicant if he/she is able to perform tasks that are essential to the job, e.g. picking up 30 kg boxes. The list of questions not to ask tends to be longer in the USA. For example, the Washington State Human Relations Commission lists these topics as objectionable:
• arrests
• citizenship
• spouse's salary, children, child care arrangements, dependants
• over-general information, e.g. 'do you have any handicaps?'
• marital status
• military discharge
• information in relation to pregnancy
• whether applicant owns or rents home.
Recommendation: Have a list of questions that should not be asked in interview, and ensure that every interviewer is familiar with it.
A recent survey (Terpstra et al., 1999) found that unstructured interviews are the most frequent source of dispute in the USA. The survey estimated how many complaints we would expect for any selection assessment, given its frequency of use in the USA. Unstructured interviews are complained about twice as often as would be expected (whereas structured interviews are only complained about half as often as would be expected). The complaints seem to have had some basis, because 40% of employers lost their case. Another recent analysis (Williamson et al., 1997) of 130 American court cases identified features of the interview that help employers defend themselves against claims of unfairness, and found two main themes:
• Structure: the use of standard sets of questions, limiting interviewer discretion, and ensuring all interviews are the same.
• Objectivity and job relatedness: the use of objective, specific, behavioural criteria, as opposed to vague, global, subjective criteria; also, an interviewer who is trained and who is familiar with the job's requirements.
Interviews that possess these features help employers avoid expensive and damaging court cases. The next chapter describes structured interviewing in greater detail.
CASE STUDY
Below is an interview for the post of HR manager in a British university. Make notes on any part of this interview you would do differently, and why. What mistakes do you think the interviewer is making? If you want to do this exercise 'blind', cover up the bracketed comments, which give our 'model answers'.

Q1. Why do you think you are suitable for this job?
[1. Very abrupt start—no establishing rapport. 2. No explanation of interview form or place in the selection process.]
A1. I see myself as very well qualified.

Q2. It's always nice to get fresh blood from the colonies.
[1. Does not follow up previous answer. 2. Patronising comment.]
A2. I'm not Australian, I've only been out there two years.

Q3. And now you want to get back to England. What do you think about the economy?
[1. Does not ask why candidate is leaving Australia. 2. Ambiguous—which economy? 3. What relevance to HR?]
A3. Creates some problems as usual.
[Vacuous answer, needs challenge.]

Q4. Tell me about your present job.
A4. I was HR manager for Woollomolloo City Council.
[Has not read application carefully enough and not spotted candidate left there 3 months ago.]

Q5. How many employees does the council have?
A5. Sixty-four.

Q6. What main problems did you encounter in that job?
A6. I suppose a general lack of direction in many of the employees.

Q7. What do you mean by that? Can you give me an example?
A7. Oh, people who don't do much until lunch, then go over the bar and come back and fall asleep.

Q8. What did you do about that?
A8. Nothing much I could do, the whole outfit was permeated with inefficiency.
[Does not follow up this comment.]

Q9. What do you think of the aborigine problem in Australia?
[Leading question—assumes aborigines are a problem.]
A9. We didn't have many abbos in Woollomolloo.
[1. Has not answered the question. 2. 'Abbo' is an offensive term.]

Q10. What was your single biggest achievement at Woollomolloo?
A10. I helped get a big contract with Melbourne City.

Q11. What was the contract for?
A11. School library cataloguing.

Q12. How big was it?
A12. Very big, biggest thing we'd ever done.
[Very vague answer.]

Q13. I mean what sums were involved?
[Good probe.]
A13. Roughly 56K.

Q14. Fifty-six thousand Australian dollars? You said you helped. Describe your role.
[Good clarification.]
A14. I drew up the service contract.

Q15. Who else was involved? What did they do?
A15. There were 12 in the team. Some sold the idea to Melbourne; the people from education section liaised with schools.
[Probing questions establish that candidate played a subordinate role in a fairly small project.]

Q16. Tell me about your manager at Woollomolloo. How did you get along with him?
A16. Her actually. Not very well.
[Deserved rebuke!]

Q17. Why not?
[Misses chance to apologise for mistake.]
A17. Meeting deadlines has never been one of my strong points.

Q18. Woollomolloo certainly sounds a place to get away from. Tell me about your outside interests.
[Misses opportunity to ask about deadlines.]
A18. I'm very interested in art, drama and classical music.

Q19. What do you think of the cricket results? They've never been so bad. I blame Jefferson.
[Completely irrelevant.]
A19. [None]
[Candidate is not interested in cricket!]

Q20. Are you on your own, or do you live with someone?
[Intrusive question, possibly sexist.]
A20. Why do you ask?
[Why indeed?]

Q21. How many days were you off work sick last year?
A21. Ten actually.
[Will candidate give an honest answer? How can you check?]

Q22. Why?
A22. Knee injury from rugger.
[Suppose the answer was depression: could contravene disability discrimination laws.]

Q23. If you were giving a presentation to a public meeting, and someone got up and started a personal attack on you, what would you do?
[Fairly unlikely scenario—what is the point of the question?]
A23. I don't give presentations to the public.
[Question proves pointless!]

Well, thank you very much, Ms Jones. We'll let you know.
[Does not ask if candidate has any questions.]

Q24. Is it true that your HQ is relocating to Manchester next year?
[Candidate asks them anyway.]
A24. Er, yes probably.

Q25. I'd like to ask about training entitlement, your ad said something about six days.
A25. Well, that's the theory, but in the real world you'll be lucky to get two.
[The interview is supposed to sell the organisation, not knock it.]
CHAPTER 10
Structured Interviews
OVERVIEW
• Structured interviews control interview format and questions, and provide detailed rating systems.
• Structured interviews are based on competence analysis.
• Structured interviews are highly reliable and achieve good validity.
• Research suggests that question format in structured interviews is not critical.
• Structured interviews have been the subject of fewer fair employment claims, far fewer of which proved successful.
• Structured interviews may be more like orally administered tests.
INTRODUCTION A structured interview can be very different from the traditional interview. For example, the candidate for a technical sales post sits before a panel of interviewers. The chair reads out the first question: ABC are interested in buying 100 units of your product at a price which is the very lowest your organisation can offer. At your third meeting ABC say they wish to confirm the offer but at a reduced price, which will barely cover your costs. What reply would you give them?
The applicant takes a few minutes to compose a reply, then reads it out. The panel make copious notes, but say nothing, until the chair breaks the silence by reading out the second question on the list.
Deficiencies of the Traditional Interview The interview is a test or assessment, so we can ask the usual questions—about reliability, validity and fairness—that we ask about any assessment method. As we saw in Chapter 9, the answers for the traditional interview are not all that promising.
• interviewers agree with each other fairly poorly about applicants;
• the validity of unstructured interviews is low, especially when done one to one;
• interviewers cannot be unaware of applicants' gender, age, or ethnicity.
Unsurprisingly, we find that unstructured interviews are a frequent source of complaint in court cases about unfair selection.
WHY THE TRADITIONAL INTERVIEW DOES NOT WORK Interviewee Inconsistency The traditional interview is unreliable because it gives the applicant considerable freedom, to tell different interviewers different parts of his/her story (applicants are not necessarily lying, just covering different ground), so interviewers draw different conclusions. Structured interviews ask every applicant the same questions, which will secure much better reliability.
Interviewee Impression Management The unstructured interview gives the applicant ample scope to talk about his/her good points and steer away from weaknesses and past failures. Structured interviews have a more definite agenda, which is based on a detailed analysis of the job’s requirements, so applicants will find it more difficult to gloss over their shortcomings.
Interviewers Differ Some interviewers are more successful than others. However, selecting the selectors— while a logical solution—would not be very popular in many organisations. Who wants to tell the MD he/she cannot interview applicants because he/she always gets it wrong? It might be easier to change the way interviews are done, so the MD has more chance of getting it right. Recent American research suggests structured interviewing does iron out interviewer differences, and makes all interviewers equally effective (Posthuma et al., 2002).
Interviewer Bias Anecdote and extensive research describe a whole series of biases that can affect the interviewer’s judgement. Some are widely shared, and may be based on visible characteristics like race, gender, age or social background. A survey of UK recruiters in 2003 found around 20% did not like people with Birmingham or Merseyside accents, especially for professional jobs (HR Gateway Editorial, 2004). Others are more idiosyncratic: ‘the sort of man who can’t pass a mirror without combing his
hair’, people who split infinitives, ‘train spotters’. The more useful, objective information the interviewer has, the less likely such biases are to affect his/her judgement, so an interview scripted to include only relevant questions should give the interviewer enough hard facts to overcome his/her biases.
STRUCTURED INTERVIEWING
Structured interviewing has developed rapidly since it started in North America, and is now widely used in the UK, in local government, the financial sector, for sales, manufacturing, and the hotel industry. A survey in the UK in 1999 found 83% of employers reported using some degree of structure in interviewing, especially 'behavioural' interviewing, also known as competency-based interviewing (Barclay, 2001). Structured interview systems structure every part of the interview:
• Interviewers' questions are structured, often to the point of being completely scripted.
• Interviewers' judgements are structured by rating scales, checklists etc.
• Interviewers are required to rate each answer immediately it is given, to try to limit the tendency to form global impressions of applicants.
• The interviewers do not spend any time telling the candidate about the organisation; this is done separately.
• Interviewers are not allowed to discuss candidates between interviews.
• In some systems, the interviewers are not given any information about the candidates before the interview; they do not see application forms or CVs in case these create any bias or preconceived notions.
• In some systems, the interviewer is not allowed to ask any follow-up, probing or clarifying questions, on the grounds that this introduces differences between interviews and interviewers, and so reduces reliability.
• The traditional last phase of the interview—asking the interviewee if he/she has any questions—is sometimes omitted, on the grounds that interviewees could bias the interviewers by asking a silly question. (Applicants get the chance to ask questions on some other occasion, where they are not being formally assessed.)
• In some systems, applicants are not permitted to ask questions earlier in the interview, as they sometimes use this to take control, and guide the interview away from areas they do not want to discuss.
STRUCTURED INTERVIEW SYSTEMS There are several structured interview systems in current use, devised in North America, Canada and Germany: • Situational Interview, devised by Gary Latham in Canada (Latham et al., 1980). • Patterned behaviour description (PBD) interview, devised by Tom Janz in the USA (Janz, 1982).
• Comprehensive Structured Interview, devised by Michael Campion in the USA (Campion et al., 1988). • Multimodal interview, devised by Heinz Schuler in Germany (Schuler and Moser, 1995). • Empirical interview, devised by Frank Schmidt in the USA (Schmidt and Rader, 1999). The longest established systems are Situational Interviews, and patterned behaviour description interviews, which have some features in common, but also some important differences. Some structured interview systems are more elaborate, and have a number of sections intended to serve different purposes.
Situational Interview Gary Latham and colleagues devised the Situational Interview system in the Canadian lumber industry. Like nearly all structured interviewing systems, it starts with a detailed competence analysis, which ensures that all the questions are job related. Information is collected about particularly effective or ineffective behaviour at work, using critical incidents. A typical incident reads: The employee was devoted to his family. He had only been married for 18 months. He used whatever excuse he could to stay at home. One day the fellow’s baby got a cold. His wife had a hangnail or something on her toe. He didn’t come to work. He didn’t even phone in.
Hundreds, even thousands of such incidents are collected. Panels of experts, managers, supervisors and psychologists then sort the incidents into themes, for example responsibility. These themes form the basis of the competence framework used for the job: the list of skills or characteristics good employees have, and poor ones lack. The Situational Interview is composed by selecting one or more incidents for each competence to be assessed, and rewriting them as interview questions: Your spouse and two teenage children are sick in bed with a cold. There are no friends or relatives available to look in on them. Your shift starts in three hours. What would you do in this situation?
The expert panels who provide the incidents of good and poor work also agree benchmark answers for good, average and poor workers: a. I’d stay home—my spouse and family come first. (poor) b. I’d phone my supervisor and explain my situation. (average) c. Since they only have colds, I’d come to work. (good)
The way the questions are generated virtually guarantees they are job related. This makes the interview more effective, and less likely to give rise to complaints.
At the interview, the questions are read out, and the candidate replies and is rated against the benchmarks. The interviewers do not prompt the candidate, nor do they follow up or probe in any way. The Situational Interview looks forward, asking candidates what they would do on some future occasion. This means the Situational Interview uses hypothetical questions—'What would you do if . . . '—which some traditional interview lore advises against. There are six steps in constructing a Situational Interview system:
1 Carry out a competence analysis using critical incident technique.
2 Select one or more critical incidents per competence.
3 Use an expert panel to convert the critical incidents into 'What would you do if . . . ' questions.
4 Use the expert panel to devise a five-point scoring guide for each question.
5 Review the questions to ensure comprehensive coverage of your competences.
6 Carry out a pilot study to eliminate questions that do not differentiate applicants, or where interviewers cannot agree on the rating to give.
Patterned Behaviour Description Interview Patterned behaviour description (PBD) interviews also start with analysing the job by collecting critical incidents to identify competences, but differ from the Situational Interview in two ways. Firstly, PBD interview questions look back, focusing on actual behaviour that occurred in the past (whereas the situational interview looks forward, asking about hypothetical situations applicants might encounter in the future). Tom Janz, who invented the method, argues that past behaviour is the best predictor of future behaviour, so interviewers should not ask about applicants’ credentials, experience or opinions, but about what they actually did on past occasions relevant to what we are trying to predict. Critics note that questions about past work behaviour are not always possible for people with no employment experience such as school leavers or new graduates. Competence analysis indicates that ability to get on with others is important, so a critical incident is rewritten as a PBD interview question: We all experience some difficult times with people we work with. Tell me about the most difficult time you have had with a colleague.
The questions are mostly phrased in superlatives: the most difficult customer, your biggest problem, your greatest achievement. Secondly, the PBD interviewer plays a more active role than the situational interviewer, being ‘trained to redirect [applicants] when their responses stray from or evade the question’ (Janz, 1982). The interviewer wants an account of what the interviewee actually did and said on a specific occasion, not a description of what
they usually do, or think people in general should do. To help get the right information, the interviewer is provided with a list of prompts:
• What led up to this event?
• How did you approach the situation?
• What did you say?
• What did your colleague say in reply?
Comprehensive Structured Interview Comprehensive Structured Interviews are more elaborate; they have four separate sets of questions: • Job knowledge. For example, ‘Explain to me the concept of present value, and provide an example of a business application’, or for a blue-collar job ‘Explain why you would clean all the components of a piece of machinery before re-assembling it’. • Job simulation. For an HR person, who is required to administer written tests: ‘Please read these exam instructions aloud to us, as if you were reading them to a large candidate group’. • Worker requirements. Assessed by questions like: ‘Some jobs require climbing ladders to a height of a five-storey building and going out on a catwalk to work. Give us your feelings about performing a task such as this’, or for office workers ‘Please describe you previous work experience of preparing detailed financial reports’. • Situational questions. The fourth section uses the Situational Interview format described above. As in the Situational Interview, no attempt is made to probe or follow up; the emphasis is on consistency of administration.
Multimodal Interview The Multimodal Interview, devised in Germany by Heinz Schuler, originally for selecting bank clerks, is probably the most elaborate system. It has eight sections, including: • an informal rapport-building conversation; • a self-presentation lasting 3 to 5 minutes in which the applicant talks about his/her background and career expectations; • standardised questions on choice of career and organisation; • questions on past experience, for example of working in groups; these probe into problems that arose and what the candidate did to solve them; • realistic job preview, where the interviewer tells the applicant about the less attractive parts of the job;
• situational questions, about how applicants would deal with problems that might arise in banking work; • applicant’s questions. The multimodal interview deals with the past versus future behaviour issue, by covering both, in two separate sections.
Empirical Interview This system also starts with a competence analysis; good performers are interviewed to identify 10–15 themes in effective performance, e.g. teamwork. An expert panel next develops a pool of a hundred or more possible interview questions. This pool of possible questions is then tested empirically—hence the system’s name. The researchers take 30 outstanding and 30 unsatisfactory performers from people actually doing the job, ask them all the draft questions, then select the questions that best distinguish good from poor employees. The empirical interview is eclectic. Unlike other structured interview systems, it does not confine itself to one particular type of question: past behaviour, future behaviour etc. All types of questions are tried out, and used if they work in the empirical keying phase, even questions one might dismiss as too vague, such as ‘Are you competitive?’. The empirical interview has some further radically different features: • Applicants are interviewed by telephone, which saves on travel costs. • The interview is tape recorded. It is strange how seldom interviews are recorded, given how difficult it is to remember everything the applicant says, or to take accurate and detailed notes. Recording can protect the employer in case of subsequent dispute about who said what. • The recorded interview is scored later by someone else. Separating the role of interviewer and assessor allows the assessor to concentrate on what the applicant is saying, without having to worry about composing the next question or managing the interview. The empirical method was first devised for writing personality questionnaires. In both approaches, questions are retained only if they succeed in distinguishing one group from another, regardless of their apparent relevance, or theoretical relevance. It will be clear by now that structured interviewing goes a lot further than following the ‘seven-point plan’, or agreeing who asks what before the interview starts. Setting up a true structured interview system is a major undertaking, which will take a lot of time, and cost a lot of money (less if the employer already has a well developed set of competences, based on a thorough competence analysis).
Recommendation: Bring in experts to devise structured interview systems for you.
SUCCESS OF STRUCTURED INTERVIEWS

Reviews of interview research confirm that structured interviews achieve far better results than conventional interviews. Figure 10.1 summarises numerous studies on the structured interview, and shows that validity for structured interviews is twice that for unstructured interviews. Structured interviews work equally well whether there is one interviewer or several (whereas unstructured interviews give better results with a panel). Structured interviews based on a formal competence analysis (rather than an ‘armchair’ job analysis, or none at all) achieve even better results. These results indicate that the considerable extra cost of structured interview systems is justified. Structured interviews select good applicants accurately; they are one of the most successful selection methods around. The conventional interview, by contrast, is a fairly poor way of finding good people.
Situational versus Patterned Behaviour Description

Accumulating research on structured interviewing makes it possible to start comparing different systems. A comparison of 31 investigations of the Situational Interview and 20 of the Behaviour Description Interview found virtually no overall difference (Huffcutt et al., 2004). However, Figure 10.2 indicates that the Situational Interview may not be suitable for high-complexity jobs such as plant manager, and may be more effective for ‘low-complexity’ jobs such as lumber worker.
Empirical Interview

An extensive programme of research shows the empirical interview is particularly successful (Schmidt and Rader, 1999). Figure 10.3 shows the empirical interview has good accuracy overall, assessed against the conventional supervisor rating criterion.
Figure 10.1 Validity of conventional and structured interviews (unstructured interview, structured interview, and structured interview plus competence analysis; validity scale 0–1). Source: data from Wiesner and Cronshaw (1988).
Figure 10.2 Validity of situational and behaviour description interviews for jobs of low, medium and high complexity (validity scale 0.10–0.35). Source: data from Huffcutt et al. (2004).
Figure 10.3 Validity of the empirical interview against five criteria: supervisor rating, production, sales, (low) absenteeism and tenure (validity scale 0–0.5). Source: data from Schmidt and Rader (1999).
Figure 10.3 shows that the empirical interview can also predict other aspects of work performance: production, sales and staying with the organisation. Only absenteeism is less well predicted. As Schmidt and Rader note, no previous research had shown the interview able to predict sales, production or absence.
Recommendation: When choosing a structured interview system, choose one that you think will suit your organisation. They all seem to work equally well.
Types of Question

We now have enough research to compare forward-oriented or hypothetical questions with past-oriented or experience-based questions. A summary of 49 separate studies indicated that the two types of question give equally good results (Taylor and Small, 2002). The two types of question may nevertheless assess fundamentally different things: situational questions may assess what people know, i.e. ability, whereas behaviour description questions describe what people have actually done, so may reflect typical performance, and hence personality, more.
What the Structured Interview Assesses

Recent research has compared structured interview ratings with other assessments, usually psychological tests, which starts to tell us what the structured interview is actually assessing (which is not necessarily what it is intended to assess). Recall that the traditional interview turns out to be assessing mental ability to a surprising extent.

• Mental ability. Structured interview ratings show some overlap with mental ability test scores, but not as much as traditional interviews. PBD interviews seem particularly successful at avoiding overlap with tested mental ability.
• Job knowledge. Structured interviews correlate well with paper and pencil job knowledge tests. Critics might say this is not surprising, because many structured interviews look very like oral job knowledge tests.
• Personality. Structured interview ratings are less affected by personality than conventional interviews, so are more than just disguised personality tests.
• Social skill. Structured interviews correlate surprisingly highly with the applicant’s social skill, which may suggest that structured interviews do not entirely succeed in excluding irrelevant considerations from the interviewers’ decisions.
Incremental Validity

Can the structured interview improve the prediction made by other methods, by covering aspects of work performance that other selection methods fail to cover? Frank Schmidt argues that unstructured interviews will add little to mental ability tests, because unstructured interviews are highly correlated with mental ability in the first place. Structured interviews, on the other hand, succeed in being more than disguised intelligence tests, so should add something extra. In fact the combination of mental ability test and structured interview is one of the three combinations that Frank Schmidt recommends selectors use. Recent research confirms that a structured interview has considerable incremental validity over tests of mental ability and conscientiousness (Cortina et al., 2000).
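The idea of incremental validity can be made concrete with a small calculation. The sketch below, in Python, uses invented data purely to illustrate the arithmetic (the sample size, correlations and variable names are assumptions, not results from the studies cited): the incremental validity of the interview is the amount by which the multiple correlation with the criterion rises when structured interview ratings are added to the ability test.

import numpy as np

# Invented example data: ability test scores, structured interview ratings
# and a job performance criterion for 200 hypothetical employees.
rng = np.random.default_rng(1)
n = 200
ability = rng.normal(size=n)
interview = 0.3 * ability + rng.normal(size=n)
performance = 0.4 * ability + 0.3 * interview + rng.normal(size=n)

def multiple_r(predictors, criterion):
    """Correlation between the criterion and its least-squares prediction."""
    X = np.column_stack([np.ones(len(criterion))] + list(predictors))
    predicted = X @ np.linalg.lstsq(X, criterion, rcond=None)[0]
    return float(np.corrcoef(predicted, criterion)[0, 1])

r_ability = multiple_r([ability], performance)
r_both = multiple_r([ability, interview], performance)
print(f"ability alone: R = {r_ability:.2f}")
print(f"ability + structured interview: R = {r_both:.2f}")
print(f"incremental validity: {r_both - r_ability:.2f}")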
FAIRNESS AND ADVERSE IMPACT

Fairness

Structured interviews are designed both to be fair and to look fair. Conventional interviews are often criticised because different applicants are asked different questions. The structured interview’s use of a script prevents this and ensures all applicants’ interviews are the same. Structured interviews should also prevent the interviewer wandering off into irrelevant and possibly dangerous areas.

Two surveys of fair employment cases in the USA confirmed the value of structured interviews (Terpstra et al., 1999; Williamson et al., 1997). The first analysed 130 court cases, to identify features of the interview that helped employers defend themselves against claims of unfairness. The analysis identified two main themes. The first was structure: the use of standard sets of questions, limiting interviewer discretion and ensuring all interviews are the same. This confirms directly that structured interviews do indeed protect the employer against claims of unfairness. The second was objectivity and job relatedness, which structured interviews are more likely to have, because they are based on detailed competence analysis. This also protects the employer against accusations of unfairness.

The second survey analysed the frequency of complaints about a variety of selection assessments, comparing the number of complaints made with an estimate of how many one would expect, given the number of employers using each assessment method. It found that conventional interviews are complained about twice as often as one would expect, but the exact opposite for structured interviews: they are complained about only half as often as would be expected. The survey found a second, even more compelling, reason to commend structured interviews to employers. When structured interviews were the subject of complaint, the employer always won the case. This did not happen with unstructured interviews, where 40% of employers lost their cases.
Adverse Impact Structured interviews are popular in North America partly because it is thought they avoid adverse impact. Research, however, finds a mixed picture.
Gender The most recent review found that structured interviews do not create adverse impact on females (whereas conventional interviews do to some extent) (Huffcutt et al., 2001). However, the research reviewed is nearly all North American. One cannot safely assume similar results will be found in other countries, given how widely attitudes to gender vary. Employers should therefore monitor selection by structured interview as carefully as any other method.
Ethnicity Here we have a difference in expert opinion, and consequently uncertainty. One very recent analysis concluded that structured interviews do not create adverse impact on non-white Americans (whereas conventional unstructured interviews do, especially interviews that assess intellectual ability and experience) (Huffcutt et al., 2001). However, a second review by Philip Roth et al. (2002) has questioned this comforting conclusion. His analysis shows structured interviews do create some adverse impact on African Americans—nowhere near as much as some written tests, but enough to create a problem all the same. Furthermore, Roth argues that the difference found in his analysis may be an underestimate. He points out that adverse impact computations for interviews are often made in organisations where the interview is used after applicants have already been screened by written tests. Pre-screening by written tests will tend to screen out the less able, and therefore lead to an underestimate of the adverse impact of the structured interview. Roth made statistical corrections for this pre-screening effect, and concluded structured interviews might actually be creating a fairly large adverse impact, although still not as great as that created by mental ability tests.
Recommendation: If using a structured interview system, check it carefully for adverse impact.
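One simple way of acting on this recommendation is the ‘four-fifths rule’ of thumb used in the USA: compare each group’s selection rate with the most successful group’s rate, and treat a ratio below 0.8 as a warning sign. The Python sketch below shows the arithmetic only; the group labels and counts are invented, and a ratio below 0.8 is a prompt for proper investigation (and, as Roth’s work above suggests, for thinking about pre-screening effects), not a legal verdict in itself.

# Four-fifths-rule check on invented pass/fail counts by group.
passed = {"group_a": 60, "group_b": 30}    # applicants rated acceptable
applied = {"group_a": 100, "group_b": 80}  # applicants assessed

rates = {g: passed[g] / applied[g] for g in applied}
best_rate = max(rates.values())

for g, rate in rates.items():
    ratio = rate / best_rate
    flag = "possible adverse impact" if ratio < 0.8 else "no flag"
    print(f"{g}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")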
POSSIBLE PROBLEMS WITH STRUCTURED INTERVIEWS

Faking Good and Cheating

We do not at present know much about how fakable structured interviews are. Can people work out the right answers to give, and describe behaviour they did not actually exhibit—in a PBD interview—or choices they would not actually make—in a Situational Interview? Could the interviewer(s) tell if they did? The authors of structured interview systems sometimes say their questions are phrased to avoid suggesting the ‘right’ answer, but we do not know if they succeed. Applicants can seek help about ‘right’ answers from college careers offices and from how-to-be-interviewed books and websites, which abound.

Cheating is also a possibility. Structured interviews tend to have fixed scripts, which cannot be varied, and are not too long to memorise. If applicants memorise the questions and pass them on, the next intake of applicants could give themselves an unfair advantage by working out model answers at their leisure. This risk is probably greater where the system is used for promotion rather than selection.
Recommendation: If using a structured interview system, be alert to the possibility of faking good and cheating.
Resistance within the Organisation

The structured interview deprives interviewers of most of their traditional autonomy, and takes much of the ‘fun’ out of interviewing. Interviewers have to follow a script, have to use rating scales, and are not allowed to chat with candidates or tell them about the organisation. They are not allowed to discuss applicants between interviews. In some forms of structured interviewing they are pretty well reduced to being a substitute for a written test. In others, such as the empirical interview, their role is confined to collecting information; someone else evaluates it and makes the decision. This creates a risk that interviewers will deviate from their prescribed role, unless they are closely monitored. In many organisations the managers who give the interviews are more powerful than the HRM department, which brings in structured interviewing and tries to maintain its quality. These points suggest structured interviewing systems may not always achieve such good results in practice.
Acceptability to Applicants Selection methods have to be acceptable to applicants, especially when applicants are in short supply. Structured interviews are designed to be accurate and fair, so in theory should be more acceptable to applicants than traditional interviews, which can be very hit and miss affairs. However, structured interviews are new and unfamiliar, so applicants may be less willing to accept them. Some structured interviews must be rather strange and unsettling experiences for the applicant. A panel of interviewers read out lengthy complicated questions, then sit in silence waiting for a reply. They do not try to set you at your ease. They may not indicate whether the reply has given the information they wanted, whether it was too detailed or not detailed enough etc. If you ask for clarification of a question, the interviewer may offer no more help than to read it out again.
Recommendation: When introducing a structured interview system, explain it carefully to colleagues and applicants. For colleagues, emphasise the importance of ‘keeping to the script’.
Interview or Spoken Questionnaire? Very structured interviews blur the distinction between interview and paper and pencil test. If the interviewer reads from a script and does not probe or otherwise interact with the candidate, why is he/she there at all? Why not print the questions in a question book, and have candidates answer them in writing? This would be much quicker and cheaper, because all the applicants could be assessed at the same time. Frank Schmidt gives one answer to this question—interview format does not give people time to prepare a carefully thought out answer, which might not be entirely
frank. Another reason is that written tests tend to create more equal opportunities problems than interviews, especially if the selection test requires more reading ability than the job itself.
USEFUL WEBSITES

www.bdt.net. Devoted to Janz’s behaviour description interview.
www.spb.ca.gov/int1.htm. How to write structured interviews, by the California State Personnel Board.
CASE STUDY Angie Blake is HR Manager for Newpharm, a pharmaceutical company producing generic drugs and its own specialist hypertension products.
Memo to Interview Panel

The company is recruiting sales associates to cover potential expanding markets in the South East. Two posts are on offer and a shortlist of five applicants has been chosen from the initial response of 87 completed applications. The applicants have received a letter of invitation to interview together with practice test leaflets covering the psychometric testing session, numerical and verbal critical reasoning and a personality inventory.

Angie has planned the day as an assessment/selection centre, with testing in the morning and, after lunch, a 20-minute structured interview and a 10-minute presentation. You are invited to join the interview panel. Each candidate will be asked to present immediately after the interview. The afternoon is divided into 35-minute slots, allowing 5 minutes change-over time.

The interview panel will be Angie, Regional Sales Manager Joanne Croft and Area Sales Manager Kevin Blake. Another Area Sales Manager, Lee Newton, will be sitting in all day as an observer, undergoing induction as an interviewer. Joanne and Kevin are trained interviewers. Angie will provide a structured interview form for use by panel members.

At the end of the interview, candidates will be requested to make a short, 10-minute presentation on the question ‘How I can increase sales of Newpharm products’. Please meet at 12:30 p.m. for lunch in the restaurant. Thank you for your cooperation on this important day.

Michael Jones
HR Director
Newpharm Standard Interview: Structured questions

Panel members:    Observer:    Date:    Candidate name:

Rating scale: 5 = very good, 4 = good, 3 = acceptable, 2 = poor, 1 = very poor. Each question is rated on this scale and has space for a comment.

1. Why did you enter the sales world?
2. How would you gain access to busy GPs?
3. What has been the sales achievement of which you are most proud?
4. What is the biggest challenge you would face selling for Newpharm?
5. What motivates you in work?
6. What motivates you out of work?
7. How would you gain access to busy retail managers?
8. What do you mean by activity levels?
9. How important is size of order?
10. How important is servicing customers?
Newpharm Situational Interview

Rating scale: 5 = very good, 4 = good, 3 = acceptable, 2 = poor, 1 = very poor.

Question 1
It is possible, by arrangement, to work from home. On this particular day, your partner is in bed, ill with flu; your son is competing in school sports day. You have arranged two sales and one service call that afternoon, 60 miles away. What would you do in this situation?
Marking guide for answers
I would ask my son’s uncle to watch him at sports day and make my appointments. (very good: 5)
I would try to cram my appointments into the morning. (acceptable: 3)
I would rearrange my appointments. (very poor: 1)
Question 2
Your area manager wants you to make more sales calls to open new business. You argue that you can bring in business as well by effective servicing. Your manager says, ‘as long as you make 10% above last month’s figures you can do what you like!’ How could you make that 10% increase?

Marking guide for answers
I would increase my prospecting activity rate to new customers and increase my service calls. (very good: 5)
I would try to increase prospecting to new customers. (acceptable: 3)
I would make more service calls. (very poor: 1)
Newpharm Behaviour Description Interview: Questions
Question 1
Hitting your monthly sales target is always the bottom line for sales people. Tell me about the time when you did not hit your monthly target.

Probes
• What led to your not hitting the target?
• How did you try to correct the problem?
• What was the outcome?
• How often has this happened to you in the last two years?
Question 2
Think of the most difficult new client contact you ever encountered. Tell me about it.
Probes
• What made this contact particularly difficult?
• How did you try to deal with the difficulty?
• How did the contact respond?
• What did you learn from this episode?
• What percentage of new clients are like this?
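As a footnote to the case study, a small sketch of how the panel’s question-by-question ratings might be pooled is given below. The candidate labels and scores are invented, and this is only one possible scoring rule; an organisation would normally also agree in advance how to handle large disagreements between raters.

from statistics import mean

# Invented 1-5 ratings given by the three panel members to two candidates
# across the structured questions (only four questions shown for brevity).
panel_ratings = {
    "candidate_1": {"Angie": [4, 3, 5, 4], "Joanne": [4, 4, 4, 3], "Kevin": [3, 3, 4, 4]},
    "candidate_2": {"Angie": [5, 4, 4, 5], "Joanne": [4, 5, 5, 4], "Kevin": [5, 4, 4, 4]},
}

for candidate, by_rater in panel_ratings.items():
    per_rater = {rater: mean(scores) for rater, scores in by_rater.items()}
    overall = mean(per_rater.values())
    print(candidate, {r: round(m, 2) for r, m in per_rater.items()}, round(overall, 2))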
CHAPTER 11
Other Special Assessment Methods
OVERVIEW

• Background checks range from checking educational qualifications through to using people’s credit records.
• Handwriting analysis is widely used in France, but evidence for its value is not convincing.
• Educational achievement predicts work performance in the short term but not in the long term.
• Educational achievement requirements create adverse impact in the USA.
• Work sample tests assess work performance directly and avoid inferences based on abilities or personality traits.
• Work samples have good validity and are acceptable to candidates, but tend to be limited to fairly specific skills.
• In-basket tests are useful for management.
• Self-assessments can be valid indicators of performance but probably not in most selection contexts.
• Physical ability tests will create quite large adverse impact for gender, but can be used if carefully constructed, validated and normed.
• Drug use testing needs to be thought about carefully. Who will be excluded, how and why? What will people think of it?
INTRODUCTION

Besides the main classes of assessment method described in Chapters 2–10, there is a range of more specialised methods. Some are useful, some less so. Most are suitable only for fairly specific or specialised aspects. There are nine classes of miscellaneous selection test that do not fit neatly into any other main category:

• background checks
• graphology
• polygraph
• education
• work sample, trainability and in-tray tests
• self-assessments
• physical tests
• drug use testing
• biofeedback.
Education is probably the oldest of these, having been in use in Imperial China for many centuries to select public officials. Work samples can be traced back to Hugo Münsterberg’s work with the Boston street car system in 1913. In-tray tests date back to 1957. Graphology has been widely used in France for some time. Self-assessments are slightly more recent. Formal physical tests have replaced more casual physical assessments since the 1970s, to conform with fair employment legislation. Background checks are used in North America, especially in screening. Drug use testing is the most recent, and most controversial, of these methods.
BACKGROUND

Background Investigation or Positive Vetting

The application form contains information the applicant chooses to provide about him/herself. Some employers also make their own checks on applicants’ education and employment history, possibly even reputation and lifestyle. This is common practice in the USA, where specialist agencies exist to carry it out. In the UK, background investigations are recommended for child care workers, and are used for government employees with access to confidential information, a process known as positive vetting. Some employers in some sectors are rumoured to check whether applicants are union activists.

Checking reputation and lifestyle can be done in several ways. Some employers or prospective employers make unannounced home visits to see where and how people live; others make enquiries locally about someone’s habits; access to the applicant’s credit card details will tell the employer what he/she spends money on; the police keep records of people’s ‘known associates’. In the words of the old saying: you can judge a man by the company he keeps.

Some employers assess applicants through driving record or financial and credit history. The rationale is that credit and driving checks assess ‘responsible attitude’. Credit checking agencies are used by many organisations, to ensure that customers can or will pay; in the UK the information should not be used to assess job applicants.
Criminal Records Many employers are reluctant to employ someone with a criminal record. In the USA background checking agencies offer criminal record checks. Critics say these
are not very thorough, because the records are not centralised; a thorough check would need to cover hundreds of sets of records scattered throughout the entire USA. Checking criminal records in the UK could be easier because the records are centralised, but is actually more difficult. Until recently employers had no right of access at all to criminal records, so were limited to asking applicants if they had any convictions, or looking for suspicious gaps in employment histories. Some employers were said to employ former police officers who could gain unofficial (and illegal) access to criminal records. Recently the Criminal Records Bureau has been set up to give some employers access to records, especially where the work gives employees access to children (a fairly broad category, covering, for example, anyone who works in any school). The system is not working entirely satisfactorily yet, as the Soham murder case, outlined in our first case study at the end of this chapter, shows.
Recommendation: If you make background checks, use a competent agency, and be sure to stay within the law.
GRAPHOLOGY

Who was ‘Jack the Ripper’? And what sort of person was he? One of the very few sources of information we have is the letters he sent. The letter in Figure 11.1 is the most likely to be genuine; a graphologist says about its author: ‘a hail-fellow-well-met who liked to eat and drink; who might attract women of the class he preyed on by an overwhelming animal charm. I would say in fact he was a latent homosexual . . . and passed as a man’s man . . . capable of conceiving any atrocity and carrying it out in an organised way’ (McLeod, 1968). Jack the Ripper has never been identified, so we cannot check if the graphologist got it right.

Handwriting can be a sign or a sample. The manager who complains he/she cannot read an applicant’s writing judges it as a sample; legible handwriting may be needed for the job. The graphologist who infers repressed homosexuality from handwriting interprets handwriting as a sign of something far removed from putting pen to paper.

Graphology is widely used in personnel selection in France, by 85% of employers; far fewer employers in other countries use it. If handwriting gave an accurate assessment of personality or work performance, it would be very cost effective, because employers could assess applicants from their application forms. However, the research evidence is not promising (Neter and Ben-Shakhar, 1989). Graphology does not give consistent results: two graphologists analysing the same handwriting sample agree very poorly about the writer’s personality. Nor is there much evidence that graphology can predict work performance: graphologists’ assessments of estate agents are completely unrelated either to supervisor ratings or to sales figures.
Figure 11.1 A letter attributed to Jack the Ripper
Graphologists need ‘spontaneous’ writing, so they ask people to write pen pictures of themselves; copying standard text will not do. This creates a problem, however: the graphologist’s assessment is not solely based on handwriting. The content of the Ripper letter tells us quite a lot about the writer: he enclosed half a human kidney and claims to have eaten the other half. When working from pen pictures, non-graphologists, who know nothing about analysing handwriting, do better than graphologists, suggesting they interpret what people write, not how they write it, and interpret it better than the graphologists. With neutral scripts, neither group does better than chance, suggesting either that there is no useful information in handwriting, or that no one presently knows how to extract it.
Recommendation: Be wary of graphology as an assessment method.
POLYGRAPH

The polygraph, or lie detector, was once used widely in the USA, principally to check staff who have the opportunity to steal. The UK government considered using it to test the loyalty of government employees, but thought better of it. US law now prohibits use of the polygraph for most types of employee, which may account for the increasing popularity of paper and pencil honesty tests.

In polygraphy the applicant’s heart rate, respiration and electro-dermal activity are monitored. The applicant is then asked a mixture of harmless ‘control’ questions—‘Is your name John Smith?’—and test questions—‘Have you ever stolen from your employer?’, ‘Are you a member of Al Qaeda?’. If the test questions make the heart beat faster, the breathing change or the electrical activity of the sweat cells in the skin increase, the applicant may be lying.

The principle of the polygraph is sound: anxiety does cause changes in breathing, pulse and perspiration, which are very hard to control. Unfortunately, the polygraph’s practice has difficulties:

• A high rate of false positives—people who appear to have something to hide but haven’t: they may be nervous about the polygraph, not because they are lying.
• The polygraph is likely to miss criminals and psychopaths, because they do not see lies as lies or because they do not respond physically to threat.
• The polygraph might well miss a real spy, who has been trained to mask physical reactions.
Recommendation: Be wary of using the polygraph as an assessment system.
Other techniques for finding out what people are ‘really’ like or are ‘really’ thinking include voice stress analysis and pupil dilation.
Pupil Dilation The pupil of the eye gets bigger (dilates) if the person is interested in what they are looking at. It is said that Chinese jade merchants used this centuries ago to identify which pieces customers were most interested in. Recently pupil dilation has been proposed as a technique for detecting paedophiles applying for work that gives them access to children. Applicants are shown photographs of people of various ages and both genders. If the applicant shows more interest, through dilated pupils, in 10-year-olds than 20-year-olds, he is suspect.
‘Magical’ methods of this sort have several problems:

• They can be intrusive and off-putting to applicants. Intrusion may be justified sometimes.
• They do not work very well. A one-off demonstration by the people selling the system is not sufficient; we need at least a dozen demonstrations by independent researchers that the system actually works, when selecting real employees. We do not have that for pupil dilation as a screening test for child care work.
• Pupil dilation is certain to have a high false positive rate: pupil dilation means you are interested in what you see, but does not tell us what sort of interest. Many people like children without having deviant sexual impulses towards them.

Paul Ekman at the University of California has spent a lifetime researching lying and deception, and has achieved a very valuable insight: people give themselves away when they lie, but everyone has their own particular way of showing they are lying (Ekman and O’Sullivan, 1991). Some people cannot look you in the eye; others’ voices tremble; some people’s hands shake etc. People who know you well—family and friends—can tell if you are lying, because they know how you give yourself away; the personnel manager who does not know you at all will find it much more difficult. Methods based on one cue to deception—voice, pupil dilation etc.—may also fail.
Recommendation: Be wary of using systems intended to tell you if someone is lying.
EDUCATION By education we usually mean marks or grades, in class or examination. The details vary from country to country. Education can be ‘test’ data, which is hard to fake. Some employers are content to let it be self-report data, because they do not check it. Employers in the UK often specify so many A Level passes or a university degree; professional training schemes almost always do. American employers used to require high school graduation or a college degree, but most no longer do so, because of adverse impact problems. Common sense suggests that people who do well at school or university are more able, mature and more highly motivated, so they fit into work more easily. It is a lot easier and cheaper to ask how well someone did at school than to try to assess ability, maturity and motivation. Early analyses of American college grades found only weak relationships with work performance (Baird, 1985). There are low positive correlations between school or college achievements and various indices of occupational success, such as scientific output, performance as a physician or success in management. The most recent analysis of 71 separate studies linking grades to work performance found the
Figure 11.2 College grades and work performance over time (correlation between grade and work performance at 1 year, 2–5 years and 6+ years after graduation).
link is strong at first, but decays very rapidly. Figure 11.2 shows a strong relationship one year after graduation, which has fallen to not far off zero after six years. In the USA, education tests can create serious adverse impact. Some US minorities do less well at school, so are more likely not to finish or graduate from high school. Similar differences are found at university: a large difference in college grades that would create major adverse impact. In 2000, an analysis of 7000 students found the difference increases sharply from first to third year, presumably because of students dropping out (Roth and Bobko, 2000). By the third year, the difference is almost as great as that found for mental ability tests. An American employer who recruits only graduates may therefore be excluding some minorities to a large extent. Adverse impact means the employer has to prove the job really needs the educational level or qualifications specified. This has usually proved difficult in the USA; a review of 83 US court cases found educational requirements were generally ruled unlawful for skilled or craft jobs, supervisors and management trainees, but were accepted for police and academics (Meritt-Haston and Wexley, 1983). We seem generally rather short of comparable data on possible differences in school or university achievement in the UK.
Recommendation: If you use education as a selection assessment, check for possible adverse impact.
WORK SAMPLE TESTS Almost all adults have completed a work sample test: the driving test, an hour spent driving a real car on real roads, through real traffic, including in the UK a couple of standard driving manoeuvres such as a three-point turn.
Wayne Cascio devised a set of 21 work samples for municipal employees in Miami Beach, covering a wide range of manual, clerical and administrative jobs, from electrician’s helper to parking meter technician, from library assistant to concession attendant (Cascio and Phillips, 1979). Cascio found the tests had advantages: • They were very convincing to applicants. If applicants for work as electricians had completed the wiring test correctly, the lights lit up; if the wiring was not correct, and the bulbs did not light, the applicant could not deny he/she had made a mistake. • Some had realistic job preview built in; quite a few would-be sewer mechanics changed their minds after visiting an underground sewage chamber. Work sample tests are justified by behavioural consistency theory, which states two principles: ‘past behaviour is the best predictor of future behaviour’ and ‘like predicts like’. Close correspondence between content of test and content of the actual job ensures higher validity, as well as less scope for legal challenge. Mental ability tests assess the applicant’s general suitability and make an intermediate inference—this person is intelligent so he/she will be good at widget stamping. Testing the employee with a real or simulated widget stamper makes no such inference (nor is widget-stamping ability quite such an emotive issue as general mental ability).
Scoring Work samples have to be observed and rated. Sometimes a global rating is made; sometimes ratings of various aspects of task completion are made, and possibly averaged. Often the rater will be provided with checklists (Figure 11.3). Work samples can usually achieve excellent inter-rater reliability.
Validity Several reviews have all concluded that work samples achieve good validity. A 1972 analysis of tests in the American petroleum industry found work samples achieved validities of 0.30–0.40 for operating and processing, maintenance and clerical jobs (Dunnette, 1972). Another review found work samples the best test (0.54) by a short head for promotion decisions, where, however, all tests give fairly good results (Hunter and Hunter, 1984). Ivan Robertson has reviewed work samples from a wider body of research, taking in US wartime studies and British research, and found work sample validity quite good (an average correlation of 0.39), but with a very wide range (Callinan and Robertson, 2000).
Recommendation: Use work sample tests for jobs that require specific skills.
• Find jack • Find spare wheel • Check tyre on spare wheel • Remove hub cap • Use wheelbrace to loosen wheel nuts • Find jacking point • Use jack to raise car until wheel is off ground • Remove wheel nuts and place in hub cap • Remove wheel
Figure 11.3 Part of a checklist for a work sample test
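Checklist scoring of this kind is easy to turn into a number. The Python sketch below scores the wheel-change checklist in Figure 11.3, assuming each step is simply marked done or not done (the pass/fail marks are invented); the same arithmetic gives the ‘mastered 80% of the job’ figures discussed under domain validity below.

# Score a work-sample checklist: proportion of steps completed correctly.
checklist = {
    "find jack": True,
    "find spare wheel": True,
    "check tyre on spare wheel": False,
    "remove hub cap": True,
    "use wheelbrace to loosen wheel nuts": True,
    "find jacking point": False,
    "use jack to raise car until wheel is off ground": True,
    "remove wheel nuts and place in hub cap": True,
    "remove wheel": True,
}

completed = sum(checklist.values())
mastery = completed / len(checklist)
print(f"{completed}/{len(checklist)} steps completed ({mastery:.0%} mastered)")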
Incremental Validity Michael Campion (1972) described a set of work sample tests for car mechanics. He started with a thorough job analysis, then selected four tasks: installing pulleys and belts, disassembling and repairing a gear box, installing and aligning a motor, and pressing a bush into a sprocket and reaming it to fit a shaft. Campion compared his work sample tests with a battery of paper and pencil ability tests, including the Bennett Mechanical Comprehension Test. The work samples predicted supervisor ratings of performance fairly well, whereas the paper and pencil tests predicted very poorly. Work samples correlate only moderately well with mental ability, which implies they should generally make a useful addition to validity. The combination of work sample and mental ability tests is one of the three that Frank Schmidt recommends employers to consider using.
Domain Validity Work samples can work by domain validity in some circumstances. If the employer can list every task the employee needs to be able to do, and can devise work samples for every task, the employer can then say ‘X has mastered 80% of the job’s demands’. X is being compared here with what the job requires, not with other applicants, as is usually the case in selection. This avoids a lot of problems: • The employer does not need large samples; X may be the one and only person who does that job but we can still say ‘X has mastered 80%’ of it. • If X has only mastered 60% we can send X on more training courses until he/she has mastered enough of the job to be allowed to start work.
• We might still face adverse impact problems, but approaching them from the point of view of how much of the job the person can do competently may be easier to defend in court than conventional statistical validation. The problem with the domain approach is expense and practicality. Few employers can afford to describe their jobs in sufficient detail, and very many jobs do not lend themselves to such detailed descriptions. The American military have listed the critical tasks US Marines have to do, and devised work samples to assess them.
Trainability Tests True work samples can only be used when applicants have already mastered the job’s skills; there is no point giving a typing test to someone who cannot type. Trainability tests are work samples that assess how well the applicant can learn a new skill. Trainability tests are widely used in Skillcentres, for training or re-training people, run by the (UK) Manpower Services Commission. The instructor gives standardised instructions and a demonstration, then rates the trainee’s efforts using a checklist: ‘doesn’t tighten chuck sufficiently’, ‘doesn’t use coolant’, ‘doesn’t set calibrations to zero’ etc. Trainability tests achieve good results for many types of work: bricklaying, carpentry, welding, machine sewing, forklift truck driving, fitting, machining and even dentistry. In the USA, AT&T uses trainability tests, called minicourses, to select staff for new technology training, and reports good correlations between training performance and job performance. A summarising review of trainability tests found they predicted training success much better than job performance, although they are useful for job performance as well (Robertson and Downs, 1989). The review also found that trainability test validity fell off over time quite markedly.
Interview Work Samples

Sometimes it is not practical, or even safe, to let people loose on ‘the real thing’. We would not want to test how good people are at launching ICBMs with real missiles! The American military uses interview work samples, which give results as valid as hands-on work samples, and are much cheaper and safer. Instead of driving a real tank from A to B, the candidate stands by a simulator and explains how to drive the tank.
Law, Fairness and Work Sample Tests Work samples became popular as alternatives to legally risky mental ability tests. Work samples have validity as high as mental ability tests, but create much less adverse impact. For example Cascio’s Miami Beach work samples created no
adverse impact on African Americans and Hispanic Americans, unlike paper and pencil ability tests. American reviews find work samples create far smaller differences than paper and pencil ability tests for African Americans and none at all for Hispanic Americans. Work samples are readily accepted by candidates; they are less often the subject of litigation in the USA, and even less often the subject of successful litigation.
Limitations of Work Sample Tests Work samples are accurate and fair, but they have some limitations. They are: • job-specific, so a tramway system will need different work samples for drivers, inspectors, mechanics, electricians etc., which will prove expensive; • best for specific concrete skills, and do not work so well where the work is diverse or abstract or involves other people; • administered one to one, which again makes them expensive. Many also need apparatus that takes up space and needs maintaining. After you have tested 20 applicants for bricklaying, you will have 20 newly built brick walls to dispose of. (The bricklaying test uses cement that will not set, so the walls are easier to dismantle, but it is still a long and messy task to remove them.) Critics argue that work sample tests are profoundly uninformative, unlikely to retain validity over time, and difficult to modify to regain lost validity. A motor mechanic work sample may ‘combine knowledge of carburettors, skill in using small tools, and . . . reasoning ability’ (Barrett, 1992). The work sample does not separate these abilities. If car engines change, by adopting fuel injection, the work sample loses its validity. The work sample is difficult to modify because there is no information about the contribution of different elements to the whole (whereas paper and pencil tests can be analysed item by item, and items that change meaning can be omitted or changed). The same difficulty arises if the work sample creates adverse impact; the test’s users may not know what to alter to eliminate the adverse impact.
In-Tray Exercises The in-tray is a management work sample. Candidates deal with a set of letters, memos, notes, reports and phone messages: a couple of sample items are given in Figure 11.4. Applicants are instructed to act on the items, not write about them, but are limited to written replies by the assumption that it is Sunday, and that they depart shortly for a week’s holiday or business trip abroad. The in-tray does not need mastery of specific skills, so can be used for graduate recruitment. The candidate’s performance is usually rated both overall and item by item. The overall evaluation checks whether the candidate sorts the items into high and low priority, and notices any connections between items. Some
12 Clement Attlee Way Progress Estate Allentown 4th March 2004 Dear Mr Prior We live three miles from the Doomray Attomic Power Station. My house is due east of the station so we are down wind of it nearly all of the time. I am very worried about raydiation from the station which I know is damaging my familys health. My little girl, who is only 6 has assthma very badly so she is off school a lot. She also gets numbness in her feet especiailly in cold weather. My husband is ill too. He gets very tired, and had to give up his job with the Council because of his back. I suffer from hemmoehroids very badly, and get terrible head aches severall times a week. Our local Councilor has suggested I write to you because she says your in charge of Attomic Power Stations and could get it shut down if its harming peoples health. Yours faithefully Henrietta Root (Mrs)
Westland Association of Public Employees Eastern Region Office 101 Orwell Street Silverton The workers, united, will never be defeated J Prior Chief Executive Central Power Supply Ltd Anytown 3rd June 2004 Dear Chief Executive WAPE wishes to bring to your attention for your immediate executive action a serious situation that has arisen in the Silverton office of your organisation. The situation relates to the unacceptable behaviour of a senior manager in the aforementioned office. Numerous incidents of a unacceptable overbearing and bullying behaviour have been brought to WAPE’s attention. In the interests of brevity, I will confine myself to describing only a selection of the more salient examples. The individual in question is Mr G. Khan, Chief Quality Control Officer. Tuesday 3rd February. General Office of CPS. Mr Khan loudly criticised Mr Alan Jones in the presence of the whole office, an estimated total of 21 persons. The criticism related to Mr Jones’s failure to file correspondence in accordance with procedures laid down by Mr Khan. The criticism was in our opinion clearly designed to humiliate our member. Mr Khan pulled Mr Jones by his coat sleeve to the office window, pointed to people passing in the street, and shouted loudly enough to be clearly heard throughout the entire general office “see them—they’re paying your wages, and you’re a complete bloody waste of space”. WAPE considers this language to be unnecessarily abusive and offensive. Thursday 12th February, p.m. In Khan’s office. Sally Evans went into Mr Khan’s office to enquire about the request she had made to attend a course at CPS Head Office on Website Design. Mr Khan rudely refused the request saying “It would be a complete waste of my training budget. People like you don’t stay here long enough to justify the cost.” Our member denies this, and says she plans to continue working for CPS for foreseeable future. It might also be pertinent to note that Mr Khan’s nickname in the General Office is “Saddam”. His nickname among the manual grades is unfortunately too offensive to be repeated. The membership also offer the observation that I will convey to you that if Mr Khan talks to members of the public the way he talks to his staff, that it is no wonder he gets no co-operation.
There are other instances of bullying and harassment which we have fully documented, and for which detailed written accounts by witnesses are on file. WAPE wish to give CPS the opportunity to deal with this matter internally. However if the situation is not satisfactorily resolved within a matter of two weeks it will be necessary to proceed to protect the interests of our members. Mr Jones has since resigned from CPS. It is our view that he would have an excellent case for constructive dismissal. Yours fraternally F. Kite Area Secretary
Figure 11.4 Sample items from an in-tray test
scoring methods are more or less objective, e.g. counting how many decisions the candidate makes; others require judgements by the scorer, and focus on stylistic aspects. In-tray exercises can be scored with acceptable reliability by trained assessors. They achieve good validity. Several studies report that in-trays also have incremental validity: they add new information, and do not just cover the same ground as tests of verbal or general mental ability. They do not seem that good, however, at assessing specific competences. If we use an in-tray to try to assess empathy, delegation and time management, we are likely to find that ratings of all three are highly correlated. The in-tray’s main shortcomings arise from the ‘Sunday afternoon’ assumption: writing replies in a deserted office may be quite unlike dealing with the same issues face to face or by telephone on a hectic Monday morning. Tests that require people to write things they normally say to others have been criticised by fair employment agencies on both sides of the Atlantic.
SELF-ASSESSMENTS Gordon Allport (1937) once said ‘If you want to know about someone, why not ask him? He might tell you’. Self-assessments ask people for a direct estimate of their potential. The NEO Assertiveness scale infers dominance from the answers people give to eight questions; a self-assessment gets straight to the point: ‘How assertive are you?’—on a five-point scale. A typing test takes half an hour to test typing skill; a self-assessment simply asks ‘How good a typist are you?’.
Validity Early research compared typing tests with typists’ ratings of their own skill (Ash, 1980). The best result suggested Allport was right; people are quite good judges of their own abilities. Self-ratings of other aspects of typing skill—typing letters, tables, figures and revisions—were much less promising. Other early research found
self-assessments of spelling, reading speed, comprehension, grammar etc. correlated poorly with supervisor rating. Researchers also tried comparing self-assessments of mental abilities—mechanical comprehension, spatial orientation, visual pursuit etc.—with actual test scores, and found very low correlations, suggesting self-assessments could not be substituted for tests.

A summarising review of 43 studies found quite good results: a true validity of 0.42 (Mabe and West, 1982). The review was able to identify three conditions where self-assessments were most accurate:

• When given anonymously;
• When people were told self-assessments would be compared with objective tests;
• When people compared themselves with fellow workers.

All three make self-assessments fairly impractical for selection.

Extensive research has compared ratings of work performance by self, by supervisor and by peers, or colleagues at the same level. Self-ratings of work performance agree fairly poorly with others’ ratings (only around 0.35). Agreement was higher in blue-collar workers and lower in managers, suggesting that egocentric bias in rating work performance is greater in work that has less in the way of visible output. While self-assessments have sometimes been shown to predict performance quite well, hardly anyone has used them for real decisions. It seems likely self-assessments are not used because employers suppose people cannot or will not give accurate estimates of their abilities. Other studies have found people overestimate their abilities.
Recommendation: Be wary of using self-assessments as a selection assessment.
PHYSICAL TESTS

Sometimes physical tests are useful in selection:

• some jobs require strength, agility or endurance;
• some jobs require, or are felt to require, physical size;
• some jobs require dexterity;
• for some jobs attractive appearance is, explicitly or implicitly, a requirement;
• in pilot training, reaction time predicts success to a modest extent.
Strength Tests of strength are sometimes used in the UK, often in a fairly arbitrary or haphazard way. Fire brigades require applicants to climb a ladder carrying a weight. Other
employers rely on the company medical check-up, or an ‘eyeball’ test by the supervisor. North American employers use physical tests much more systematically: • The gas industry uses four isometric strength tests—arm, grip, shoulder and lower torso—for jobs with high or medium strength requirements. • Armco Inc. uses a battery of physical work sample tests for steelworks labourers, and has extensive data on norms and gender differences. • Many AT&T engineering jobs require workers to climb telegraph poles; the company has developed a battery of three tests: balance, static strength (pulling on a rope), and higher body density (less fat, more muscle).
Dimensions of Physical Ability

Many different measures of physical ability can be used, but statistical analysis finds they are all quite closely related, so we simplify our account of physical ability considerably. Fleishman concluded that there are nine factors underlying physical proficiency (Table 11.1), and developed his Physical Abilities Analysis system, a profile of the physical abilities needed for a job (Fleishman and Mumford, 1991). Hogan (1991) suggests an even simpler model, with only three main factors: strength, endurance and movement quality. Movement quality is mostly only relevant in sport, so we have a two-factor model for selection.

Table 11.1 Nine factors in human physical ability

• Dynamic strength: ability to exert muscular force repeatedly or continuously (e.g. push-ups, climbing a rope).
• Trunk strength: ability to exert muscular force repeatedly or continuously using trunk or abdominal muscles (e.g. leg-lifts, sit-ups).
• Static strength: the force an individual can exert against external objects, for a brief period (e.g. lifting heavy objects, pulling heavy equipment).
• Explosive strength: ability to expend a maximum of energy in one act or a series of acts (e.g. long jump, high jump, 50 metre race).
• Extent flexibility: ability to flex or extend trunk and back muscles as far as possible in any direction (e.g. continual bending, reaching, stretching).
• Dynamic flexibility: ability to flex or extend trunk and back repeatedly (e.g. continual bending, reaching, stretching).
• Gross body coordination: also known as agility.
• Balance: ability to stand or walk on narrow ledges.
• Stamina: cardiovascular endurance, the ability to make prolonged, maximum exertion (e.g. long-distance running).
Validity

AT&T’s battery of pole-climbing tests identified people who were more likely to last in the job for at least six months. In the US Navy, three physical tests—one-mile run, sit and reach test, and arm ergometer muscle endurance—reduced considerably a very high dropout rate in underwater bomb disposal training. Overall, physical tests achieve moderately good validity for jobs which really need physical ability (around 0.32). Some jobs turn out not to need physical abilities, despite being generally supposed to: the classic example is police work, where attempts to prove physical tests valid sometimes end in failure.

Physical tests also predict whether employees can cope with the job without finding it too demanding or suffering injury. The greater the discrepancy between a worker’s strength and the physical demands of the job, the more likely the worker is to suffer a back injury—a notorious source of lost output in industry. The relation is continuous and linear and does not have a threshold, so an employer who wants to minimise the risk of back injury should choose the strongest applicant, other things being equal.
Adverse Impact

Physical tests create very substantial adverse impact problems:

• Women do less well on average on many physical tasks. Many sports have separate competitions for men and women for this reason. The male–female difference in some aspects of physical ability is very large.
• In North America there are average differences between some ethnic groups in physical size and strength.
• Physical requirements tend to exclude disabled applicants.

A 1999 review of fair employment cases in the USA found physical tests featured three or four times as often as would be expected from their estimated frequency of use (Terpstra et al., 1999). Hence physical tests need to be carefully chosen and validated. Requiring applicants for police work to run a mile was ruled unfair because job analysis revealed most police pursuits on foot were only for short distances. An employer who uses physical tests therefore needs to be more systematic than in the past.

Calvin Hoffman (1999) describes a careful programme of physical testing within the American gas industry. Key jobs such as construction crew assistant were used to show that isometric strength tests were valid predictors of the ability to do work involving carrying 80 lb bags of sand or wrestling with rusted-up bolts. This allows the employer to use these tests, even though they create adverse impact. The employer cannot these days simply say it is ‘obvious’ construction crew need to be strong; the law requires the ‘obvious’ to be proved. Many gas industry jobs have too few people doing them for conventional validation to be possible, so Hoffman used the Position Analysis Questionnaire (described in Chapter 7) to group jobs into construction, customer service, clerical etc., and to indicate the likely physical requirements of each group. Construction and warehousing jobs had high physical requirements; customer service was medium; clerical was
low, etc. Included in the group were ‘marker’ jobs such as construction crew assistant. This enabled Hoffman to argue ‘this job is similar in PAQ profile to the construction assistant job, for which we have a validity study, so it is reasonable to expect physical tests to be valid for this post too’. Hoffman recommends that employers do not rely solely on the PAQ but have the physical requirements of each job reviewed by in-house job analysts and experienced supervisors. Disability discrimination laws also make it essential to consider very carefully whether a job really requires particular physical strengths. Fleishman’s Physical Abilities Analysis comes in very useful here, because it enables the employer to draw up a detailed profile of the physical demands of each job, against which applicants, including disabled people, can be assessed.
Recommendation: If you need to use tests of physical ability, they should be based on a detailed job analysis.
Height ‘Common sense’ says police officers need to be big, to overcome violent offenders and command respect. British police forces still specify minimum heights. American police forces used to set minimum heights, but have been challenged frequently under fair employment legislation. Women are less tall on average than men, and so are some ethnic minorities in the USA, so minimum height tests create adverse impact by excluding many women and some minorities. Therefore minimum height tests must be proved to be job related. ‘Common sense’ is surprised to learn that American research has been unable to demonstrate a link between height and any criterion of effectiveness in police officers, so height requirements cannot be justified legally and have been abandoned.
Dexterity Dexterity is needed for assembly work, which is generally semi-skilled or unskilled. It is also needed for some professional jobs, notably dentistry and surgery. Dexterity can be divided into • arm and hand or gross dexterity; • finger and wrist or fine dexterity. Summarising reviews report moderate validity of dexterity tests for vehicle operation, trades and crafts, and industrial work (Ghiselli, 1966). Re-analysis of the extensive General Aptitude Test Battery database in the USA shows that the simpler the job, the more important dexterity becomes (and the less important becomes general mental ability) (Hartigan and Wigdor, 1989). At the simplest level are jobs where only dexterity matters, and general mental ability does not seem to be needed: cannery
workers, shrimp pickers or cornhusking machine operators. This sort of job is often easy to mechanise. Dexterity can be assessed in various ways: • many work sample and trainability tests assess dexterity; • stand-alone dexterity tests including the Stromberg dexterity test; • the General Aptitude Test Battery includes both gross and fine dexterity tests.
Appearance and Attractiveness Extensive research on physical attractiveness finds there is a broad consensus about who is and is not attractive. Research on interviewing and appraisal (Chapters 9 and 14) shows that appearance and attractiveness often affect selectors’ decisions. But is appearance or attractiveness a legitimate part of the person specification for many jobs? For acting and modelling, certainly. Appearance or attractiveness is often an implicit requirement for receptionists; many advertisements specify ‘smart appearance’, ‘pleasant manner’ etc. Appearance, shading into ‘charisma’, is probably also important for selling, persuading and influencing jobs.
DRUG USE TESTING
The 1990 National Household Survey in the USA reports that 7% of adult employees use illegal drugs, while 6.8% drink alcohol heavily (Gfroerer et al., 1990). In the USA testing applicants for illegal drug use—marijuana, cocaine, heroin—is popular, and controversial. Alcohol testing is also used in the USA for pre-employment testing of truck drivers. Surveys of US companies produce estimates of just over 40% of employers using drug testing. Drug testing can be done in various ways:
• chemical analysis of urine samples, which is the most widely used method;
• paper and pencil self-report measures;
• coordination tests to detect impairment;
• analysis of hair samples.
Hair samples have advantages: they are easy to collect, transport and store, and they can reveal drug use over a period of months, making it more difficult to evade the test by not using drugs for a short period. Testing is often done in two stages: a quick and cheap test is used to screen everyone, then those who test positive are re-examined using a more accurate (and expensive) test.
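As a rough illustration of why two stages are cheaper, the following sketch (in Python, with entirely made-up costs and accuracy figures, not data from any study) compares the expected cost per applicant of confirmatory testing for everyone against screening first and confirming only the positives.

def cost_one_stage(confirm_cost):
    # Every applicant takes the accurate but expensive confirmatory test.
    return confirm_cost

def cost_two_stage(screen_cost, confirm_cost, base_rate, sensitivity, false_positive_rate):
    # Every applicant takes the cheap screen; only screen-positives take the confirmatory test.
    p_positive = base_rate * sensitivity + (1 - base_rate) * false_positive_rate
    return screen_cost + p_positive * confirm_cost

one = cost_one_stage(confirm_cost=40.0)
two = cost_two_stage(screen_cost=5.0, confirm_cost=40.0,
                     base_rate=0.07,            # roughly the survey figure quoted above
                     sensitivity=0.95,          # assumed accuracy of the cheap screen
                     false_positive_rate=0.10)  # assumed
print(f"Cost per applicant, confirmatory test for all: {one:.2f}")
print(f"Cost per applicant, screen then confirm:       {two:.2f}")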
Validity Research on drug testing tends to focus on absence, turnover and accidents rather than actual work performance. Two separate very large studies in the US Postal
Service found that drug users were more likely to be absent, or to suffer involuntary turnover (Normand et al., 1990; Zwerling et al., 1990). The implication is that not employing drug users will increase productivity. The Postal Service studies, and other research, found the relationship with accidents less clear. Another study found drug users more likely to get into arguments with colleagues at work. The Postal Service studies gave conflicting results on the issue of whether different drugs have different effects; one found marijuana more closely linked with problems at work than cocaine, but the other found the opposite. One study, however, reported that marijuana users are paid more, suggesting they work harder, possibly because marijuana reduces stress. Critics argue that the link between drug use and work performance is extremely tenuous. In correlational terms, the relationships found in the US Postal Service are very small (0.08 at best), and account for tiny amounts of the variation in performance (less than 1%). Critics argue that this is nowhere near enough to justify the intrusion on the applicant’s privacy and civil rights. Both Postal Service studies report cost–benefit analyses, based on reduced absenteeism, and put a value on drug testing of four or five million dollars a year, albeit spread across the entire vast American postal workforce. Other critics suggest many employers adopt drug testing programmes not to increase productivity, but to retain an image of control, or project an image of corporate responsibility and concern for social problems, or to please politicians.
Mediation
Critics have also argued that the link between drug use and work performance may be mediated by some third factor. The link may be general deviance; people who do not wish to fit into American society use drugs, and behave differently at work, but do not necessarily behave differently at work because they use drugs. On this argument, screening for drug use is a convenient ploy for employers who do not want to employ ‘dropouts’. People have even suggested that drug testing may sometimes be used as a way to avoid employing ethnic minorities. Drug use testing is legal in the USA, although there is the risk of detecting drugs taken for legitimate medical reasons, in which case refusing employment might violate the Americans with Disabilities Act. Research on acceptability of drug testing gives conflicting results. Acceptability depends on perceptions of danger, so people see it as fair for surgeons, police officers or airline pilots, but not justified for caretakers/janitors, farmworkers or clerks. Employers might want to ask themselves what image of the organisation is projected by drug use testing. The applicant is presented with a bottle and told to urinate into it. Asked why, he/she is in effect told ‘We think you might be a drug user. But we don’t ask you outright because you’re the sort of person who can’t be trusted to tell the truth about anything’.
Recommendation: Think carefully before introducing drug use testing.
BIOFEEDBACK Most people experience an increase in heartbeat and sweating during times of stress. The response of the body to stress, danger, panic or emotional excitement generally is to prepare for the ‘fight or flight’ reaction. In response to signals from the brain, the electrical conductivity of the skin changes. This change in electro-dermal activity (EDA) can be measured. In physiological terms, biofeedback measures anxiety states, and anxiety is recognised as a personality trait. Personality psychologists talk about trait anxiety and state anxiety. Self-report personality questionnaires will measure trait anxiety and an EDA biofeedback device will measure state (current or instant) anxiety. How can all of this be of use in psychological assessment? For some specialist assessments where a cool head is needed—piloting aircraft, surgery, and sports like archery, shooting, snooker and golf, for example—low levels of state arousal can give the subject an advantage. One of the authors used a biofeedback device, Relax Plus from Ultra Mind (www.ultrasis.com), to some advantage with the British Olympic Target Archery Squad. Using a biofeedback device in training allowed participants to understand their response to stress and condition themselves to remain calm and in control at will.
CASE STUDIES
Background Checks
Ian Huntley was convicted in 2003 of murdering two young girls he got to know in his job as school caretaker. After his conviction, it turned out that he had a history of sexual assaults, which his employers did not know about. He had ‘interviewed well’, and seemed keen and willing. He gave the ‘right’ answers to questions about correct relations with school pupils. A check on his criminal record was made, but failed for two reasons:
1 He sometimes used a different name, so part of his history was not reported.
2 His ‘history’ of sexual assaults listed eight allegations, but no convictions. Huntley was charged with only one of the eight alleged sex offences, but was not brought to trial; he was not even charged with the other seven allegations of rape or underage sex. He therefore had no criminal record for sexual offences (he had a conviction for burglary).
The police force that knew of the allegations did not keep them on record, because they said their understanding of the data protection laws was that unsubstantiated allegations should not be kept on file. The UK Data Protection Agency disagrees with this view. If the allegations of sexual assault had been reported, clearly Huntley would never have got a job as a school caretaker. The headteacher of the school confirmed this.
Some would argue that people should not be denied employment on the strength of allegations, only of convictions. Think of another would-be school caretaker, this time hypothetical. Fred Smith is turned down for the job, because a check with the police reveals allegations of suspicious behaviour around a children’s playground. The allegations were not definite enough to bring charges, so they were not tested in court. It later turns out that the allegations were probably malicious. What would the press say about this?
Physical Tests for the British Army The British Army wants to recruit women into all branches, but also sets demanding physical standards. A programme of test development by Mark Rayson and colleagues (2000) first listed the main generic physical tasks the infantry soldier must be able to master, which include: • single lift, lifting a 44 kg box of ammunition from the ground to the back of a truck, a distance of 1.7 m. • carry, carrying two 20 kg water cans 210 m. • loaded march, covering 13 km in 2 hours carrying a 25 kg pack. These are very demanding: a lot of potential recruits cannot manage them. They also generate very large adverse impact: far more women than men fail to meet the standard required. The job analysis, however, would justify their use, because it has confirmed that the tests measure essential abilities. However, the core tasks could not be used as actual recruitment tests. Some would not be safe: suppose an applicant dropped the ammunition box on his/her foot. Some are not practical: the tests will be used in city centre recruiting offices, while route marches need open country and take hours. Rayson’s programme looked for tests which were safe for applicants and practical to use in recruiting offices, and which would predict the ability to complete the core tasks. They tried tests of aerobic fitness, muscle strength and fat free mass. Some of these ‘secondary’ tests were successful at predicting the core physical task performance, but not all. The single lift was successfully predicted by tests of muscle strength and fat free mass. The loaded march was successfully predicted by aerobic fitness, supplemented by measures of strength and body size. The tests intended to predict the carry were less successful, and often made incorrect predictions of carry performance. However, many of the tests showed differential validity: they worked well for men but not for women. Women’s performance on the core tasks was more likely to be wrongly predicted by the secondary tests, which of course means they cannot be used. The British Army programme shows how devising physical tests that are practical, valid and fair can be difficult.
CHAPTER 12
Using Assessment to Arrive at a Decision
OVERVIEW
• We review assessment methods for their validity, fairness, cost-effectiveness, acceptability, practicality and versatility.
• We review ways of integrating information from different assessments.
• We describe other ways of showing that assessments are valid.
• People vary in the amount of work they do, and in the value of their work to the organisation.
• Good selection may improve an organisation’s profitability.
INTRODUCTION There are lots of things to assess, and there are lots of ways to assess them. This chapter presents an overview of the selection process, in terms of: • choosing the assessments to make; • integrating the information to make a decision. This chapter also covers cost-effectiveness and alternative approaches to validation.
CHOOSING ASSESSMENT METHODS
There are six criteria for judging selection assessments. An assessment should be:
• valid
• legal, fair
• cost-effective
• acceptable
• practical
• versatile, have wide scope.
Validity
Figures 12.1 and 12.2 summarise research on assessment validity. The first summary in Figure 12.1 groups assessment methods conceptually, following the structure of previous chapters.

Figure 12.1 Summary of the validity of different selection tests. The figure plots validity from zero to high for methods grouped as in earlier chapters: mental ability (general MA, psychomotor tests, job knowledge tests, tests of common sense, tests of emotional intelligence); personality ((low) neuroticism, extraversion, openness, agreeableness, conscientiousness, honesty questionnaire, customer service questionnaire, projective tests, defence mechanism test); information from the applicant (background checks, application sifting by hand and by computer, T&E rating, T&E behavioural consistency method, biodata*); information from other people (reference, peer ratings); assessment and development centres (AC, in-tray test); interviews (unstructured, structured, empirical); miscellaneous (physical test, education, work sample test, trainability test, graphology). *Uncorrected validity. Source: Data from Hunter and Hunter (1984).

The second summary in Figure 12.2 groups assessment methods by validity, from high to low.

Validity over 0.50 Could explain 25% or more of variation in performance
• structured interview
• general mental ability
• tests of ‘common sense’
• customer service questionnaires
Validity over 0.40 Could explain 16–24% of variation in performance
• peer ratings
• job knowledge test
• T&E behavioural consistency
• psychomotor test
• honesty questionnaire
• empirical interview
Validity over 0.30 Could explain 9–15% of variation in performance
• work sample test
• assessment centre
• education
• unstructured interview
• physical test
• biodata*
Validity over 0.20 Could explain 4–8% of variation in performance
• in-tray test*
• reference
• personality—conscientiousness
• defence mechanism test
• trainability test*
Validity over 0.10 Could explain 1–3% of variation in performance
• T&E rating
• personality—(low) neuroticism
• personality—extraversion
• personality—agreeableness
Validity less than 0.10 Cannot explain 1% of variation in performance
• personality—openness
• projective tests
• graphology
* Uncorrected validity

Figure 12.2 Summary of the validity of different selection tests, grouped by validity

Note that there are no validity data at all for some selection methods, especially those used in the sifting stages, but also emotional intelligence. These methods might work, but we cannot be sure. Figure 12.1 shows that the mere fact of existing and being widely used is no guarantee of a method’s validity. The validities summarised in Figures 12.1 and 12.2 have mostly been corrected for reliability, and restriction of
range, as described in Chapter 2. They represent therefore an estimate of the best each method can achieve under ideal circumstances. The smaller number of validities that have not been corrected represent an underestimate of true validity. Figure 12.1 shows how selection assessments vary in validity very widely, from a high point of 0.53 to a low point of zero. Figure 12.2 also includes estimates of the percentage of the variation in performance predicted by tests in different validity bands. The best tests can account for 25% or more of the variation in workers’ performance, showing they can make a substantial prediction of ability to do the job well. The poorest, however, can account for less than 1%; in other words they predict virtually nothing. Figure 12.3 shows five promotion assessments that achieve high validity: ability tests, work samples, peer ratings, job knowledge tests and assessment centres. Promotion decisions ought to be easier because the assessor should have much more information, over a far longer time scale. Personality tests generally predict job proficiency very poorly, but other research—not included in Figures 12.1 and 12.2—shows personality tests can predict other aspects of how people behave at work.
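These percentages come from squaring the validity coefficient: the proportion of variation in performance a test can explain is validity squared, so a validity of 0.50 explains 0.50 × 0.50 = 0.25, i.e. 25%, while a validity of 0.10 explains only 1%.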
Fairness Fairness means principally adverse impact, but invasion of privacy is also an issue. We have quite extensive data on both in the USA. Richard Arvey (1979) summarised evidence on adverse impact of five different assessments, for four different groups (Table 12.1). Every assessment excludes too many of one protected group or another. More recent analyses of ethnicity differences in the USA have found differences of various sizes on some selection assessments (Figure 12.4). On the other hand, Table 12.1 does suggest discrimination against women should be easier to avoid for most jobs, because bias against women mostly emerges in the interview, where careful practice can prevent it.
Figure 12.3 Summary of validity of different promotion tests (a bar chart, on a validity scale from 0 to 0.6, comparing work samples, mental ability tests, peer ratings, job knowledge tests and assessment centres)
Table 12.1 Summary of adverse impact of five selection tests on four protected groups

Assessment             African Americans   Females    Over 60 years of age   Disabled
Mental ability tests   Major               None       Some                   Possible
Work samples           None                No data    No data                No data
Interviews             None                Major      Some                   Some
Education              Major               None       Some                   Possible
Physical tests         None                Major      Possible               Major

Source: Data from Arvey (1979).
Figure 12.4 Difference between white and African Americans on some selection assessments (the figures are d scores, representing the difference between the means in terms of standard deviations of the pooled sample; the assessments compared, on a scale from 0 to 1.2, are general mental ability, dexterity, clerical speed, mechanical comprehension, job sample/knowledge, structured interview, conscientiousness and biodata). Source: Data from Bobko and Roth (1999), Schmitt et al. (1996).
A 1999 survey of US Federal court cases found that the most frequently challenged methods are unstructured interviews, mental ability tests and physical tests (Terpstra et al., 1999). By contrast, biodata, personality and honesty tests, work samples and assessment centres have rarely been the subject of court cases. The most recent analysis concludes that two methods widely supposed to avoid adverse impact problems—structured interviews and biodata—do not entirely avoid the problem (Bobko and Roth, 1999). African Americans appear to score lower than white Americans on both these assessments. Adverse impact by some assessments, notably mental ability tests, seems to create much more trouble than adverse impact created by others, such as biodata. Perhaps everyone still supposes that biodata do not create adverse impact. ‘Culture-free’ tests, such as the Raven Progressive Matrices, might be expected to reduce adverse impact, because they were specially written to eliminate cultural differences. Unfortunately, they do not solve the problem at all: they create as much adverse impact as any other ability tests.
Adverse Impact in Europe
To what extent can we extrapolate from the USA to Europe? With great caution! Taking a more biological view of differences between people, we might argue that if women in the USA are less likely to pass physical strength tests, women in Europe may also be less likely to pass them. If, on the other hand, we take a more ‘cultural’ view, that differences between people or groups of people depend on how they are seen and treated, we might conclude that differences found between whites and African Americans in the USA may not be found between similar groups in Europe. European research indicates:
• some ethnicity differences have been found in some ability tests;
• some gender differences are found in some ability tests;
• gender differences are found in many personality questionnaires;
• ethnicity differences are found in graduate recruitment using interviews and assessment centres.
Summary of the Six Criteria Table 12.2 summarises how different selection methods perform against our six criteria: validity, fairness, cost, acceptability, practicality and versatility. Table 12.2 also indicates what each method can assess, in terms of the nine headings outlined in Chapter 1: knowledge, skill, personality etc. Fairness is largely rated on American experience. Fairness depends mostly on adverse impact, but also involves invasion of privacy. Assessment centres and work samples probably do not cause legal problems, but educational qualifications and mental ability tests most certainly do. As time passes and more issues emerge, assessments tend to acquire poorer ratings under this heading. For example, personality tests could once be listed as ‘no problem’, but are now listed as ‘some doubts’ because the American Soroka case has raised, but not settled, the issue of invasion of privacy. Recent analyses have challenged the previously accepted view that structured interviews and biodata do not cause adverse impact. Cost tends to be accorded too much weight by selectors. Cost is rarely an important consideration, so long as the assessment is valid. A valid test, even the most elaborate and expensive, is almost always worth using. Cost–benefit analyses, discussed later in this chapter, confirm this. • Interview costs are given as medium/low, because interviews vary. • Structured interview costs are high, because the system has to be tailor made and requires a full job analysis. • Biodata costs are given as high or low: high if the inventory has to be specially written for the employer, but low if a ready-made biodata is used. • The cost of using educational qualifications is given as nil, because the information is routinely collected through application forms. Verification will cost a small amount.
Table 12.2 Summary of different selection tests by six criteria. For each assessment method (interview, structured interview, reference, peer rating, application sifting by hand, application sifting by computer, T&E overall rating, T&E behavioural consistency, biodata, education, ability tests, psychomotor tests, job knowledge tests, personality tests, assessment centre, work sample) the table shows what it primarily and secondarily assesses, and rates it (high, medium, low, untested, some doubts, major doubts etc.) on each of the six criteria: validity, legality/fairness, cost, acceptability, practicality and versatility. Notes: IV, interests and values; K, knowledge; MA, mental ability; OF, organisational fit; P, personality; PA, physical ability; RWB, required work behaviour; SS, social skills; WS, work skills.
Surveys of acceptability indicate that some assessment methods are more popular with applicants than others. Applicants like interviews, work samples and assessment centres, but do not like biodata, peer assessment or personality tests. People like selection methods that are job related, but do not like being assessed on things they cannot change such as personality.
Practicality means that the test is not difficult to introduce, because it fits into the selection process easily.
• Ability and personality tests are very practical because all applicants can be tested in a group when candidates come for interview.
• References are very practical because everyone is used to giving them.
• Assessment centres are only fairly practical, because they need a lot of planning, and take up a lot of time.
• Peer assessments are highly impractical because they require applicants to spend a long time with each other.
• Structured interviews may have limited practicality, because managers may resist the loss of autonomy involved.
• Work sample and psychomotor tests have limited practicality, because candidates have to be tested individually, not in groups.
Some selection assessments can be used for any job. Others are less versatile.
• Work samples and job knowledge tests can only be used where there is a specific body of knowledge or skill to test, which tends to limit them to skilled manual jobs.
• Psychomotor tests are only useful for jobs that require dexterity or good motor control.
• Peer ratings can probably only be used in uniformed disciplined services.
• Assessment centres tend to be restricted to managers, probably on grounds of cost.
Recommendation: For the assessment methods you use, consider whether the general analysis of Table 12.2 applies to your particular circumstances.
Selecting an Assessment Unless an assessment can predict performance at work, there is little point in using it. Taking validity as the overriding consideration, there are eight classes of test with high validity (over 0.35): ability tests, structured interviews, peer ratings, assessment centres, work sample tests, honesty questionnaires, customer service questionnaires and job knowledge tests. Five of these have limited generality; they can only be used with particular groups or for particular purposes: peer ratings, work samples, honesty questionnaires, customer
service questionnaires and job knowledge tests. This leaves structured interviews, ability tests and assessment centres.
• Structured interviews have excellent validity but limited transportability, and are expensive to set up.
• Mental ability tests have excellent validity, can be used for all sorts of jobs, are readily transportable, and are cheap and easy to use, but create major adverse impact problems in the USA, and may create them elsewhere.
• Assessment centres have good validity, could be used for most grades of staff and are legally fairly safe, but are difficult to install, and expensive.
Other assessments listed in Figures 12.1 and 12.2 have lower validity—but not zero validity. Tests with validities below 0.35 can be very useful, if they are cheap, or if they contribute new information, or if nothing else is suitable.
• Personality inventories achieve poor validity for predicting job proficiency, but can prove more useful for predicting honesty, effort, organisational citizenship, leadership and motivation.
• References have only moderate validity, but are cheap to use. Legal cautions, however, are tending to limit their value.
• Biodata do not achieve quite such good validity as ability tests, and are not as transportable, which makes them more expensive. They can be very useful at the sifting stage. They have proven validity which other sifting methods lack.
• Education is cheap to assess (even including the cost of verification), and predicts quite well, in the short term. It does, however, pose major adverse impact problems.
• Unstructured interviews have some validity, and serve other useful purposes besides assessment.
Incremental Validity
One of several pressing research needs in workplace assessment is learning more about the validity of combinations of tests. We know that personality tests do contribute incremental validity when used with mental ability tests, whereas tests of mental ability are unlikely to add much to tests of job knowledge. There are, however, many possible combinations of assessments for which we lack information about incremental validity. Do reference checks improve on personality inventories? Is there anything to be gained by adding peer ratings to work samples and mental ability tests? What combination of the methods listed in Figures 12.1 and 12.2 will give the best results, and how good will that ‘best’ be? We can make estimates based on the overlaps between tests; if two tests are highly correlated, the second is unlikely to add much. But before using a combination, we really would like actual proof that the second test improves the prediction made by the first.
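As a rough sketch of how such estimates can be made from overlaps, the following Python fragment (with illustrative figures of our own, not published validities) treats the two assessments as joint predictors: the gain from adding a second test depends on its own validity and on how strongly it correlates with the first.

import math

def multiple_r(r1, r2, r12):
    # Combined validity of two predictors: r1 and r2 are each test's validity,
    # r12 is the correlation between the two tests.
    r_squared = (r1 ** 2 + r2 ** 2 - 2 * r1 * r2 * r12) / (1 - r12 ** 2)
    return math.sqrt(r_squared)

r_ability = 0.50
# A second test with modest validity but little overlap adds something...
low_overlap = multiple_r(r_ability, 0.30, 0.05)
# ...whereas a second test that largely duplicates the first adds almost nothing.
high_overlap = multiple_r(r_ability, 0.45, 0.80)

print("Ability alone:                 0.50")
print(f"Plus an uncorrelated test:     {low_overlap:.2f}")
print(f"Plus a highly correlated test: {high_overlap:.2f}")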
Trade-Off between Fairness and Validity The choice of test depends most crucially on validity and fairness. Equal opportunities legislation, especially in the USA, has specifically linked these.
No Alternative? The 1970 US equal opportunities Guidelines required employers to prove no alternative test existed that did not create adverse impact before they used valid tests that did create adverse impact. It is not logically possible to prove conclusively that no test exists that is both as valid as the one you are using and creates no adverse impact. Extensive reviews of test validity enable employers to argue that no known test creates less adverse impact for a given level of validity.
Combinations or Composites of Selection Tests Recently American researchers have been trying to solve the adverse impact problem by using combinations of tests, usually a mental ability test and something else. Can a combination be found that will achieve good validity but not create unacceptable levels of adverse impact? Researchers have tried composites of ability and personality tests, with not very promising results: a composite of conscientiousness and mental ability creates just as much adverse impact as ability alone. Actually, we do not need research to demonstrate this. If two groups differ in scores on one test, you can dilute the effect by using a second test that does not produce a difference, but you cannot eliminate it. You could only eliminate adverse impact based on mental ability by finding another measure that produced an equally large difference, but in the opposite direction, so the two differences cancelled each other out. Even if you could find such a measure, your troubles might not be over, because you now have two differences between groups to generate controversy!
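The dilution can be put in numbers. For an equally weighted composite of two standardised tests, the group difference follows from the two tests' d scores and their intercorrelation. The sketch below (in Python, with illustrative figures; a d of about 1.0 for mental ability is of the order shown in Figure 12.4) shows that adding a 'neutral' second test shrinks the difference but does not remove it.

import math

def composite_d(d1, d2, r12):
    # d score on an equally weighted sum of two standardised tests,
    # assuming equal within-group standard deviations.
    return (d1 + d2) / math.sqrt(2 + 2 * r12)

print("Mental ability alone:                          d = 1.00")
print(f"Plus an uncorrelated test with no difference:  d = {composite_d(1.0, 0.0, 0.0):.2f}")
print(f"Plus conscientiousness (d = 0.06, r = 0.10):   d = {composite_d(1.0, 0.06, 0.10):.2f}")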
INTEGRATING THE INFORMATION
Suppose we have used three selection assessments. How do we combine the results of all three to reach a decision? If they all point the same way, we have no problem, and might wonder in future if we need all three. But if they point in different directions, we need a way of integrating the information that will:
• make the best use of the information, i.e. choose the best applicants;
• allow us to make decisions consistently, on this occasion and on later ones;
• sound reasonable if we have to justify the decision;
• not be too complicated to use.
The American armed services have been addressing this issue since World War II, and have generated some impressively sophisticated ways of integrating the masses of information at their disposal. We will limit ourselves to describing some simpler approaches that do not need tens of thousands of applicants to devise, nor a degree in statistics to understand. There are two main decisions to make:
• whether to give every assessment the same weight, or use different weights;
• whether to do the assessment in stages or all at once.
Weights Table 12.3 shows one person’s score for resilience, on three assessments: a personality questionnaire, a role play in which the candidate is put under pressure, and a group exercise, which is very competitive so also requires resilience. All three are scored on a five-point scale. The first column shows the candidate’s ratings. The second column shows ‘unit weight’, where each rating is given the same weight. The candidate gets an overall rating for resilience of 3. Unit weight is of course the same thing as no weighting. The third column shows differential weights. The personality test is only given half weight, perhaps on the argument that it is easy to claim you are resilient when you are not really. The role play is given double weight, on the grounds that it is a behavioural test of resilience, and very focused. This weighting lowers the candidate’s overall rating a lot because he did well on the personality test but poorly on the role play. Where have the weights come from? In this case, the weights have been set by the assessors, representing their view that the role play is a better assessment than the personality questionnaire. The US Navy found it needed a variation on this type of weighting system when selecting sonar operators. Averaging the ratings meant someone could do poorly on pitch perception but well on the intelligence test and get accepted. Unfortunately very intelligent but tone deaf sailors make very poor sonar operators. The Navy found it had to specify a minimum score on the pitch test, and exclude anyone who did not reach it. Recommendation: Ask yourself if there are competences in your organisation where a low score would, or should, rule out an applicant, as being unable to do the job at all.
Table 12.3 Using weights in assessment

Assessment of resilience by   Rating   Unit weight   Differential weight (1)   Differential weight (2)
Personality test              5        5             (×½) 2½                   (×0.91) 4.55
Role play                     1        1             (×2) 2                    (×0.35) 0.35
Group exercise                3        3             (×1) 3                    (×2.10) 6.3
Average for resilience                 3             2.5                       3.7

The fourth column shows differential weights, and gives a very different rating for our candidate. The role play, where he did fairly poorly, now gets very little weight, whereas the group exercise, where he did averagely, gets extra weight. This system gives the candidate a very much better rating. Where have the weights come from? This second set of weights was not set by the assessors. They are based on statistical analysis of the last two years’ functioning of
the assessment centre, which shows that resilience on the job was best predicted by the group exercise, followed by the personality questionnaire, with the role play tailing in third, not making much contribution to the prediction. Which is best to use? The third method cannot be used unless you have a very large set of previous data to base it on. Even then this method is notorious for giving inconsistent results, and for not actually giving a better prediction than simpler approaches.
Recommendation: Do not try to use statistically based weighting methods.
Some experts have argued that differential weighting has not been shown to add anything to assessment decisions, and that unit weighting, the simplest system, is the best to use.
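To make the arithmetic of Table 12.3 concrete, here is a short sketch in Python using the ratings and weights from the table; dividing by the number of assessments reproduces the averages shown in the table.

def weighted_average(ratings, weights):
    # Average of the weighted ratings across all assessments.
    return sum(ratings[a] * weights[a] for a in ratings) / len(ratings)

ratings = {"personality test": 5, "role play": 1, "group exercise": 3}

unit_weights = {a: 1.0 for a in ratings}                 # every assessment counts equally
assessor_weights = {"personality test": 0.5,             # weights set by the assessors
                    "role play": 2.0,
                    "group exercise": 1.0}
statistical_weights = {"personality test": 0.91,         # weights from analysis of past AC data
                       "role play": 0.35,
                       "group exercise": 2.10}

print(f"Unit weights:        {weighted_average(ratings, unit_weights):.1f}")         # 3.0
print(f"Assessors' weights:  {weighted_average(ratings, assessor_weights):.1f}")     # 2.5
print(f"Statistical weights: {weighted_average(ratings, statistical_weights):.1f}")  # 3.7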
Deciding in Stages
Almost all selection works in stages. The initial 100 applications are sifted down to 20, then to 5 who are interviewed. In the earlier stages, quicker and cheaper assessments are made. When the numbers are more manageable, longer and more expensive assessments like interviews and ACs are used. If you have a crucial requirement, like the US Navy’s pitch perception in sonar operators, it may be sensible to place it early in the process, to screen out unacceptable applicants. It depends how expensive the crucial test is. Testing pitch perception is probably too time consuming to use on large numbers of applicants.
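A minimal sketch of a staged sift, with invented applicant data, shows the usual pattern: a cheap knock-out check first, a ranked sift next, and the expensive assessments only for the short-list.

# All applicant data and cut-offs below are invented for illustration.
applicants = [
    {"name": f"Applicant {i}", "meets_essential": i % 4 != 0, "sift_score": (i * 7) % 100}
    for i in range(1, 101)
]

# Stage 1: knock-out on an essential requirement (cheap, applied to everyone).
stage1 = [a for a in applicants if a["meets_essential"]]

# Stage 2: rank on a sift score (e.g. biodata or the application form) and keep the top 20.
stage2 = sorted(stage1, key=lambda a: a["sift_score"], reverse=True)[:20]

# Stage 3: the best 5 go forward to interview and other expensive assessments.
shortlist = stage2[:5]

print(f"{len(applicants)} applied, {len(stage1)} passed the knock-out, "
      f"{len(stage2)} survived the sift, {len(shortlist)} invited to interview.")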
OTHER WAYS OF ESTABLISHING VALIDITY So far we have discussed only one way of proving that assessments are valid: the correlation between assessment and work performance. The technical name for this is empirical validity. Psychologists prefer this, precisely because it is empirical, or research based. Handwriting characteristics might tell us about work performance; graphologists can explain why they are linked, in theory. But does the theory translate into results: can handwriting predict work performance? The answer from research on graphology is no. The HR department will want the same level of proof: does this way of assessing applicants actually tell us how well they will perform at work? Empirical validation has some disadvantages in practice, however, which are discussed in the following subsections.
Large Numbers Empirical validation needs data from a large number of people doing the job, possibly a larger number than actually exist. Unless the researcher has large numbers to work with, the results may be inconclusive.
Representative Sampling Equal opportunities agencies demand a representative sample for validation, meaning one with the right proportion of minorities and women. Hence an employer with an all-white and/or all-male work force cannot prove the assessment is valid, making this the ‘Catch 22’ of equal opportunities.
Experience
Reviews of US court cases on empirical validity are not very encouraging for employers:
• Courts seemed to think that some tests had been completely discredited and could never be valid.
• Courts often examined tests item by item even though this is irrelevant when assessing empirical validity.
• Courts often ignored or avoided technical issues.
• Courts’ decisions were inconsistent and unpredictable.
• Less than half of the employers won their cases.
Other ways of establishing validity do exist; they differ in terms of how convincing they are, how suitable they are for different sample sizes, and in legal acceptability.
Face Validity Some people are persuaded a test measures dominance if it is called ‘Dominance Test’, or if the questions all concern behaving dominantly. The fact that a test looks as if it assesses dominance does not mean it succeeds. Face validity is important when considering the acceptability of a measure to employer and applicants.
Rational Validity
The psychologist uses his/her experience or knowledge of testing to select an appropriate test. Can psychologists do this? Twenty occupational psychologists tried to predict the validity of six tests of the (US) Navy Basic Test Battery for nine navy jobs (Schmidt et al., 1983). Their estimates were compared with actual (but unpublished) empirical validities based on large samples. The pooled judgement of any four experts gave a fairly accurate estimate of validity, as accurate as could be obtained by actually testing a sample of 170 sailors, but the opinion of any single psychologist was not a good guide. Psychologists can choose the right test, but we need to ask four, not one.
Content Validity Experts analyse the job, choose relevant questions and put together the test. Content validation was borrowed from educational testing, where it makes sense to ask if
a test covers the curriculum, and to get the answer from a panel of experts. Content validation regards test items as samples of things applicants need to know, not as signs of what applicants are like. For example, a test for fire fighters might include questions about water pressure achievable at various heights above ground level, or the risk of toxic smoke from different furnishing materials. These are things fire fighters need to know, and which panels of experts can readily agree they need to know. Content validation has several major advantages:
• It is very plausible to applicants.
• It is very easy to defend in court: every item of the test is clearly relevant to the job.
• There is no need to test a large sample (170+) of fire fighters (although the test will need to be trialled to check the items do not create misunderstandings and are not too difficult or too easy).
• There is no need to try to define a satisfactory measure of work performance for fire fighters.
• There is no risk of failing to find a correlation between test and performance. The test’s validity is ensured in advance by the way it is produced.
At first sight these advantages make content validation look the obvious method of choice, but its usefulness is quite limited in practice:
• It is only suitable for jobs where there is an agreed set of fairly specific things people need to know, or be able to do.
• It will not work for jobs with very varied or intangible content, such as most managerial jobs.
• It cannot be used to assess broader characteristics, such as aptitude or personality.
• It is often unsuitable for selection because the people tested must already possess detailed specific knowledge or skill(s) (fire fighter tests are used for promotion, not selection).
• The test takes time and effort to write.
• Because it is job specific, the organisation may need several tests, even dozens.
• Hence content-valid testing can be very expensive.
• Content-valid tests are often very long, and—according to critics—do not predict any better than short, cheap, off-the-shelf tests of mental ability, which identify people who could quickly learn the necessary knowledge.
Content validation requires careful competence analysis, to prove the test is representative of the content of the job. Test content must reflect every aspect of the job, in the correct proportions: if 10% of the job consists of writing reports, report writing must not account for 50% of the test.
Synthetic Validity The employer tells the psychologist ‘I need people who are good with figures, who are sociable and outgoing, and who can type’. The psychologist uses tests of numerical
ability, extraversion and typing skill, whose validities have been separately proved. The separate validities of the three tests are synthesised to yield a compound validity. Synthetic validation employs two principles:
• Competence analysis to identify underlying themes in diverse jobs and to select tests corresponding to each theme.
• Once validity has been demonstrated for a combination of competence × test across the work force as a whole, it can be inferred for subsets of employees, including sets too small for conventional empirical validation.
Table 12.4 illustrates the method with fictional data. A city employs 1500 persons in 300 different jobs. Some jobs, e.g. local tax clerk, employ enough people to calculate a conventional empirical validity. Other jobs, e.g. refuse collection supervisor, employ too few to make an empirical validity study meaningful. Some jobs employ only one person, so no statistical analysis is possible. Synthetic validation has four steps:
1 Competence analysis identifies the competences underlying all 300 jobs, e.g. ability to influence, numerical ability etc.
2 Suitable tests for each competence are selected. The Assertiveness scale of the NEO Personality Inventory assesses ability to influence. Numerical ability is assessed by the DAT Numerical etc.
3 Validity for each test × competence combination is calculated from all the available employees. For example, there are 430 persons in the work force whose ability to influence is assessed by NEO Assertiveness. The 430 are doing many different jobs.
4 Validity for each test × competence combination, for particular jobs, is inferred from the validity for the total. Validity of NEO Assertiveness for the 25 refuse collection supervisors is inferred from its validity for all 430 who completed NEO Assertiveness. It is even possible to prove the validity of the NEO Order scale for the one and only crematorium attendant, by pooling that person’s data with the 519 others whose work requires attention to detail.
Table 12.4 Illustration of synthetic validation in a municipal work force of 1500

Job                            N     Ability to influence    Attention to detail   Numerical ability
                                     (NEO Assertiveness)     (NEO Order)           (DAT Numerical)
Local tax clerk                100   —                       X                     X
Refuse collection supervisor   25    X                       X                     X
Crematorium attendant          1     X                       X                     —
Total N involved                     430                     520                   350
Validity                             0.30                    0.25                  0.27
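The inference step can be written out very simply: once a validity has been established for each competence × test combination across the whole work force, any job's competence profile maps onto a battery of tests with known validities. The Python sketch below uses the fictional figures of Table 12.4; the competence profiles are our reading of the table.

test_for_competence = {
    "ability to influence": ("NEO Assertiveness", 0.30),   # validity from the pooled work force
    "attention to detail": ("NEO Order", 0.25),
    "numerical ability": ("DAT Numerical", 0.27),
}

job_profiles = {   # which competences each job requires (as read from Table 12.4)
    "local tax clerk": ["attention to detail", "numerical ability"],
    "refuse collection supervisor": ["ability to influence", "attention to detail", "numerical ability"],
    "crematorium attendant": ["ability to influence", "attention to detail"],
}

for job, competences in job_profiles.items():
    battery = {test_for_competence[c][0]: test_for_competence[c][1] for c in competences}
    print(f"{job}: {battery}")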
COST/UTILITY
It is fairly easy to calculate the cost of selection, although many employers only think of doing so when asked to introduce new methods; they rarely work out how much existing methods, such as day-long panel interviews, cost. Putting a figure to the benefits of effective selection is more difficult. Orthodox accounting methods were tried but ran into difficulties. Psychologists have devised new techniques for showing how finding and keeping the right staff adds value to the organisation, primarily the rational estimate technique. The rational estimate technique argues that people supervising a particular grade of employee ‘have the best opportunities to observe actual performance and output differences between employees on a day-to-day basis’. So the best way to put a value on a good employee is to ask supervisors (Schmidt et al., 1979):

Based on your experience with [widget press operators] we would like you to estimate the yearly value to your company of the products and services provided by the average operator. Consider the quality and quantity of output typical of the average operator and the value of this output. . . . In placing a cash value on this output, it may help to consider what the cost would be of having an outside firm provide these products and services.
Supervisors also make estimates for a good operator and for a poor one. Good is defined as someone whose performance is better than 85% of other employees. Poor is defined as someone better than only 15%. A large number of supervisors make rational estimates, and averages are calculated. This gives the HR manager an estimate of the difference in value between an average and a good performer in any given job. For computer programmers employed by the US government, the rational estimate technique found a good programmer worth over $10 000 a year more than an average programmer, and an average programmer $10 000 more than a poor one. The good/average and average/poor differences define range of value. Range of value tells the employer how much the workers’ work varies in value. The smaller the range of value, the less point there is in putting a lot of effort and expense into selecting staff, because there is less difference in value between good and poor staff. Frank Schmidt, who devised the technique, proposed a ‘rule of thumb’: the difference in value between a good and an average employee will be between 40% and 70% of the salary for the job (Schmidt et al., 1979). If salary is £30 000, the difference in value to the organisation between a good employee and an average one will be between £12 000 and £21 000. Estimates of range of value make it very clear that the HR manager can add a great deal of value to the organisation, by finding good employees in the first place, which is what this book is about, as well as by making employees better, through training and development, and by keeping employees working well, through avoiding poor morale, high stress etc. Differences in value of the order of £12 000–21 000 per employee mount up across an organisation. Here are a couple of examples, for the public sector in the USA:
• A small employer, the Philadelphia police force (5000 employees), could save $18 million a year by using psychological tests to select the best. • A large employer, the US Federal Government (4 million employees), could save $16 billion a year. Or, to reverse the perspective, the US Federal Government is losing $16 billion a year by not using tests.
Calculating the Return on Selection
A simple formula for calculating the return on selection was first stated by Brogden in 1946 (Brogden, 1950). For many years it had only academic interest because we had no way of measuring range of value. Brogden’s equation states the amount an employer can save, per employee recruited, per year, is:

Saving per employee per year = (Validity of test × Calibre of recruits × Range of value) − (Cost of selection ÷ Proportion of applicants selected)

Here is a worked example:
• The employer is recruiting in the salary range £50 000 p.a., so range of value can be estimated—by the 40% rule of thumb—at £20 000.
• The employer is using a test of mental ability whose validity is 0.50, so r is 0.50.
• The people recruited score on average in the top 15% of present employees, so the value for calibre of recruits is 1. This assumes the employer succeeds in recruiting high-calibre people.
• The employer uses a consultancy, which charges £800 per candidate.
• Of 10 applicants, 4 are appointed, so P is 0.40.
The saving per employee per year is

(0.50 × 1 × £20 000) − (£800 ÷ 0.40) = £10 000 − £2000 = £8000

Each employee selected is worth £8000 a year more to the employer than one recruited at random. The four employees recruited will be worth in all £32 000 more to the employer each year.
Recommendation: See how much of Brogden’s equation you can fill in, for your organisation. Make rough, but realistic, estimates for the five terms.
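A minimal sketch of the same calculation (in Python, using the figures from the worked example and the 40% rule of thumb) may help when filling in the terms.

def brogden_saving(validity, calibre, range_of_value, cost_per_candidate, proportion_selected):
    # Brogden's estimate of the saving per employee recruited, per year.
    return validity * calibre * range_of_value - cost_per_candidate / proportion_selected

salary = 50_000
range_of_value = 0.40 * salary          # rule of thumb: 40-70% of salary; 40% used here
saving = brogden_saving(validity=0.50,
                        calibre=1.0,    # recruits score in the top 15% of present employees
                        range_of_value=range_of_value,
                        cost_per_candidate=800,
                        proportion_selected=0.40)   # 4 appointed from 10 applicants

print(f"Saving per employee per year: {saving:,.0f}")     # 8,000
print(f"Total for the 4 recruits:     {4 * saving:,.0f}") # 32,000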
Selection pays off when:
• calibre of recruits is high;
• range of value is high, i.e. employees differ a lot in worth to the organisation;
• selection has high validity.
Selection pays off less well when:
• recruits are uniformly mediocre;
• range of value is low, i.e. employees do not vary much in value;
• selection has low validity.
Employers should have no difficulty attracting good recruits in periods of high unemployment (unless pay or conditions are poor). Rational estimate research shows that range of value is rarely low. The third condition—zero validity—is all too likely to apply, given that many employers still use very poor selection methods. But if any of the three terms in the first part of the equation is zero, their product—the value of selection—is necessarily zero too. Only the second part of the equation—the cost of selection—is never zero. In the worked example, even using a fairly expensive selection procedure, the cost per employee selected is less than a fifth of the increased value per employee per year. In this example, selection pays for itself six times over in the first year. Failure to select the right employee, by contrast, goes on costing the employer money year after year.
Recommendation: Ask yourself whether selection will prove cost-effective for your organisation, and how it could be made more effective.
Proving Selection Really Adds Value Brogden’s equation calculates the theoretical return on selection, but can we prove organisations really save money by good selection? Analysis of 201 US organisations shows that ones that use good selection practices have higher annual profits, more profit growth and more sales growth (the practices are structured interviews, mental ability tests, biodata and validation of selection) (Terpstra and Rozell, 1993). The relationship is very strong (0.70–0.80) in the service and financial sectors, where calibre of staff is crucial, but insignificant in manufacturing, where capital equipment may be more important. Critics could still argue this shows that better run companies are both more profitable and also use better selection methods, but not that better selection creates greater profitability. For conclusive proof, we need a follow-up study, in which increased profitability follows changes in selection.
CASE STUDY
The case study in Chapter 4 follows Newpharm in its quest to appoint a Learning Resources Officer. The HR Manager, John White, has organised the selection centre. All of the selection exercises have been completed and the results entered onto the Selection Panel Assessment Form (Figure 12.5). The Selection Panel Assessment Form data from one assessor, A.B., on the five candidates shows that A.B. rates candidates as follows:
1 Alec Newman 79%
2 I.J. 73%
3 K.L. 69%
4 M.N. 66%
5 O.P. 65%
Assessor A.B. has rated candidates under each of the selection criteria out of 10 points. These have been totalled, the weighting has been applied and a final percentage for each candidate has been derived. The panel chair, John White, will collect these forms from each of the five assessors. John next enters each panel member’s percentage score for each of the five candidates onto the Panel Group Data Form, totals the data and ranks it. The candidate with the highest percentage is ranked 1 (Figure 12.6). At this stage, what has been measured is the quantitative data on each candidate, in the form of a total percentage score and a ranking. The data is that obtained from all of the assessment methods. There should be a minimum of subjective data.
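As an illustration of the arithmetic behind those percentages, the Python sketch below uses example weights and ratings in the spirit of the form (they are not the actual entries): ratings out of 10 are multiplied by the criterion weightings, summed, and the total converted to a percentage of the maximum of 70.

# Illustrative only: criterion weights chosen to sum to 7.0, so the maximum weighted total is 70.
weights = {"physical make-up": 0.9, "attainments": 0.5, "general intelligence": 1.3,
           "special aptitudes": 1.2, "interests": 0.6, "disposition/personality": 1.5,
           "circumstances": 1.0}
ratings = {"physical make-up": 8, "attainments": 7, "general intelligence": 8,
           "special aptitudes": 8, "interests": 6, "disposition/personality": 7,
           "circumstances": 8}

weighted_total = sum(ratings[c] * weights[c] for c in ratings)
percentage = weighted_total * 100 / 70
print(f"Weighted total {weighted_total:.1f} out of 70 = {percentage:.0f}%")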
Using Qualitative Data: Assessors’ Observations, Feelings and Knowledge of What the Job Entails Over the course of the day, each assessor will have been, quite naturally and often in an unconscious way, making judgements about candidates and their suitability for the job. The assessors should know much more about the organisational culture and the context surrounding the job than each candidate. How much attention should we pay to these subjective judgements? Remember, as recommended earlier, assessors should be trained following an induction period and, if given enough opportunity and practice, should be fairly consistent and reliable in their subjective judgement. Ratings between trained assessors from quantitative data (that measured by the assessment methods) and the qualitative, subjective and experiential elements should correlate highly. We feel therefore that we should pay attention now to the whole picture—the job analysis, personal fit, culture of the organisation and job context, ending up with the assessor’s subjective impressions of the qualities and attributes shown by each candidate and best fit for the job. How to collect this important qualitative data is the next question to ask.
[Figure 12.5 Selection Panel Assessment Form. Job title: Learning Resources Officer; interview date 10/06/04; chairperson John White; assessor A.B.; other panel members C.D., E.F., G.H. Each candidate is rated out of 10 against the person specification criteria (physical make-up, attainments, general intelligence, special aptitudes/capabilities, interests, disposition/personality, circumstances), the ratings are weighted and totalled, and the total is converted to a percentage (score × 100/70): Alec Newman 79%, I.J. 73%, K.L. 69%, M.N. 66%, O.P. 65%.]
[Figure 12.6 Panel Group Data Form, 10/06/04. Each panel member's (JW, AB, CD, EF, GH) percentage for each candidate is entered, totalled and ranked: Alec Newman 360 (rank 1), I.J. 343 (2), K.L. 330 (3), M.N. 309 (4), O.P. 288 (5). © Dr Barry Cripps Associates & Partner, 2005.]
[Figure 12.7 Individual Decision Making between 5 Candidates Form: qualitative judgements, 'Which candidate in each column is best for the job?'. Completed by John White, 10/06/04. In each column the first choice scores 2 points, the second 1 and the third 0; row totals give Alec Newman 11 (rank 1), I.J. 8 (2), K.L. 5 (3), M.N. 3 (4), O.P. 0 (5). © Dr Barry Cripps Associates & Partner, 2005.]
[Figure 12.8 Individual Decision Making between 4 Candidates Form (blank): qualitative judgements, 'Which candidate in each column is best for the job?'. © Dr Barry Cripps Associates & Partner, 2005.]
[Figure 12.9 Individual Decision Making between 6 Candidates Form (blank): qualitative judgements, 'Which candidate in each column is best for the job?'. © Dr Barry Cripps Associates & Partner, 2005.]
[Figure 12.10 Individual Decision Making between 7 Candidates Form (blank): qualitative judgements, 'Which candidate in each column is best for the job?'. © Dr Barry Cripps Associates & Partner, 2005.]
We have constructed a straightforward way to gather each assessor's subjective judgements about which candidate is best for the job. This is done, once again, on a simple form called Individual Decision Making between Candidates—Qualitative Judgements. The form uses a simple statistical routine, circular triads from a Golden Square, in which all items are permuted against each other, as follows:
abcde
bcdea
cdeab
deabc
eabcd
Groups of three items (in this case candidates) are listed in columns (Figure 12.7). This is known as an incomplete block, in this case three out of five candidates. Each candidate is compared an equal number of times with the others, and because each comparison involves only three candidates the process is made more accurate: consistency of choice is continuously being checked. On this simple form, 25 paired comparisons are made between candidates. John distributes the form Individual Decision Making between 5 Candidates, asking each assessor to rate candidates on the given scale in answer to the question, 'Which candidate in each column is best for the job?'. The assessor is asked to make a judgement between three candidates, scoring their first choice two points, second choice one point and third choice zero points. Eight columns are considered; column eight on this form has four choices for reasons of balance. Each row is totalled across, left to right, to give a total score for each candidate, which can then easily be ranked.
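To make the mechanics concrete, here is a small Python sketch of the 2/1/0 tallying described above. The real form lays out eight columns, the eighth containing four names for balance; for simplicity this sketch builds one three-candidate column from each cyclic rotation of the candidate list, and the assessor's preference orders are invented purely to show the arithmetic.

# Illustrative sketch only: the column layout is simplified and the preferences invented.
candidates = ["a", "b", "c", "d", "e"]

# Cyclic rotations abcde, bcdea, cdeab, deabc, eabcd; take the first three of each.
columns = [(candidates[i:] + candidates[:i])[:3] for i in range(len(candidates))]

# The assessor orders the candidates in each column, best first (invented here).
preferences = [
    ["a", "b", "c"],
    ["b", "c", "d"],
    ["c", "e", "d"],
    ["a", "d", "e"],
    ["a", "b", "e"],
]

totals = {c: 0 for c in candidates}
for column, order in zip(columns, preferences):
    assert sorted(order) == sorted(column)      # each order must cover its column
    for points, name in zip([2, 1, 0], order):  # first choice 2, second 1, third 0
        totals[name] += points

ranking = sorted(candidates, key=lambda c: totals[c], reverse=True)
print(totals)
print("Rank order:", ranking)

Each assessor's row totals can then be set alongside the quantitative ranking from the Panel Group Data Form, giving the panel both views of the candidates side by side.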
Final Decision Making
Quick analysis of the quantitative data from the Panel Group Data Form and the qualitative data from each Individual Decision Making form should give the panel enough assessment data to be able to make a considered, systematic yet contextualised decision regarding best fit for the job from the five candidates at the selection centre. Individual Decision Making forms are also supplied for selection centres where four, six and seven candidates are to be assessed. Examples of these forms are shown in Figures 12.8–12.10, respectively.
CHAPTER 13
Workplace Counselling
OVERVIEW
• The rationale for workplace counselling is put and workplace counselling defined.
• The MORE model of workplace counselling is proposed and the four techniques defined.
• Psychological assessment using testing is recommended as part of the workplace counselling practice.
• Each phase of the MORE model is illustrated in depth with a case study application.
• Counselling for career development using a combination of MORE and a three-phase technique is illustrated in a case study.
• Recommendations for action are made throughout.
INTRODUCTION
This chapter is concerned with two applications of counselling in the workplace. The first is the application of counselling skills to help with workplace problems, those difficulties arising at work when people come into conflict with each other and productivity is affected or needs to be improved. The second application of counselling we have chosen is in career development. Occasionally managers may be involved in the career development of their people, assessing strengths, areas for development, and career pathways and promotion in the organisation. Many managers might hesitate to become involved in counselling their people, and this is quite understandable. They may simply not have time, they may feel that counselling is a form of 'social work' and not their business, or they may feel that shows of empathy could diminish their standing in some way and undermine their authority with regard to the person being counselled. Managers who feel uncertain about counselling may of course delegate; usually someone in the HR function will know who to approach for such a service.
The case studies at the end of this chapter will perhaps suggest possibilities for managers to use the skills of counselling, where appropriate, as a way of helping their people move forward, get out of difficulty and, above all, improve productivity.
COUNSELLING FOR WORKPLACE PROBLEMS
People using technology, machines and tools run every business. The tools could be those of the builder, e.g. cement mixer, trowel and chisel, or those of the astronaut, space rocket, computers and electronic systems. Whatever technology is used in the business, people drive it. When the machine breaks down it is mended. What happens when the people break down? Most often, nothing; then perhaps an interview with a supervisor or manager; most uncommonly, the person in difficulty is counselled. Michael Reddy (1987) wrote a foundation text which suggests that the offer of counselling in the workplace is both humanitarian and practical. Concern for the person is considered alongside concern for their productivity. In this excellent seminal work, Reddy directs his knowledge, experience and advice in a most practical way to help supervisors, administrators, department heads, personnel and line managers understand and be able to use counselling skills in the workplace without becoming a psychologist or professionally trained counsellor. We draw a distinction between the expert counsellor, whose main work will be dealing with people all the time, and the manager, who works with people and deals with them on a day-to-day basis, as well as having their own work to do in operations, finance, marketing, sales, distribution or administration. Because this book is about psychological assessment, we can see that assessment will help in situations where a counselling approach can be used to offer solutions for people at work. Many of the HR processes in this book involving psychological assessment, such as selection, assessment centres and testing, should show up positively on the bottom line; so should workplace counselling.
COUNSELLING IN PRACTICE
Reddy offers a commonsense definition of workplace counselling as:
• a set of techniques, skills and attitudes;
• to help people manage their own problems;
• using their own resources.
Defined thus, any mystique surrounding the counselling process should disappear, for the above definition surely describes what managers do every day with their people. Counselling is seen as a process blending the attitudes and skills of both counsellor and counsellee, contextualised round the management of problems affecting work performance, whether work-related or personal, best managed by the person himself or herself, using the counsellor as facilitator. Counsellee is a cumbersome term, so we have chosen to adopt the counselling term 'client' in this chapter for the employee
or person who is being counselled. The objective of counselling is to help the client move forward and regain their productivity. The manager as workplace counsellor may act as a consultant, coach, mentor, boss or colleague in order to facilitate personal change in the client. In the workplace, time matters, so reasonably speedy solutions are preferred. A simple structure for the whole process from start to finish could be similar to the consultancy cycle:
• Moving in: establishing rapport, opening the conversation, setting the rules, establishing trust and confidentiality, and covering ethical considerations.
• Opening out: listening to the client; outlining, clarifying and defining the problem; understanding the person via psychological assessment; establishing the mechanics of a way forward.
• Reframing: reinforcing and maintaining trust, challenging perspectives, shifting 'old thinking', working on solutions, identifying ways forward, reframing the problem, seeing a way out.
• Ending: rounding off, reaching conclusions, action planning, satisfying original concerns, settling up, resuming productivity and terminating.
These phases form the acronym MORE, for ease of memory.
THE SKILLS OF COUNSELLING
Moving In
The skills of moving in are crucial to the whole counselling cycle. Every situation, problem, counsellor and client is different. The main skill in moving in is recognizing the problem in the first place. Usually a manager will receive reports from others that the client is presenting difficulties or suffering in some way. Productivity is usually affected. The client may be confused and defensive, not wanting to admit to a problem. Workplace culture often prevents people from admitting to having a problem at work; admission of a problem such as bullying or harassment is often perceived, quite unfairly, as a weakness. The initial approach in the moving in phase can be quite casual, a fleeting word in a corridor, a comment in the canteen queue or a moment in the car park—almost like, 'how's it going?'. However this brief contact is made, finding the right moment and words to use is important. This initial contact should set up a more formal private meeting, an initial meeting. Moving in, as a process, could be very quick: a meeting in the corridor, setting up a subsequent brief meeting exploring the need for a counselling-type intervention. An initial meeting between manager and employee, or counsellor and client, should build trust, set the rules, and cover confidentiality and ethics. Once a formal meeting is arranged, the opening out process can begin. Opening out is all about rapport, opening a dialogue, communicating and having a relevant, frank and positive two-way conversation, adult to adult. It is at this stage that a psychological assessment by way of a personality inventory can be very useful. Several personality instruments can be of use at this early stage:
• Myers Briggs Type Indicator/Jungian Type Indicator
• FIRO Element B
• Occupational Personality Questionnaire 32
• Eysenck Personality Scales
• California Psychological Inventory.
Recent interest in emotional intelligence has spawned psychometric inventories which are suited to the sort of counselling we are concerned with here, those where a breakdown of social relations occurs. The main use of the personality inventory is to establish a common, non-judgmental framework for discussion during the feedback session. Generally, people are quite happy to talk about themselves and are very interested in the results of personality assessment, so quite naturally during the feedback session the client's perspective on 'the problem' will emerge. Once this vital step has been taken and the problem accepted, the counsellor can begin the facilitation.
Recommendation: Choose a natural 'teachable moment' to approach the person who is in difficulty. Acceptance of help will be more forthcoming if the approach is made impromptu rather than at a formal meeting.
Opening Out
Listening skills are paramount here. It is vital that the counsellor understands the problem and can see it from the client's point of view. Empathy, being able to take on the perspective of another, is a key attribute for the counsellor and may not come easily to some, particularly under the time pressures of work. We recommend that counselling sessions take place out of work time, or at the very least away from any work pressure. The counsellor needs at this time to avoid making any suggestion at all. Just listen and let the client do the talking, suspend all judgement and focus on how the client sees things. Often the client has only a very vague idea of how he/she sees the problem and, during this important early stage, needs to clarify the problem for himself or herself. Just as the counsellor will be listening, it is important for them to let the client know that they are listening, via the skills of 'active listening'. These skills will be verbal, like asking a probing question—'when did you say she said that about you?'—or non-verbal, like maintaining eye contact and nodding. A most effective skill is silence! Others may include:
• Questioning: 'when did you say that happened?'
• Incorporating feelings: 'can you tell me a little more about how you felt at the time?'
• Encouraging: 'go on . . .'
• Asking for examples: 'can you give me an example of exactly how he behaved?'
• Clarifying: 'well, so far there have been three occasions when . . . is that right?'
• Summarizing: 'can we check that so far . . .?'
• Echoing: '. . . she was rude to you?'
• Verifying conclusions: 'so you definitely want to transfer to another department?'
The act of listening should be active; that is, the counsellor should show that they are fully engaged, switched on and actually contributing to this all-important phase of opening out. Give your client your whole attention and ignore time; never look at the clock!
Recommendation: Invite the client to complete an ability or personality inventory at the end of this opening out interview.
Explain that you need to understand more about your client in terms of ability and personality and that the results will be confidential, just between you and them. This will reassure your client that something is going to happen, that you are willing to become involved in the situation and can really help. You should inform the client that the next session will begin with feedback on the personality inventory. In most cases, the client will be extremely relieved that someone is prepared to help them.
Reframing
The next phase, reframing, has the objective of drawing the problem out of the client, at a pace that allows full understanding of the issues. This important phase lays the foundations for moving things forward. Reframing is all about helping the client to change their picture of past events, actions and behaviours. Because the client is usually immersed in the past and struggling to explain the reasons for the behaviour of others, they find it difficult to see the problem in any way other than through the initial response that has caused them to become stuck. This response may reveal itself in any one of the emotions, anger, confusion, frustration or fear, or in a completely frozen state, the client seemingly frozen as ice. Be prepared for tears. The ice cube analogy is quite useful here. The problem (ice cube) needs to be unfrozen, the old water changed and then re-frozen (enduring solution). So the problem needs to be reframed; another metaphor is useful: 'new picture, new frame'. The simple example in the first case study at the end of the chapter will help to explain this reframing process.
Recommendation: When counselling, avoid making judgements and seeking solutions too early. Wait until the problem, issue or difficulty is fully understood by you and your client.
Ending
The final phase of this brief counselling approach is to draw up an action plan for the client to finally resolve their difficulty. This is the phase of ending. Here the client should be left feeling that they are back in control, that any decisions taken have been theirs and that any future actions to be taken will again be theirs to implement. You, the manager, if the counselling process has gone well, will be left with nothing to do. To use the well-known cliché, it is a win–win situation. You may of course offer a resource, like help from a psychologist or other professional, if you know that you are not qualified to follow up completely, but this will depend on the issue. For many difficulties at work a manager will be able to help, as problem solving is what managers do every day in their work, but there may be occasions when they know full well that they are out of their depth, as in the example of Ian in the case study.
Recommendation: The objective of ending is to leave the client producing as before, motivated, wanting to work, back in control and feeling that they have essentially solved the initial difficulty themselves.
COUNSELLING FOR CAREER DEVELOPMENT
Most people in the workplace eventually wish to talk to their manager about career prospects. Usually the conversation will take place in conjunction with an appraisal, but not always so; a request for a talk with the boss about career prospects could happen at any time and is perfectly normal. People are interested in their work and many wish to do well and will seek promotion. Career development is often in the mind of people in their twenties and thirties, the main career-building time. Motivation for such a request could be financial, increasing responsibilities outside of work, or, more commonly, a professional need for improvement and development. Managers usually take such requests seriously but may wonder how to start such a conversation. The career-counselling route for career development makes for a good structure, and is recommended. Basically, the structure needs to approach the answers to three questions:
• Where are you now?
• Where do you want to be?
• How do you get there?
The second case study at the end of the chapter follows this structure with the example of a bright engineer lacking communication skills.
SUMMARY
This chapter has dealt with and combined two important methods of counselling in the workplace and demonstrated that psychological assessment by way of testing
can assist by providing important data on people, which is difficult to obtain as quickly by other means. The suggested counselling techniques, summarised by the acronym MORE, are:
• Moving in
• Opening out
• Reframing
• Ending.
The career counselling/development/promotion techniques use all of the above plus more career-specific phases of counselling, thus:
• Where are you now?
• Where do you want to be?
• How do you get there?
Recommendation: Many counselling interventions in the workplace can be confidently tackled by managers using testing, a combination of MORE and the three-phase approach to career outlined above.
Career, or the pathway through work over time, starts of course with education, training, learning and development in the late teens or early twenties, then entry into organisations, establishment of career, training and achievement. Mid-career may involve more training, leading to promotion into management. Late career usually involves more responsibility and greater earning power. The final career stage is retirement. At all stages psychological assessment can be of use in matching interests, personality, motivation and abilities to the chosen future pathway. All of the above stages increasingly unfold across several organisations. The days of 'a job for life' have passed; however, the reality of career stages for the individual remains the same, although the stages increasingly occur in jobs in different organisations. Career planning, helped by skilful career counselling and guidance, is becoming an essential part of every manager's development.
CASE STUDIES
Workplace Counselling
Joe is in the airframe design team of an aerospace company. He has always been shortsighted and has a magnifying lens over his PC screen. Recently his eyesight has been deteriorating. In his frustration, he has been short-tempered, upsetting colleagues and his boss, Ian. At lunch, he rides his bike carelessly around the site to and from the canteen, and has caused some 'pedestrian rage' by narrowly missing workmates on foot. His manager, Ian, a very busy design engineer, has had several complaints and recently talked them through with Joe, who appeared confused and aggressive. The
meeting ended unsatisfactorily and Ian could only see a dark picture ahead: a disciplinary, a redundancy suggestion and possibly a tribunal, plus the complication of Joe's disability. Ian was at his wit's end and, with a flash of inspiration, called his opposite number, Nick, in HR. Nick and Ian met and, because of his counselling experience, Nick listened effectively and empathetically, and moved Ian away from the dark picture by suggesting that Ian call in an occupational psychologist specialising in eyesight diseases. Ian set up an appointment with the psychologist. The psychologist used a well-known occupational personality inventory that looked, among other attributes, at Joe's potential for aggression. It appeared that Joe was likely to be unassertive, calm, sociable and empathetic to others. The psychologist, using his expertise with visual handicap, was able to diagnose macular degeneration, a serious but not insoluble condition. Joe was terrified that he was going blind and was most reassured and relieved at the psychologist's findings. This fear of blindness was so strong that it had caused a temporary change in Joe's personality. A support system was put in place for Joe. His work was modified to reduce pressure and deadlines. Joe's behaviour and work attitude improved immediately and peer complaints ceased. In fact, both Ian's and Joe's pictures had been changed, reframed into a positive future working relationship, and considerable stress was avoided. Joe now walks around the site, having sold his bike! This example, one of 'changing the picture', or reframing, shows how easy it is for managers and others to see things from one angle only at times of stress. Joe was confused because he thought he was going blind; Ian thought that Joe was causing difficulties at work because of his aggressive personality. Nick, the HR manager, was delighted to be involved with such a positive outcome. Once the picture had been changed for Joe and Ian through skilful counselling, it was possible to move forward to a successful conclusion. This is the objective of the reframing phase of counselling at work.
Career Development with Psychometric Assessment
Nigel is a very bright engineer in the research and development group of an agricultural engineering business. The company employs around 500 people at its Midlands headquarters. The order book is full and machines are exported worldwide. Nigel has a BA (first class honours) degree in engineering from Oxbridge. For some while he had been thinking of approaching his boss, the Head of Research, to ask about promotion prospects. He and his partner had just had their third child and money was getting tight. His work is excellent but his communication skills are poor and he is upsetting peers and other managers with his rather brusque and offhand manner. His manager, Geoff, is becoming concerned by reports from others about Nigel, who is seen as rather bigheaded, opinionated and a bit of a know-all. Others feel uncomfortable around him and he is not fitting in well in the research team, let alone moving ahead to management. At Nigel's recent appraisal, Geoff found it difficult to communicate with Nigel, particularly around the question of how Nigel communicates with others, clearly a key attribute for a manager.
Because Geoff has always done his own selection and recruitment, he has recently gained the British Psychological Society Certificate of Competence in Occupational Testing, Level A (ability and interest instruments), in order to test the technical ability of applicants to his department. As they were both leaving work, Geoff caught Nigel for a moment in reception and asked how things were going. Nigel was rather taken aback but did say that he was not very happy at the moment. Geoff had moved in on Nigel and quickly set up a meeting, next day at 9:00 a.m. The following morning Geoff and Nigel met and quickly established the rules of how Geoff could possibly help Nigel feel better about himself and work. The opening out phase moved quickly, with Geoff letting Nigel know what others in the research team were telling him and with Nigel letting Geoff know how he felt about his colleagues, but also how he wanted promotion. At the end of the meeting, Geoff asked Nigel to complete a Critical Reasoning Test Battery measuring numerical, verbal and spatial abilities (Saville and Holdsworth's Critical Reasoning Test Battery, CRTB). In career counselling terms this move set about asking the question, 'Where are you now?'. Geoff set Nigel up with the administration of the CRTB at a separate desk in his office, taking about 1.5 hours. Geoff carried on working quietly, with no disturbance. When Nigel had completed the tests, they arranged to meet at the same time on the following day. Geoff quickly scored the tests, drew up the report form and decided to take the data home to prepare his feedback session with Nigel the next day. As he was scoring Nigel's answer forms, he noticed with absolute amazement that, out of 160 questions on the complete battery, Nigel had made only two errors. All day Geoff struggled to understand how Nigel could behave as he was doing while being so clever. The next morning at 9:00 a.m., Geoff gave Nigel feedback on his test results. Nigel was extremely surprised to learn that he was at the 99th percentile, meaning that 99% of others in his engineering norm group would have scored lower than him. During their discussion, both were able to reframe the situation. With Geoff's gentle probing Nigel was able to see how he was probably upsetting others by appearing to know it all, coming across as a smart alec and leaving others feeling inadequate, hardly a basis for consideration as a manager. This, coupled with a rather stern, detached communication style (non-verbally rather aggressive looking, non-smiling and seriously intense), meant that Nigel was confronted with himself at work for the very first time. He also said that he felt some of his colleagues were a 'bit dim'. Geoff suggested that part of the difficulty was Nigel's high level of intelligence coupled with his lack of awareness of how he was coming across to others, which was causing communication mismatches with his colleagues in the research team. Nigel was able to reframe his position and understand that, paradoxically, his high level of intelligence was getting in the way of normal communication with others, even though it enabled him to do his job extremely well. Geoff was relieved that Nigel accepted this hypothesis. The meeting ended after 30 minutes and they agreed to meet for 15 minutes at the end of the day. Geoff hoped that this final session, ending, could resolve the issue if he could find someone to work with Nigel on his communication style. Geoff telephoned a management consultancy organisation and found that they had a contact who
could help Nigel on a one-to-one basis. At 5:30 p.m., Nigel accepted the offer of help. In terms of a career counselling or development route, Geoff had set up a programme for Nigel to be able to face the next stages of his development, continuing the route suggested above and moving on to 'Where do you want to be?' and 'How do you get there?'.
CHAPTER 14
Performance Appraisal
OVERVIEW
• Performance appraisal (PA) serves two main purposes: helping decide on pay and promotion, and guiding training and development.
• PA also serves two lesser purposes: providing the data to validate selection assessments, and providing documentation for personnel decisions.
• PA variously assesses people in terms of personality, competencies, output, results/objectives or behaviour.
• PA can be made by the target's manager/supervisor, higher manager, colleagues, subordinates, clients or the target him/herself.
• In 360° feedback or multi-source PA, employees are appraised by manager, colleagues, subordinates, clients and themselves.
• PAs can be made in various formats, mostly using ratings.
• PAs tend to suffer from pervasive leniency, being far more frequently favourable than unfavourable.
• PAs tend to suffer from halo, where conceptually different aspects of people are rated in the same direction.
• PAs tend to be biased by gender, ethnicity and protected minority membership, which may create legal problems for the organisation.
• PAs tend to be biased by liking, physical attractiveness, similarity, in-group membership, and the efforts of the target to secure a good rating.
INTRODUCTION
Assessment of staff continues after they have been selected, in the form of performance appraisal (PA), meaning the periodic formal assessment of work performance. Most employers in the USA have formal appraisal systems. PA is commoner in larger organisations, and is more usual in middle management than at the top. The majority of larger British employers also have PA systems. PA has a long history. In Imperial China in the third century AD people were already complaining about it: 'the Imperial Rater of Nine Grades seldom rates men
according to their merits but always according to his likes and dislikes’. PA became more formal during the nineteenth and early twentieth centuries. The Utopian reformer Robert Owen placed cubes of various colours over workers in his mill according to the excellence, or lack of it, of their work. By World War I the US Army had a fully elaborated PA system: the man-to-man rating system, in which officers were rated for their leadership, intelligence, physical qualities and general value.
PURPOSES OF PERFORMANCE APPRAISAL
PA has a range of functions, but most systems serve one of two main purposes, which we can broadly characterise as:
• reward and punishment
• development.
Reward and Punishment
Employees who get good PAs receive pay increases and promotion, or—in hard times—are rewarded by being retained. Employees who get poor PAs do not get pay increases or promotion, and may be at greater risk of being laid off or terminated. They will also have the uncomfortable feeling in many organisations that they are being watched especially closely, and not very sympathetically.
Development
Other PA systems are primarily geared to helping people improve their work. Developmental PA is used to:
• identify the employee's individual strengths and weaknesses;
• identify the employee's training and development needs;
• create a dialogue, establish expectations and avoid misunderstandings;
• give feedback about performance.
Many PA systems try to fill both roles: reward and punishment, and development. This creates many problems, because the two functions require different sorts of system, at many levels: • Reward-based appraisal needs a global performance rating, to decide who gets promotion, whereas developmental PA needs a differentiated profile of ratings, to identify what Jones is good at, and what Jones could be better at. • Reward-based PA is typically done annually, because that is how often decisions about pay and promotion are made. Developmental PA may need a different
time scale; we may need to review how well Jones is doing the job every month, perhaps even every week. • Reward-based PAs are best made by the people who decide on pay and promotion: supervisors and managers. For developmental PA, the views of colleagues, subordinates and the appraisee him/herself may be more useful. PA also has some less central roles, which include: • providing information on performance which can be used to validate selection methods; • documenting personnel decisions and safeguarding the organisation against fair employment claims. PA is sometimes used to assess long-term potential, to identify high fliers. This reflects the ‘job for life’ approach of former times, so assessing long-term potential may be less relevant for organisations today. Surveys indicate that assessing long-term potential became less common in the 1990s. Assessing long-term potential was sometimes done as a separate exercise from routine PA, and was sometimes done ‘closed’, i.e. not discussed with the employee.
Recommendation: Be clear what purpose your PA system is intended to serve.
THE CONTENT OF PERFORMANCE APPRAISAL
What Should be Appraised?
The first key decision when setting up PA is 'What should we appraise?'. Informal appraisals can be alarmingly vague and subjective (and hard to defend if challenged in court). Gary Latham gives the example of the manager who said of a subordinate 'Smith is a loser' (Latham and Wexley, 1994). The manager probably knew what he meant, and could probably give plenty of examples if asked, but 'loser' is too vague, subjective and pejorative to put on someone's personnel record, where they can these days find it, and object to it. We can distinguish five main approaches to choosing the content of PA:
1 Personality: what successful employees are.
2 Competencies: what successful employees can do.
3 Output: what successful employees produce.
4 Results/objectives: what successful employees achieve.
5 Behaviour: what successful employees do.
Personality: What Successful Employees Are
Many PA systems contain lists of desirable personality traits: confident, cooperative, friendly, bright, right attitude etc. Clive Fletcher (1997) provides an unusual example, used by one UK employer: moral courage. Trait-based PA systems have some advantages; they are easy to generate and are all-purpose; the same list can be used for every grade and type of employee. Their disadvantages include being fairly unhelpful for developmental appraisal; what do you do after you have decided someone is not confident or does not have the right attitude, apart from making them feel resentful and defensive? Trait-based systems are disliked by US courts. In the case of Wade v. Mississippi Co-operative Extension Service in 1974 the court complained about ratings of leadership, appearance, grooming, ethical habits, resourcefulness and loyalty: 'As may be readily observed, these are traits which are susceptible to partiality and the personal taste, whim, or fancy of the evaluator . . . patently subjective . . . and obviously susceptible to completely subjective treatment' (Latham and Wexley, 1994). The management guru Peter Drucker goes even further, and sees trait-based PAs as an unwarranted intrusion:
An employer has no business with a man's personality. Employment is a specific contract calling for specific performance, and nothing else. Any attempt of an employer to go beyond this is usurpation. It is immoral as well as illegal intrusion of privacy. It is an abuse of power. An employee owes no 'loyalty', he owes no 'love', and no 'attitudes'—he owes performance and nothing else (Drucker, 1973).
However, organisations that need people with general attributes, as opposed to very specific skills, may find some traits useful in PA, especially flexibility or openness to change.
Recommendation: Be wary of PA systems that assess personality traits.
Competencies: What Successful Employees Can Do
Competencies have a number of advantages. They are usually generic, so a competency framework can be applied to many jobs within the organisation. They are fairer and offer more scope for training; if one's delegation skills are deficient, one can do something about it. The main disadvantage of the competency approach, in the eyes of psychologists, is vagueness and over-inclusiveness. What aspects of Smith as an employee, or as a member of the human race, are not covered by the list of things in one definition of competence: 'motive, trait, skill, aspect of one's self-image or social role, or a body of knowledge' (Boyatzis, 1981)?
Output: What Successful Employees Produce
Output is closely linked to the organisation's success; if employees do not produce or sell, the organisation will fail. Output is easy to verify where people produce things, sell things or do something that can be easily counted. The disadvantages of output-based PA are less obvious, but nonetheless real. Output measures can be very unreliable, in the technical psychometric sense. One week's output of widgets may not correlate much, or at all, with the next week's; output may reflect all sorts of factors outside the efforts of the individual worker. Sales figures often suffer from the sales area problem. To take a crude example, it is much easier to sell Rolls-Royces in London's West End or on New York's Park Avenue than it is in Scunthorpe or Allentown, PA. So the salesperson who sells two Rolls-Royces in Scunthorpe may be performing much better than the one who sells six in the West End. But how much better? And can we find a fair and accurate way of measuring how much?
Results/Objectives: What Successful Employees Achieve
This is commonly known as management by objectives (MBO). The objectives are agreed between manager and subordinates at one appraisal, and achievement reviewed at the next appraisal. Results-based PA has many advantages. It is objective and easily verifiable. Manager and subordinate decide what shall be accomplished, specify in detail how it is to be accomplished, and specify the time by which it should be accomplished. There is little room for disagreement about success. Results-based PA is closely job related and so easily defensible. Research confirms that setting targets increases employees' motivation. Above all, results-based PA appeals to the organisation, because 'results count'. Results-based PA has some disadvantages, however. It is not easy to compare people when everyone has different objectives, so the system may not work well for deciding on pay and promotion. Some sorts of work lend themselves to clear goals—increase sales by 10%, decrease wastage by 20%—but others do not. What goals might sensibly be set for the HR department? Improve morale by 60%? Reduce problem employees by 40%? Unachievable or vague goals may demotivate staff, who see their peers being rewarded for achieving the clear and easy goals set them. Goal setting can sometimes have unwanted bad effects. It may encourage deviousness in employees, who present easy goals as difficult, to secure themselves an easy time. Goal setting may distort the work itself and encourage wrong priorities. The classic example is the UK government's hospital waiting list target, designed to get surgical operations performed more quickly. Some hospital administrators 'meet' their targets by combing through waiting lists to remove everyone who has moved out of the area, or died before ever getting treatment. This meets the narrow target of reducing the list, but not the broader implicit target of treating illness. In some cases setting goals or targets may encourage violation of codes of conduct or the law.
We also have the familiar problem of HR systems being geared to individuals, when work is done in teams. This can result in competition when cooperation is required, and even in sabotage of a rival’s efforts to achieve one’s own targets. Very often organisations have difficulty correctly attributing results to individuals; successes and failures are not clearly the work of one person, but a complex unanalysable result of many people’s activities. When something goes right everyone claims the credit; when something goes wrong everyone runs for cover and blames the other person. Psychologists are not that fond of MBO, perhaps because managers invented it, not psychologists. They see it as essentially uninformative; it does not tell the employee what to do to succeed. This is graphically illustrated in the film The Main Event, where Barbra Streisand plays the role of someone who knows nothing about boxing, but finds herself manager and coach of a not very successful professional boxer. At the end of the first round, she knows exactly what the problem is and how to solve it: ‘he’s hitting you more often than you’re hitting him. In the next round I want you to hit him much more often’. Irritatingly unhelpful advice! An American survey found two out of three employees thought MBO was of little help in planning their training and development (Daley, 1987). MBO is less often used higher up the hierarchy, and more favoured for more junior staff. It seems to be one of those things that managers think are essential for everyone—except themselves.
Recommendation: Be wary of PA systems based on goals or objectives.
Behaviour: What Successful Employees Do
This places the emphasis on process, rather than person or outcome. Psychologists favour this approach, because it is more analytic: how does one boxer succeed in landing more punches? If we know that, we can plan training and improve performance. This makes the approach fairer, because employees know what to do to improve their work, and get a better PA. The behavioural approach has its share of disadvantages:
• It may be hard to identify one person's performance in interdependent groups, or difficult to identify specific task outcomes.
• Sometimes task behaviour is not observable, e.g. door-to-door sales, police work.
• Sometimes it is not even visible: someone sitting at a desk apparently doing nothing might be analysing a problem, or might be day dreaming.
SOURCES OF PERFORMANCE APPRAISAL
Who Should Make the Appraisal?
Manager/Supervisor
Traditionally PA is done by management; American surveys from the 1970s and 1980s showed hardly any organisations deviated from this. Everyone expects PA to work downwards, because that reinforces the conventional authority structure. The traditional role of the manager is to tell people what to do, check whether they have done it, and issue rewards and sanctions accordingly. The traditional approach has problems. The manager may in practice see little or nothing of the subordinate's work. The manager usually has many other tasks and priorities to fill his/her time. The manager may therefore rely on incomplete and second-hand information. Also, the manager may be biased; his/her opinion of a subordinate's work may be influenced by a long list of factors besides the actual quality of the person's work. Some are idiosyncratic: the manager who does not like people who wear striped shirts, or people who do not sit up straight. This is 'noise in the system': it makes PAs less accurate. Other biases can create a major problem, if the manager is biased against women, or older employees, or non-white employees. Bias matters more in conventional 'downward' PA because only one person does the PA, so his/her biases are not corrected. Biases are discussed on p. 303.
Higher Level Manager
Occasionally PAs are done by someone two or more grades up, the so-called grandfather system. Assessment of long-term potential may be done by career review panels of senior managers. Higher level management define the organisation's goals, so know what they are, and whether someone's work contributes to them. On the other hand, higher level management is less likely to know the subordinate's work, and more likely to base an opinion on reputation, rumour or unrepresentative work: 'Isn't Jones the idiot who set fire to his waste paper basket?' Yes—but that was 15 years ago. Since the 1980s, organisations have started adding the views of other parties to PA. PA that uses a range of inputs is called 360° feedback or multi-source appraisal. The range of people besides the manager/supervisor who can offer a view of someone's work is indicated in Figure 14.1 and includes:
• colleagues/peers
• subordinates
• customers/clients
• self.
[Figure 14.1 The various sources of performance appraisal in 360° feedback: manager, peers, subordinates, customers and self all contribute a view of the target's performance.]
Peers or Colleagues
These are people of the same status, who are doing the same job. This has a number of real advantages, from the organisation's point of view. A given employee has usually only one manager, but a number of colleagues. Using multiple sources irons out the idiosyncrasies that can plague conventional PA. It is unlikely all 10 colleagues will share the manager's dislike of Jones, who parts his hair in the middle or wears a bow tie, and will allow it to colour their view of Jones's work. It is also more convincing to say to Jones that everyone thinks his memos are too long-winded, not just the manager. Secondly, colleagues may see more of the appraisee's work; they see what Robinson does when the manager is not there or not looking. In many workplaces, it is much more difficult to hide idleness or carelessness from one's colleagues than from management. Colleagues can provide a uniquely valuable perspective on someone's work, perhaps a crucially important one, if Robinson is habitually rude or unhelpful to customers. For these reasons we would expect PA from colleagues to be potentially more accurate. Research has shown peer evaluations are reliable, at two levels. Firstly, what other people think of Smith is consistent over time. Secondly, what people think of Smith tends to stay the same, even if Smith moves to another workplace and is assessed by a different set of colleagues. Peer appraisal has a number of snags, however. The most obvious is acceptability. Employees may not like the idea of being watched and reported on by their colleagues, nor the idea of doing the watching and reporting themselves. A unionised workforce is likely to reject the idea altogether. Peer appraisal is more acceptable to employees if it is 'developmental' than if it helps determine pay and promotion. There are subtler drawbacks as well. The first is leniency. All PA systems suffer from this; all PA ratings—whoever makes them—tend to the 'better' end of the scale. As one might expect, leniency is greater in peer evaluation, and greater still when colleagues are also friends. The second snag undermines one of the potential advantages of peer appraisal: multiple sources. If 10 people all say Robinson is brusque to customers, that carries more weight than one person's view—so long as the 10 opinions are independent. They may not be. Ten people may gang up on Robinson and all give the same false account of Robinson's work, because they do not like Robinson, or because two of
them do not like Robinson and turn the rest against him. It need not be a deliberate conspiracy. Once the view has got about that Robinson is brusque, Robinson may have to work very hard to overcome this false perception. People gossip about each other at work, and can easily create a view of someone that is not wholly or even partly correct. Or suppose Robinson is the one outstanding talent in a group of ten: will the other nine recognise this and selflessly give Robinson the glowing PA that will secure his promotion over them? They may, or they may all resent Robinson as 'too clever by half', and all rate him down. The organisation also needs to consider what sort of atmosphere peer appraisal will create. In his novel Nineteen Eighty-Four, George Orwell described the benefits of having everyone surrounded by spies who know them intimately—benefits to an oppressive totalitarian government no one could escape from. Employers who create an atmosphere like that could find everyone leaves as soon as possible.
Recommendation: Think carefully before extending sources of PA to include colleagues. Before introducing such a system, consult all stakeholders.
Subordinates
'Upward' appraisal is used by some organisations, which have included IBM and Ford; it forms part of most 360° systems. Upward appraisal has some of the same advantages as peer appraisal. Multiple sources make it less idiosyncratic, and potentially more accurate. It provides a different perspective on the appraisee's work that may be uniquely valuable. Subordinates may see things the target's manager or colleagues cannot see, for example the target's effectiveness in communicating with staff or ability to create trust. This makes it potentially very useful to the manager's own development. Subordinate appraisal also conveys messages about the environment the organisation wants to create, and is said to be compatible with employee commitment and involvement models of the workplace. Upward appraisal shares some of the disadvantages of peer appraisal. The risk of employees 'ganging up' on a manager is possibly greater. Employees have to be very confident management will not punish those they suspect of giving them poor appraisals. Anonymity is usually essential for upward appraisal, but not necessarily always certain: the manager may have a good idea which one person in a group of ten gave them a poor rating. From the manager's perspective, upward appraisal usurps his/her traditional authority, and may turn management into a popularity contest. Adrian Furnham thinks 360° has 'gone off the boil', and is no longer so popular in the UK (Furnham and Stringfield, 1994).
Recommendation: Think carefully before extending sources of PA to include subordinates. Before introducing such a system, consult all stakeholders.
Customers/Clients
Appraisal by customers gives a different perspective, which may be crucially important for some organisations. If someone habitually alienates customers, the organisation needs to know, but cannot find out from any other sources.
Self
So far we have been asking various other people what they think of Smith's work. Why not ask Smith him/herself? Self-appraisals form part of some standard supervisor appraisal schemes, and self-appraisal also forms part of 360° systems. Self-appraisal may provide a different and unique perspective: everyone knows how good their own work really is, and how—especially—it could be improved. A whole range of other relatively intangible benefits can be listed as well. Self-appraisal shows trust in staff, and enhances the employee's dignity and self-respect; it reduces defensiveness, and enhances self-motivation; it makes the manager a counsellor rather than a judge. Self-appraisal may create self-insight, and increase the employee's understanding of the need for drawing up their own development plan. The obvious problem with self-appraisal, which occurs immediately to everyone, is: will it work in practice? Is it not obvious that people will give themselves high ratings, through self-esteem or self-interest? Quite extensive research indicates that people can give moderately accurate estimates of their own performance and proficiency, but will they in practice? One would need to be very altruistic to refrain from awarding oneself promotion or more money! Research confirms what critics predict; self-appraisals are more favourable—or 'lenient'—especially if they are used to determine pay or promotion. Interesting gender differences emerge in research on British managers. Self-ratings by female managers are closer to ratings from others, whereas self-ratings diverge more in male managers, suggesting male managers are less honest or less self-critical. Inflation in self-appraisal can be reduced by asking for documentation, or by telling people that self-ratings will be checked against objective criteria. Research at General Electric in the USA asked staff how they thought they were performing, compared to everyone else (Meyer et al., 1965). On average people thought they were doing better than 75% of the rest. This obviously creates major problems when PA is linked to pay; if most employees consider themselves above average, most will expect to get extra pay, and most will be disappointed! The 'advantage' of self-appraisal, of providing a unique perspective, turns out also to be a problem, because of the fundamental attribution error. This means we give different explanations of our own and others' successes and failures. We explain others' performance in terms of effort and ability; Smith failed because Smith is stupid or does not try hard enough. But when we explain our own outcomes, we see things differently; our own failings are caused by bad luck, or problems created by other people.
Recommendation: Use self-ratings in PA with caution.
As Clive Fletcher (1997) notes, 'people have the capacity to be reasonably accurate in reporting on their own behaviour. Whether they will deploy that capacity is another matter'. Fletcher suggests self-appraisal may be useful in certain circumstances:
• when the supervisor sees less of the person appraised;
• when appraisal carries no reward implications;
• when the person is not being compared with other people.
360° FEEDBACK OR MULTI-SOURCE APPRAISAL

In conventional PA, the employee is appraised from one direction only: from above. In 360° appraisal, the employee is appraised from every angle: from above by the supervisor, from sideways by peers, from below by subordinates, and from inside, by him/herself (Figure 14.1). This was first suggested in the 1960s, and used by Gulf Oil in the 1970s. In the UK, pioneers of 360° feedback included the Automobile Association, WH Smith and British Aerospace. It became very popular during the 1990s. Clive Fletcher thinks there are big cultural differences in attitudes to PA, especially to non-traditional forms, so that 360° is more acceptable in the USA than in the UK, and probably more acceptable in the UK than in the rest of Europe. By 1998, 90% of companies in the Fortune 1000 were using some form of 360°.

In most 360° systems, the various sources (manager, peers, subordinates) are kept separate. Where peers and subordinates make appraisals, there should be at least five persons, to ensure anonymity. Feedback should give the distribution of ratings as well as their average. Some employers make 360° an option; when they do this, around 60% of employees choose to participate. When 360° first came in, most employers showed the results to the target only, assuming he/she would digest the implications and change his/her behaviour accordingly. Now it is far commoner for the target's manager and HR to see the results too; 360° is expensive, in time and money, so organisations want to be sure it is acted on. When it first came in, 360° was always a developmental appraisal; now some employers also use it for pay and promotion.

The obvious first question about 360° is: do the different sources agree with each other? Analysis of much research gives the answer shown in Figure 14.2:
• peer and supervisor evaluations agree fairly well (correlation of 0.62);
• self × supervisor and self × peer evaluations agree less well (correlations of about 0.35).
What other people think of your work seems fairly consistent, but does not agree very well with what you think of it. Recall that you have a unique insight into your work and its quality, but also have perhaps a uniquely strong interest in seeing it as good.
Figure 14.2 Agreement between self, peer and supervisor ratings of work performance: peer × supervisor 0.62; self × supervisor 0.35; self × peer 0.36.
Source: Data from Harris and Schaubroek (1988).
The second question is: should the different sources agree? If managers, colleagues and subordinates all say exactly the same things about Smith's work, 360° would be an expensive waste of time. Why ask 20 people when one will do? The justification of 360° is that different sources see different things, so the use of all four sources gives a rounded and complete picture. Walter Borman (1974) offers a conceptual analysis of what different sources are able to see of Smith's work. Table 14.1 shows that subordinates generally see how the supervisor deals with people, but are much less likely to see how well he/she completes tasks. The supervisor, by contrast, will see the results of the subordinate's work, but not how he/she achieves it.

The third crucial question is: does 360° succeed in being any less biased than traditional supervisor ratings? We do not have an answer to this one yet (although Clive Fletcher's research, described in the case study, suggests a possible age bias).

The fourth question is the simplest: is 360° worth the extra time and money? Does it help the organisation in any way?
Table 14.1 What different sources can see of the target person's performance, distinguishing four aspects: dealing with tasks and with people, what they do and what they achieve

                                    Manager        Peers        Subordinates   Self
Target's task performance
  Behaviour                         Occasionally   Frequently   Rarely         Always
  Results                           Frequently     Frequently   Occasionally   Frequently
Target's interpersonal behaviour
  Behaviour                         Occasionally   Frequently   Frequently     Always
  Results                           Occasionally   Frequently   Frequently     Frequently

Source: Data from Borman (1974).
Does it improve performance, to a greater extent than conventional PA or conventional general good management? We do not have an answer to this one either.
PERFORMANCE APPRAISAL FORMATS

The next decision we need to make about PA is the format to use: how the opinions will be recorded. Again, we have a range of possibilities to choose from. Early, informal systems used an open-ended format: 'tell me what you think of J. Smith in your own words'. Examples dating from the US Army as early as 1813 included 'good natured man', and the terse but damning 'knave despised by all'. This is not very useful in large-scale systems, especially ones used to decide pay and promotion. We need something that explicitly compares people on a common standard. Various systems of comparison are used:
• checklists
• rating scales
• paired comparison, ranking and forced distribution
• forced-choice formats
• Behaviourally Anchored Rating Scales (BARS)
• Behavioural Observation Scale (BOS).
Recommendation: Include some structure in your PA system.
Checklists and Rating Scales

Figure 14.3 shows some examples of checklists and rating scales. Rating scales allow more detail than a simple yes/no. Most rating scales have anchors, or labelled points for the rater to check. Some anchors are numerical; others have adjectives or adverbs to indicate frequency. A 5- or 7-point scale is best; with more than 7 points, people tend to be unable to make reliable distinctions: whether someone's performance is a '9' or an '8' on an 11-point scale, for example.
Paired Comparisons, Ranking and Forced Distribution

Conventional rating methods suffer badly from leniency: giving everyone good ratings. Many attempts have been made to find appraisal formats that prevent leniency. Paired comparison requires the appraiser to say which of two employees has the better communication skill, rather than take the easy way of saying both are very good. Paired comparison gets laborious with more than half a dozen staff to compare. Ranking requires the appraiser to place all employees in order from best to worst in communication skills.
Checklist
  Punctual                    yes / no
  Aware of the bottom line    yes / no

Graphic rating scales
  Well-organised   ------------------------   Badly organised
  Well-respected   ------------------------   Not respected at all

Anchored rating scales
  Persuasive          7—6—5—4—3—2—1                                  Not persuasive
  Helicopter vision   always / usually / sometimes / rarely / never

Figure 14.3 Examples of various types of PA rating scales
Note: 'helicopter vision' means being able to look down on a problem, as if from a helicopter.
Both methods may force the appraiser to distort reality, and say Smith is a better communicator than Jones, when actually the appraiser thinks they are both exactly the same. This makes both methods unpopular with appraisers. Both methods make comparisons within the work force, rather than against a standard. Smith might be the best communicator in the group, but still be fairly poor compared with people in other departments. Ranking tends to make global overall evaluations of staff, which helps decide on pay and promotion but is very much less useful for staff development. Some PA systems use a forced distribution approach (Figure 14.4). For example IBM in the 1980s required appraisers to place 10% in the top category, and 10% in the bottom category. The fortunate 10% in the highest band got a bonus, while the luckless 10% in the lowest band got three months to improve or be terminated. Forced distribution is useful for planning salary budgets, because the size of the top category can be geared to the amount available for increases. On the downside, forced distribution may encourage competition at the expense of teamwork, and can create resentment. Legal problems could also arise, if someone is terminated simply because they were at the bottom of the rank order— because their work might still be satisfactory, or not sufficiently unsatisfactory to justify termination. Psychologists also note that forced distribution systems assume we know the true distribution of performance, i.e. 10% excellent, 20% good, 40% average etc. We do not, and many groups clearly do not conform to a 10/20/40/20/10 distribution.
Performance:                Very good   Good   Average   Poor   Very poor
Percentage of employees:    10%         20%    40%       20%    10%
Names of employees:         ...         ...    ...       ...    ...

Figure 14.4 Example of a forced distribution PA system
We might have 12 workers, 11 of whom are competent but not outstanding, while one is conspicuously poor.
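To make the mechanics concrete, here is a minimal, purely illustrative sketch in Python (the book does not prescribe any particular rule, and all names and ratings here are invented) of a crude rank-then-slice forced distribution applied to twelve such workers. It shows the problem directly: a competent worker is inevitably pushed into the bottom band.

# Illustrative only: forcing ranked ratings into a 10/20/40/20/10 distribution.
ratings = {              # overall PA ratings for twelve workers (invented)
    'A': 4.1, 'B': 4.0, 'C': 4.0, 'D': 3.9, 'E': 3.9, 'F': 3.9,
    'G': 3.8, 'H': 3.8, 'I': 3.8, 'J': 3.7, 'K': 3.7, 'L': 1.5,
}
bands = [('Very good', 0.10), ('Good', 0.20), ('Average', 0.40),
         ('Poor', 0.20), ('Very poor', 0.10)]

ranked = sorted(ratings, key=ratings.get, reverse=True)   # best first
n = len(ranked)
allocation, start = {}, 0
for label, proportion in bands:
    size = round(proportion * n)                          # crude rounding
    for name in ranked[start:start + size]:
        allocation[name] = label
    start += size
for name in ranked[start:]:                               # any rounding remainder
    allocation[name] = bands[-1][0]

for name in ranked:
    print(name, ratings[name], allocation[name])

Run on the invented figures above, a worker rated 3.7 (barely different from the 'Average' group) ends up labelled 'Very poor', which is exactly the distortion described in the text.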
Forced-Choice Format

Forced-choice format was yet another answer to the leniency problem. Figure 14.5 shows some typical forced-choice PA forms.
Forced-choice format appraisal

For each line, check one statement, and only one, which best describes the appraisee.
1. [A] completes all paperwork on time           [B] writes clear and accurate memos
2. [A] not committed to organisation's aims      [B] workspace often messy
3. [A] aware of possible conflict of interest    [B] a good listener

For each set of statements, indicate one only that best describes the appraisee, and one only that least well describes the appraisee.
1. pursues his objectives with ruthless disregard for others
2. always has a kind word for everyone
3. always quick to grasp the point at issue
4. rambles round the point interminably

Figure 14.5 Examples of forced-choice PA formats
In question 1, answer A scores for time management, whereas answer B scores for good communication. Hence the appraiser cannot say the employee is good at everything. Question 1 also illustrates the problem forced-choice creates. Suppose the employee completes all his paperwork on time and writes good memos, or suppose he fails on both counts?

Some forced-choice systems try to conceal from the appraiser the themes or competences being appraised, and whether they are giving 'good' or 'poor' appraisals. A system that is disliked by the people who have to use it rarely proves successful in the long run. Few appraisal systems now use forced-choice formats.
Recommendation: Avoid forced-choice format PA systems.
Behaviourally Anchored Rating Scales

The Behaviourally Anchored Rating Scales (BARS) system, invented in the early 1960s, is widely used; Figure 14.6 shows a typical BARS. Behavioural anchors are brief descriptions against the scale points, intended to give the appraiser a concrete example of performance at that level. One problem with numerical rating scales is that people use them in different ways: one appraiser might think a particular level of performance is a '3', whereas another, who sees exactly the same performance, considers it a '4'. They do not necessarily differ in their opinion of the performance, just about the right number to give it. In Figure 14.6, a very good communicator can explain complex procedures very clearly, whereas a poor one seems oblivious to his/her staff's concerns.

Developing BARS is a lengthy, elaborate and hence expensive process. The main steps are:
1 Experts generate hundreds of examples of good and poor performance (critical incidents).
2 These incidents are sorted into 5–10 main themes, which represent underlying themes in effective work performance, e.g. good communication. This defines the main competences to be appraised.
3 Experts rate each incident for the grade of performance it represents.
4 Experts select incidents to form the behavioural anchors that give the system its name.
The first two steps may already have been carried out as part of competence analysis.

BARS has a number of advantages:
• It uses the appraisers' (and appraisees') own concepts and language, because they make up most of the expert panel that devises the system. Hence BARS avoids imposing someone else's conceptual system on appraisal.
• A BARS is difficult to put together badly, whereas a graphic rating scale is easy to put together badly.
• Good and poor performance are defined in behavioural terms, so people know what to do to improve their work performance.
• The lengthy construction process saves some time and money by giving those involved some training in the nature and use of BARS.
• The same construction process encourages staff to consider what is meant by good or poor work.

The chief disadvantage is cost. BARS has to be done afresh for every job or group of jobs, whereas a simple rating of 'attitude' or 'responsibility' can be used for every job in the organisation.
EFFECTIVE COMMUNICATION

9 (High)   Can explain the most complex procedures very clearly
8          Listens carefully to any problems that staff have
7
6          Usually tells everyone about changes of plan
5          Sometimes forgets to tell you when he/she wants something
4
3          Never tells people what they should be doing
2          Oblivious to staff's concerns
1 (Low)

Figure 14.6 Behaviourally Anchored Rating Scale (BARS)
Recommendation: For a ‘deluxe’ PA system, consider using BARS.
Behavioural Observation Scale (BOS)

This system was developed in the Canadian logging industry by Gary Latham, and is the main rival to BARS. Figure 14.7 gives a short extract of a much longer BOS PA rating. The principal feature of BOS is that it tries to remove evaluation from PA, and use only reports of the frequency of observed behaviour. The BOS system infers from frequency of behaviour to evaluation. In Figure 14.7, all the items describe how well the appraisee communicates with staff, and were devised, as in BARS, by collecting hundreds of examples of good and poor performance, then sorting them to identify themes. When the BOS is scored, frequencies are converted into evaluations, by saying that certain frequencies of, for example, not listening to staff are very poor, other frequencies are poor, while other frequencies are very good. Figure 14.8 gives an example.
Effective communication

(1) Listens carefully to any problems that staff have           almost never  1  2  3  4  5  almost always
(2) Tells people clearly what they need to do                   almost never  1  2  3  4  5  almost always
(3) Shows he/she is aware of staff's concerns                   almost never  1  2  3  4  5  almost always
(4) Tells people clearly when task should be completed by       almost never  1  2  3  4  5  almost always
(5) Explains complex procedures clearly                         almost never  1  2  3  4  5  almost always
(6) Remembers to tell everyone about things he/she wants done   almost never  1  2  3  4  5  almost always

Figure 14.7 Behavioural Observation Scale
Very poor: 6–15   Poor: 16–20   Adequate: 21–24   Good: 25–27   Very good: 28–30

Figure 14.8 Scoring Behavioural Observation Scale
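A minimal sketch in Python (our illustration, not from the book) of how the six frequency ratings of Figure 14.7 might be converted into the overall bands of Figure 14.8; as the next paragraph explains, an employer can redraw the band boundaries.

def bos_evaluation(frequencies, bands=((28, 'Very good'), (25, 'Good'),
                                       (21, 'Adequate'), (16, 'Poor'))):
    # Convert six 1-5 frequency ratings (Figure 14.7) into the overall
    # evaluation bands of Figure 14.8; the bands can be redrawn by the employer.
    total = sum(frequencies)          # ranges from 6 to 30
    for cutoff, label in bands:
        if total >= cutoff:
            return total, label
    return total, 'Very poor'         # 6-15

print(bos_evaluation([5, 4, 5, 3, 4, 5]))   # sums to 26, so 'Good'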
The scale for conversion of frequencies into a rating is set partly by the employer, who might want to reserve the highest rating on effective communication for consistently good communication, while broadening the poor rating to cover a much wider range of frequencies of inept communication. The appraiser does not evaluate the employee, and, in theory, does not know what rating the employee will get.

BOS has two main disadvantages. First, it is necessarily very much longer than BARS or other rating systems. Every aspect of communication has to be rated, not just overall communication skill. The six scales in Figure 14.7 provide only as much information as the single BARS in Figure 14.6 (but proponents of BOS will argue they provide much higher calibre information). Length is not necessarily a disadvantage; if PA is important, the organisation should allot enough time for appraisers to do it really thoroughly.

The second possible snag is more serious. It is not clear whether BOS succeeds in restricting appraisers to describing behaviour. A pervasive problem in PA is 'working backwards'. The appraiser starts by deciding whether Smith is to get a good appraisal or not. In fact, the appraiser may start one stage further back than that, deciding whether Smith is to get a pay rise or not. The appraiser knows that a good appraisal, which will ensure promotion, must not include more than a very low proportion of less than perfect marks. Smith has to get eight '5's on a set of 5-point scales, and no more than two '4's, even if in fact three aspects of Smith's performance are less than perfect. BOS should prevent appraisers thinking like this, but will it? If they can identify which behaviours are 'good', they can rate their frequency as 'almost always' and ensure a good overall PA.

BOS works best for jobs which can be specified in precise detail. A BOS for a caretaker/janitor can specify every object or surface that needs to be cleaned: floor, wall, ceiling, washbasin etc., then everything that needs to be changed or replenished: soap, towels, toilet rolls etc. It may run to over a hundred scales. BOS may be less suitable for more open-ended jobs, and, crucially, much less suitable for rapidly changing jobs.

Some experts think the format of PA is not all that important. Research indicates that format does not affect key questions such as agreement between PA and objective measures of work performance, whether ethnicity differences are found, agreement between sources (self, peers and supervisors), or leniency.
EVALUATING EFFECTIVENESS OF PERFORMANCE APPRAISAL

How can we show that PA systems work, or that they are accurate? At first sight it ought to be easy. All we have to do is show that people with higher PAs do better work.
But what do we mean by 'better work', and how do we define it? For most work, the conventional measure of how well someone does their job is their performance appraisal, so we have nothing else to compare the PA with. For some jobs, we can find an objective and quantifiable index: units produced or value of sales, for example.

The first problem with an objective criterion is that most jobs lack one. How do you count the output of a general manager? His/her activities are too diverse and intangible to quantify. The other problem is that where you do have an objective, countable index of work performance, it agrees very poorly with PA. Several reviews have shown the correlation between output and PA is 0.35 at best, i.e. after making every statistical correction you can. There is a link, but it is very weak. In practice we should regard PAs and countable output as measuring different things. Some psychologists have suggested that PA identifies 'satisfactory employees'. A satisfactory employee may be a productive one, but not necessarily; he/she may be someone who has found other ways of 'satisfying' the supervisor, by being helpful, agreeable, a 'good citizen', or by outright ingratiation. A productive employee who does not do any of these things may get a poor PA, if the supervisor does not look at output, or if there is no output to see.

We could also approach the problem through the 'bottom line'. An organisation with a good PA system should operate more efficiently than one that lacks it, and so should, other things being equal, be more profitable. Therefore comparing one hundred organisations with good PA systems with one hundred with no PA or poor PA systems should show the good PA group are more successful. Comparisons of this type have been reported for selection, but not for appraisal.
ACCURACY IN APPRAISAL

The Accurate Appraiser

Research has uncovered some characteristics of people who make more accurate appraisals. Most of this research uses the 'video person' method, not 'real' appraisals: accuracy of appraisal is assessed using specially prepared films of people doing a job well, averagely or badly.
• People who have experience as assessors in ACs make more accurate PAs.
• More effective managers make more varied and less lenient PAs.
• Less well educated appraisers make more lenient PAs.
• Accurate appraisers are free from self-doubt, high in self-control and pay attention to detail.
• Accurate appraisers have higher field independence; they can pick out details clearly and are influenced less by the 'whole picture'.
• Accurate appraisers have higher cognitive complexity: a more elaborated set of concepts for understanding other people.
• Accurate appraisers are higher on verbal reasoning and general intelligence.
Kevin Murphy makes an important distinction between judgement and rating (Murphy and Cleveland, 1995):
• Judgement: what the appraiser thinks and knows about the employee's work.
• Rating: the actual appraisal the appraiser gives.
Why the distinction? Because Murphy thinks appraisers can usually make accurate judgements, but often choose not to give accurate ratings. Much of the research on accuracy in appraisal fails to make this distinction, which makes much of that research beside the point.
LENIENCY

PA notoriously suffers from 'pervasive leniency'. There are far more 'above average' employees than 'below average', although logically there ought to be the same number of each. Leniency is a major problem for PA systems, because it undermines virtually any purpose they might serve. If everyone is excellent, we cannot choose whom to promote; if everyone is excellent, we have no areas of improvement to suggest.
Recommendation: Calculate average ratings given on your PA system. Calculate an overall average, an average for each competence and an average for each appraiser. Compare these with the mid-point of the scales. Is the average rating on the ‘lenient’ side?
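If the ratings are held electronically, this check is a few lines of code. A minimal sketch in Python, assuming one record per rating of the form (appraiser, competence, rating) on a 1–5 scale; the field names and figures are invented for illustration.

from collections import defaultdict

# (appraiser, competence, rating) records; data invented for illustration
ratings = [
    ('Jones', 'communication', 5), ('Jones', 'planning', 4),
    ('Jones', 'communication', 4), ('Patel', 'planning', 3),
    ('Patel', 'communication', 4), ('Patel', 'planning', 5),
]
SCALE_MIDPOINT = 3.0   # mid-point of a 1-5 scale

def mean(values):
    return sum(values) / len(values)

overall = mean([r for _, _, r in ratings])
by_competence = defaultdict(list)
by_appraiser = defaultdict(list)
for appraiser, competence, r in ratings:
    by_competence[competence].append(r)
    by_appraiser[appraiser].append(r)

print('Overall average: %.2f (scale mid-point %.1f)' % (overall, SCALE_MIDPOINT))
for competence, vals in by_competence.items():
    print('  competence %-15s %.2f' % (competence, mean(vals)))
for appraiser, vals in by_appraiser.items():
    print('  appraiser  %-15s %.2f' % (appraiser, mean(vals)))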
We can easily list some obvious reasons for leniency in PA:
• Face-to-face feedback. If the appraiser has to present the appraisal to the employee, face to face, he/she will be more lenient. It is easier to be harsh about someone from a safe distance!
• Working relations. Giving someone a poor PA can create resentment and hostility, especially if pay and promotion are involved. Giving several people poor PAs could make the atmosphere in the department very tense, for weeks or even months.
• Self-preservation. Hostility can sometimes turn into actual aggression. At least one case of murder motivated by a poor PA has been described in the USA.
• Appraisal purpose. Where PAs are done 'for real', meaning they will affect pay and promotion, they are generally more lenient. Where ratings are done for research, and will have no impact on the employees, they are generally less lenient.
• Supervisor rewards. If supervisors' pay and promotion are linked to their subordinates' performance, rating inflation is very likely to occur. Giving your staff low PAs will directly reduce your own prospects.
• Conformity pressure. If other supervisors all give high ratings, the ones who do not might come under pressure to follow suit.
• 'High-status department' effect. In the US Navy, rating inflation is greater in high-status units, e.g. nuclear submarines, than in low-status units such as supply and tanker fleets, which are sometimes seen as 'dumping grounds' for mediocre personnel. High status 'rubs off' on the individual; if Smith is in nuclear subs, Smith must be good.

Kevin Murphy thinks leniency has a lot to do with rewards and costs. Most organisations do not check the accuracy of PAs, and so do not reward accurate appraisers. The rewards of making accurate but unfavourable appraisals are uncertain. They may be largely theoretical: it helps the organisation improve. The costs, by contrast, are certain and highly visible: resentment, bad feeling, maybe worse. There is nothing to stop the appraiser giving everyone favourable ratings. And as Kevin Murphy has argued, there are lots of pressures on appraisers to do just that. Murphy thinks devising elaborate formats, to try to limit leniency, is misguided effort. The problem is motivation; appraisers can give objective, and sometimes unfavourable, appraisals, if they want to. The problem is they very often do not want to.

Murphy has some suggestions for possible ways of reducing leniency:
• Reward appraisers for making accurate ratings, by giving them a bonus, or a better PA rating. Murphy notes that he knows of no organisation that considers the accuracy of a manager's PAs when it is his/her turn to be appraised.
• Use multiple appraisers to diffuse the responsibility for low PAs.
• Find some other basis for pay and promotion decisions, and use PA for development only.
Recommendation: If your PA system shows too much leniency, review it. Ask yourself how your system is working, and why you are doing PA.
THE HALO EFFECT

Suppose you have a PA system in which employees are rated on the following six aspects of work performance:
1 analytical ability
2 conscientiousness
3 leadership and persuasiveness
4 coping with stress and pressure
5 being able to get on well with others and be liked by others
6 adaptability and openness to new ideas.
At the end of the first year’s operation of the PA system, you employ a psychologist to calculate correlations between the six. What would you expect him/her to find? There are several possibilities:
1 The six aspects will be unrelated. People who are open to ideas are not necessarily persuasive. People who are analytical are not necessarily conscientious.
2 Some aspects will be related, but others will not be. People who are good at getting on with others are adaptable, but not necessarily conscientious.
3 The six aspects will be very closely related. The employee who is good at analysis is conscientious, is a good leader, can cope with stress, gets on well with others etc. A good employee is good at everything; a poor employee (assuming the PA system identifies any and does not suffer from pervasive leniency) cannot do anything right.

Your psychologist will almost certainly find result number 3: all six PA ratings are highly correlated. This is the halo effect: the good employee is good at everything, is good in every possible way, is good to an extent that is not really very likely. This has been documented in numerous studies of PA. It also tends to happen whenever people are asked to rate two or more aspects of someone's performance or behaviour. It happens in interview ratings, in ratings in references and in assessment centre ratings.

Halo is a big problem in PA for two reasons. First, if your PA system is meant to be developmental, global ratings of 'how good' Smith's work is are not very useful. You want PA to differentiate, and tell you which parts of the job Smith could improve on. Second, you do not know whether it is really true that 'good work performance' is a general characteristic running through all employees' performance, or whether it is an oversimplification imposed on their behaviour by your PA system. Are analytic ability and conscientiousness 'really' very closely related, or is it just that the appraisers think they are? If your only source of information about analytic ability and conscientiousness is the PA system, you can never answer this very important question.

In the list of six competences above, the characteristics being assessed are the big five and intelligence. As assessed by ability test and personality questionnaire, these six aspects of people are independent, that is, uncorrelated. So they probably ought to be independent in your PA system too.
Recommendation: Calculate correlations between competence ratings in your PA system. Expect to find a lot of overlap.
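A minimal sketch of that check in Python, assuming one row of six competence ratings per employee (all data invented); consistently large correlations between different competences would point to halo.

import statistics

# one row per employee: ratings on the six competences listed above (invented)
competences = ['analytical', 'conscientious', 'leadership',
               'stress', 'interpersonal', 'adaptability']
rows = [
    [4, 4, 5, 4, 4, 5],
    [3, 3, 3, 2, 3, 3],
    [5, 4, 5, 5, 4, 4],
    [2, 3, 2, 2, 3, 2],
    [4, 5, 4, 4, 5, 4],
]

def correlation(xs, ys):
    # Pearson correlation between two lists of ratings
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

for i, a in enumerate(competences):
    for j, b in enumerate(competences):
        if j > i:
            xs = [row[i] for row in rows]
            ys = [row[j] for row in rows]
            print('%s x %s: %.2f' % (a, b, correlation(xs, ys)))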
SOURCES OF BIAS IN PA

The PA is one person's opinion of another person, and so open to bias. Bias will obviously make a PA system less accurate, and probably less useful. Unfortunately, quite a few types of bias in PAs have been documented. Some types of bias must be taken very seriously by the organisation because they are easy to prove, and covered by fair employment law:
• gender
• ethnicity
• disability
• age.
PA affects remuneration and employment, so comes under the same laws as selection. For gender and ethnicity, adverse impact analysis is possible. If women or minorities get poorer PAs, there is a presumption of bias which the employer must disprove, which is likely to prove difficult.
Recommendation: Check your PA data for gender and ethnicity differences. If numbers permit, do this check for each appraiser.
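A minimal sketch of such a check in Python, here for gender only and using invented data; it compares group mean ratings and the rate of 'good or better' ratings. A real adverse impact analysis would need larger numbers and proper statistical testing.

# (employee, gender, overall PA rating) records; invented for illustration
records = [
    ('e1', 'F', 4), ('e2', 'F', 3), ('e3', 'F', 5), ('e4', 'F', 4),
    ('e5', 'M', 4), ('e6', 'M', 5), ('e7', 'M', 5), ('e8', 'M', 4),
]

def group_stats(records, group):
    scores = [r for _, g, r in records if g == group]
    high = sum(1 for s in scores if s >= 4)          # 'good or better' ratings
    return sum(scores) / len(scores), high / len(scores)

for group in ('F', 'M'):
    mean_rating, high_rate = group_stats(records, group)
    print(group, 'mean rating %.2f' % mean_rating, 'high-rating rate %.2f' % high_rate)

f_rate = group_stats(records, 'F')[1]
m_rate = group_stats(records, 'M')[1]
ratio = min(f_rate, m_rate) / max(f_rate, m_rate)
print('Ratio of high-rating rates: %.2f (a value well below 0.8 deserves a closer look)' % ratio)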
Other sources of bias have been documented, which do not present adverse impact problems. However, they are often very visible to employees, and can create great resentment. In American surveys, unfair bias in PA is listed as one of the ten biggest problems in HR work. These biases include:
• liking
• physical attractiveness
• similarity
• in-group/out-group.
Gender

American research finds no obvious evidence of gender bias in PA. The gender of appraiser or appraisee does not make much difference. In fact there is some evidence that appraisals tend to favour women. On the other hand, the glass ceiling problem implies women get poorer PAs, because they do not get promoted so often. The gender stereotype of the job may make a difference. Men doing 'male' jobs may get better ratings than men doing 'female' jobs. Similarly, females in 'female' jobs get better PAs. One review suggested supervisors tend to mark down people doing 'wrong gender' jobs.

The risk of gender bias in PA can be reduced by providing more task-related information, and by the use of clear performance criteria. Employers can usefully check their PA material for possible gender bias. The Xerox Corporation in the USA used a panel of senior female managers to review their PA material; the panel suggested, for example, changing the definition of leadership from 'intense desire to win' to 'intense desire to succeed', as being more gender neutral.
Recommendation: Get your PA documentation examined for wording that might indicate possible bias.
Critics have argued that research on gender and PA may be unrepresentative of general practice, because only ‘good’ organisations, with enlightened policy and practice, will allow researchers in to study gender and PA.
Ethnicity

Analysis of 74 separate American studies found black Americans overall get poorer PA ratings (Kraiger and Ford, 1985). Critics have noted a tricky ambiguity in this finding: is this bias, or are appraisers accurately reporting true poorer performance? Employers would be unwise to assume the latter unless they have very solid proof.
Own-Race Bias

Do raters favour their own race? That is, do white appraisers favour white employees, while black appraisers favour black employees, and so on? The same analysis of 74 studies found small but consistent own-race bias in PA ratings. The bias is not very large statistically, so we might discount it as unimportant, if it were not such a sensitive issue. Own-race bias has also been found in 360° feedback, although there is much less research as yet. Own-race bias was found in black Americans for boss, peer and subordinate ratings, and in white Americans for boss and peer ratings, but not for subordinate ratings.
Recommendation: Record gender and ethnicity of appraiser as well as appraisee.
Age

A very important review compared PAs with objective performance data. The review made two sharply contrasting findings:
• older employees get poorer PAs;
• objective indices show older employees are more productive.
The two findings provide clear evidence of bias in PAs against older staff, and enable us to rule out the alternative explanation that older employees 'really' are poorer workers. The effect was greater for non-professional grades of staff.
The US case of EEOC v. Sandia Corporation concerned age discrimination by PA. The employer's system was considered to be subjective, unvalidated and to have a built-in bias against older employees.

Leniency is linked to seniority: senior people get better ratings when the ratings are used for pay and promotion, but not when the ratings are done only for research.

Gender, ethnicity and age effects are easy to document, because gender, ethnicity and age are (more or less) unambiguous, and in many countries must by law be recorded. Other biases are less easy to prove, because the information is less accessible or less visible, but they have been documented by research.
Liking

Clinton Longenecker and colleagues' (1987) survey of how managers make PAs found managers generally quite open about allowing liking for employees to colour their PAs. Other research finds that the more closely acquainted the appraiser is with the employee, the better the appraisal tends to be, which suggests a link with liking.
Physical Attractiveness

Some American research strongly implies PAs are biased by global physical attractiveness (Frieze et al., 1991). This used the American college yearbook to do a 'reverse follow-up'. The groups were studied when they were mostly aged around 30, and their salary levels noted. They were all graduates from the same university, and had all been photographed ten years earlier for their college yearbooks, using standard-format, professionally done photographs, which were rated for global physical attractiveness. Beauty may lie in the eye of the beholder, but such ratings achieve a high level of consensus: we can all agree who is good-looking and who is not. Good-looking people earn more, which implies their PAs are better and have secured them more promotion. It is quite a large effect: the ratings were made on a five-point scale, and each point on the scale is worth $2500 in higher salary, so the most good-looking are earning $12 500 more than the least.
Similarity

Do managers give better appraisals to people who are similar to them in outlook and interests? There is evidence of a fairly weak bias for actual similarity, but a stronger one for perceived similarity: managers give better PAs to people they think are like them.
In-Group/Out-Group Bias

Many supervisors or managers have an in-group and an out-group in their teams. In-group members are treated as valued colleagues, and get more attention and resources, and the manager adopts a participative style in managing them. The out-group, by contrast, are treated more as if they were temporary employees, and the manager adopts a more authoritarian style. In PA, members of the out-group get poorer PAs, but do not do worse on objective performance measures, suggesting a bias at work.
Organisational Citizenship

Employees higher in organisational citizenship tend to get better PAs. To some extent this is justified, as they are expressing greater willingness to help the organisation and other employees. Recall, however, that citizenship and actual task performance are not that closely linked, so some good citizens are not actually good workers.

Some sources of bias in performance appraisal are contributed by the appraisee. The appraisee sets out to secure a good PA by a range of ploys, rather than by doing good work. Sometimes appraisees succeed in 'fooling' the appraiser into giving them a better rating than they deserve.
Ingratiation

We have all observed workplace ingratiation, and have an extensive vocabulary to describe people who engage in it (often using words dictionaries list as 'not in polite usage'). American work psychologists have devised tests of ingratiation, such as the aptly named MIBOS measure. Research indicates that ingratiation does not necessarily secure better PAs. It may affect the manager's view of the employee, but not the manager's PA rating.
Building a Reputation

The best way of getting oneself a good reputation at work ought to be doing good work. But we have all met people who have contrived to make themselves well thought of, without actually doing any good work. Mark Cook (1995) analysed some of the ways people do this:
• Non-working day. Only about half a manager's day is spent doing core 'job description' tasks; the rest is spent in more nebulous activities, including polishing your image.
• 'It's who you know, not what you know'. This indicates the importance of 'networking', i.e. creating allies and persuading them your work is wonderful.
• Attributability. In organisations it is often hard to work out who really is responsible for things happening, so the seeker after a reputation claims credit for successes and avoids blame for failures.
• Reorganisations. These often make it easier to claim credit and avoid blame by further obscuring who was responsible for what.
• Creating a social reality. Success in a car transmissions factory is fairly clearly defined by how many transmissions it sells. A university's success is less clearly defined, and moreover largely defined by the university itself; academic staff decide what will be taught and researched, and what counts as good teaching and research. This again makes it easier to create a reputation, by defining what you do as meritorious.
• Making your mark. Reputations are created by highly visible achievements: buildings opened, products launched etc.
• Meeting pseudo-targets. The Royal Navy in the nineteenth century typifies this. Promotion went to officers with the shiniest ships. So concerned were they with polish, and so long was it since anyone had fought a battle, that naval officers seemed to have forgotten what warships were for. They avoided gunnery practice in case the powder smoke spoiled their paintwork.
Recommendation: If you have any objective performance data, such as sales figures, compare it with the PA data. Where there is a large discrepancy between the two, ask the appraiser to explain.
CASE STUDY

Clive Fletcher describes a British 360° scheme that was not an unqualified success. He noted the tendency of people to adopt 360° enthusiastically, assuming that:
• the ratings will not suffer from halo;
• ratings will be related to other information about performance;
• the PA ratings will really assess the competences they are supposed to.
A major British organisation in the oil industry implemented a 360° scheme. The first version used 80 items to assess three competences:
• capacity: analytic ability and creative thinking
• achievement: drive, enthusiasm and resilience
• relationships: people management.
Each item was rated on a seven-point scale, from 'not at all' to 'an outstanding extent'. The targets were managers, who were rated by their own managers, peers and subordinates, and by themselves. The first version did not work at all well:
• The 360° did not measure the three separate competences.
• Only one theme emerged from the ratings: good manager versus bad manager; in other words, a classic halo effect. The 80 items only succeeded in measuring one global characteristic.
• Self and boss ratings did not agree at all.
• Self × peer ratings and self × subordinate ratings agreed, but not very well.

Fletcher then rewrote the 360°, dropping items that did not seem to be working very well, and got data from another, larger group of managers. The second version worked better but still posed a serious problem:
• When the target was being rated by his/her own manager, the older the target, the poorer the rating.
• When the target was being rated by his/her peers, the older the target, the better the rating.
As Fletcher notes, this seems to indicate that as people get older, they are more inclined to be positive in their judgement of peers, but less satisfied with the performance of subordinates. In other words, 360° possibly does exhibit an age bias, and the age bias possibly operates in different directions in different groups.
CHAPTER 15
Training for Testing and Assessment
OVERVIEW
• We describe examples of poor practice in the management of test takers.
• The BPS certification scheme for Competence in Occupational Testing has three levels, covering test administration, ability tests and personality tests.
• We give advice on test choice, training, and evaluation of training courses and trainers.
• A case study illustrates how to construct a test policy for an organisation.
INTRODUCTION

Why is training necessary to carry out testing and assessment? Consider the testing experience described below:

Sarah had applied for a job as Senior Catering Manager in a new recreation centre in her own town, 10 minutes away from her home by car. She was well qualified for the post, having obtained City and Guilds qualifications after leaving school before moving on to further professional qualifications in catering management. Aged 30, she felt that she was at her peak in terms of her personal skills as a chef and her management skills, having run a school kitchen employing seven staff for the past three years.

Sarah was sent a brief letter inviting her for an interview. She duly turned up on the day, quite confident and looking forward to the experience. There were seven other candidates, men and women, all gathered in a classroom of a local primary school. The candidates were sitting at school desks in rather uncomfortable, small chairs. The person in charge of the day introduced himself as John, 'a senior management consultant'. John explained the purpose of the day, saying it would start with psychometric testing, looking at verbal skills, numerical skills, spatial skills and personality. Then there would be an interview.
After lunch, each candidate would be asked to prepare a simple meal and give a 10-minute talk to the panel about 'Catering in the New Millennium'. The candidates began looking uncomfortable and one woman said, 'This isn't for me', stood up and walked out. (No mention of testing or a presentation had been made in the invitation letter.) John made some rather uncomplimentary remarks and handed over to his assistant, Jenny, for the testing session to begin.

Jenny seemed nervous and was very quiet as she handed out pencils, answer books and answer papers. She read instructions from a card very quickly, setting candidates the task of completing examples. When everyone had finished, she asked for any questions. There were none, although Sarah wanted to know if her example questions were right. Jenny then said, 'Begin' and the candidates started. Every five minutes Jenny announced the time remaining, rather disturbing Sarah's concentration. After 20 minutes the exercise stopped, papers were gathered in and the next set given out. The other two tests, numerical and spatial, were very difficult for Sarah. Once again, no explanation of correct example answers was offered. For the personality questionnaire Jenny said, 'You are going to enjoy this one, no right or wrong answers!' The testing session finished at 11:00 a.m.

The day continued, with interviews and lunch, and then a rather simple meal preparation, cottage pie, took place in the school kitchen. The day concluded with the presentation, 'Catering in the New Millennium', which Sarah did not enjoy as she was first to present, allowing minimal time for preparation. Candidates were dismissed at 4:00 p.m. No feedback on the tests was offered. Candidates were told that they would be informed by telephone, then letter, of the result of the selection day.
We hope that such horrendous fictional experiences are not to be found in the real world of assessment. Clearly many errors were made in our example; you could probably count up to fifteen or so. What is clear is that John and Jenny had not been trained in the proper application of psychometric tests, or if they had, they were doubly incompetent.
BPS TEST USER CERTIFICATION

Scenarios like the above horror story were only too common before the advent of the BPS Certificates of Competence in Occupational Testing. They could still occur today if test users are not properly trained. This chapter looks at this certificate scheme and makes recommendations, which we hope will encourage good practice.

Problems can arise quite innocently in organisations when, due to natural mobility of staff, purchasers of assessment materials change job, materials are left behind 'in the cupboard' as it were, a need arises to use them, and the all-too-willing user has not been trained. Many factors surrounding the use of tests in organisations are extremely important. The early chapters of this book have considered:
• proper administration of tests and the need for standardisation of administration;
• choice of tests related to job analysis, job description and person specification;
• types of test, their reliability and validity.

All of these essential elements of competence are covered in the training leading to award of the certificates. However, much more is covered than is within the scope of this book, particularly around the handling of candidates. In the account above, Sarah was not well handled. Such incompetent handling of test candidates, among other reasons, is why the BPS has written standards of competence to be implemented by assessors on each certificate programme and to encourage potential test users to take training by qualified assessors, all Chartered Psychologist members of the British Psychological Society.

Readers are encouraged to visit the BPS website, www.psychtesting.org.uk, and download the following documents:
• General Information Pack—Certificate of Competence in Occupational Testing (Level A/B/B+/Full Level B)
• Psychological Testing: A Test Taker's Guide
• Psychological Testing: A Test User's Guide
Here you can read about the certification scheme and its background and rationale.
Recommendation: Go to the authoritative source of information in the UK, the British Psychological Society, and explore the resources offered.
TEST ADMINISTRATION

Test administration is covered in Chapter 2. A workshop is suggested for training in test administration. However, readers must not assume that anyone can deliver a training course to train managers for the Certificate of Competence in Test Administration (TAC), awarded by the BPS. Only assessors whose training programmes and outputs have been verified by BPS Verifiers can train for the award of the TAC. The Society keeps a register of Verified Assessors.
Recommendation: Training in Test Administration is a recommended way to start finding out about the whole world of testing. Contact the BPS as a very first step.
ABILITY TEST INTERPRETATION AND FEEDBACK

The Certificate of Competence in Occupational Testing (Level A) offered by the BPS is the next step towards achieving competence in the application of testing in occupational settings. For the award of this certificate, once again, training must be supplied by a Verified Assessor. It is a condition of qualification that Assessors are Chartered Psychologists whose materials and candidate outputs have been closely inspected (verified) by the panel of BPS Verifiers.

Training will cover the interpretation of ability tests and subsequent oral and written feedback to candidates and client organisations. Test results are interpreted with regard to the appropriate norm group, the comparison group against which the candidates' scores are compared. Oral and written feedback is essentially factual, reporting candidates' scores as, say, percentiles: the percentage of the comparison (norm) group scoring below the candidate. Other scoring systems may be used, e.g. T scores, stens and grades. These scoring systems have been discussed fully in Chapter 2.
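By way of illustration only (this is standard psychometric arithmetic, not BPS training material), the scoring systems mentioned above can all be derived from the candidate's raw score and the norm group's mean and standard deviation; the function name and figures below are our own.

from math import erf, sqrt

def norm_scores(raw, norm_mean, norm_sd):
    # Convert a raw score to z, T, sten and percentile, relative to a norm group.
    z = (raw - norm_mean) / norm_sd
    t_score = 50 + 10 * z                              # T scores: mean 50, SD 10
    sten = min(10, max(1, round(5.5 + 2 * z)))         # stens: mean 5.5, SD 2, range 1-10
    percentile = 100 * 0.5 * (1 + erf(z / sqrt(2)))    # % of norm group scoring below
    return z, t_score, sten, percentile

print(norm_scores(raw=34, norm_mean=28, norm_sd=6))    # z = 1.0, T = 60, sten 8, about the 84th percentile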
Recommendation: Achievement of the BPS Level A Certificate will give full training in the interpretation and feedback of ability tests.
PERSONALITY TEST INTERPRETATION AND FEEDBACK

The analysis, interpretation and feedback of personality tests is very different from that of ability tests. Because personality data is just that—personal—candidates are usually deeply interested and engaged by their results. From this perspective, i.e. the personal, delegates often regard the Level B Certificate training as more interesting. Although many personality instruments are constructed in a similar way to ability instruments, the training programme leading to award of the BPS Certificate of Competence at Intermediate Level B contains very little mention of statistics. In the experience of the writers as assessors, such avoidance of statistics is much appreciated by Level B delegates!

Because interpretation and feedback of personality instruments is more sensitive and 'personal' than that of ability tests, the feedback session will be longer and will employ skills similar to those discussed in Chapter 13. The key skill is to let the client do most of the talking in confirming/disconfirming and validating results. By adopting such an ethical approach to feedback, the status and integrity of personality testing is maintained. The Level B programme covers these essential skills in theory and practice. For a fuller discussion of feedback, see Cripps (2004).
Recommendation: Achievement of the BPS Intermediate Level B Certificate will give full training in the interpretation and feedback of personality tests.
PERSONALITY TEST CHOICE AND EVALUATION

The choice of personality tests is largely a matter of training, experience, familiarity with the instrument and relevance to the context of test use. The trained user will generally be familiar with several different tests. Most test publishers produce tests of maximum performance (ability tests) and tests of typical performance (personality tests).

Training in test use is important for several reasons. First, managers who are fully trained, and therefore competent in the use of occupational tests, will be exposed to a whole range of tests during their training. Secondly, by understanding the principles and concepts behind reliability and validity, they will be able to evaluate such tests. There are literally thousands of tests on the market, and finding a way through this quantity of tests can be difficult. Thirdly, tests are usually expensive, and therefore making a wrong choice can be costly. The BPS Psychological Testing Centre has reviews of tests available on its website. At around £5 each, these test reviews are extremely good value and make evaluation quite straightforward.
Recommendation: Visit www.psychtesting.org.uk to view tests for choice and evaluation.
WHO CAN ASSESS AND TRAIN?

It will be clear by now that, in order to train managers towards the Certificates of Competence in Occupational Testing awarded by the BPS, Assessors must be Chartered Psychologists whose materials have been verified by the Society's Verifiers. The BPS maintains a register of Assessors who meet verification criteria. Most of the test publishers provide acceptable training.
Recommendation: Training for the Certificates of Competence in Occupational Testing, awarded by the BPS, must be delivered by a Chartered Psychologist whose training materials and outputs have been verified by the Society Verifiers.
HOW TO EVALUATE TRAINING COURSES AND TRAINERS

A list of training courses on testing and related topics is published in Selection and Development Review. You should check with course organisers that their programme will confirm competence. The BPS does not evaluate training courses;
such evaluation is outside its remit. The Society does keep a register of test users who have obtained their certificates. Communication with test users on this register should allow you to talk about various providers. There are several considerations in choosing a training course: location, public or 'in-company' courses, group or one-to-one training, and price are all important.
Recommendation: Visit the BPS website and talk to holders of the Certificates about their experiences during training.
CASE STUDY

An Assessment Policy for the New Millennium

James Nash is Senior HR Manager in a large data services company employing over 1500 people on three sites in Europe, with headquarters in Birmingham. James is well qualified, with an MSc in HRM, an MBA and chartered membership of the CIPD. At 42, James is happy now to stay in post and work towards a seat on the main Board as HR Director. James is about to present a policy statement on assessment testing for recruitment, selection and development at the Board meeting in one month's time.

James has been a test user at Intermediate Level B for some while and has developed a sound working relationship with a test publisher supplying him with ability, personality and motivation questionnaires. He has done most of the work regarding assessment himself, but is finding it increasingly arduous and time consuming, though well worth it in terms of the quality of people employed. Since he started in the company ten years ago he has kept all the data on every test administered. By way of justifying test use financially, he recently conducted a utility analysis and was pleasantly surprised to find that, through the use of tests in selection, he could show that the bottom line per staff member recruited using tests increased by approximately £5000. In discussion with his Financial Director, he was encouraged to put a case to the Board at their next meeting for an expansion of testing services.

At the hub of James's plans was an idea to delegate more responsibility for implementing his test policy to others in HR via training. He realised that he needed staff on each of the three sites trained to all levels of the BPS Certificates of Competence in Occupational Testing. Thinking in terms of qualifications and continuity, he decided that on each site there should be:
• one person trained to Intermediate Level B;
• two people trained to Level A;
• three people trained as Test Administrators.
The process of assessment would work like this:
• The Level B test user on each site would design and oversee the assessment programme, control test materials and manage the Level A test users. This person would give all personality test feedback orally to candidates and report results directly to James for personnel decision making. The Level B test user would write reports on selected candidates for scrutiny and decision making by James.
• The Level A test users would manage the test takers, external and internal candidates for selection, and give ability test feedback to rejected candidates.
• The trained Test Administration Certificate holders would administer tests to all candidates for selection (internal and external), promotion and development purposes. They would score tests, giving ability data to the Level A holders and personality data to the Level B holder.
• James himself would manage the promotion and development candidates after their data and reports had been written up.
• James, after Board permission, would seek tenders from training providers who could train the 18 people to BPS standards of competence.
• Every 12 months James would update his utility analysis, producing bottom-line figures for the Board.
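The book does not show how James arrived at his £5000 figure, but utility analyses of this kind are commonly based on the Brogden–Cronbach–Gleser model. A minimal sketch in Python with invented figures (the function name and all numbers are ours, not James's):

def selection_utility_per_hire(validity, sd_performance_value,
                               mean_z_of_hires, cost_per_candidate,
                               selection_ratio):
    # Brogden-Cronbach-Gleser utility per person hired, per year (simplified).
    # validity: correlation between test score and job performance
    # sd_performance_value: SD of job performance expressed in money terms
    # mean_z_of_hires: average standard score of those selected
    gain = validity * sd_performance_value * mean_z_of_hires
    testing_cost = cost_per_candidate / selection_ratio   # candidates tested per hire
    return gain - testing_cost

# invented figures, roughly of the order James might have used
print(selection_utility_per_hire(validity=0.4, sd_performance_value=16000,
                                 mean_z_of_hires=0.8, cost_per_candidate=60,
                                 selection_ratio=0.25))   # about 4880 per hire per year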
CHAPTER 16
Professional and Ethical Issues
OVERVIEW
• We present a composite code of practice for assessment.
• We present a composite code of practice for appraisal.
• We criticise one or two details of existing codes.
• We discuss the issues of computer interpretation, shelf life of test data, obtaining consent, intrusion and stress.
• We note that ethical standards in assessment change over time.
INTRODUCTION
It is not difficult to find examples of ways of assessing people that we would regard as grossly unethical, but which were common practice in our lifetime, or our parents’ lifetime. First, some slightly absurd examples:
• A student seeking admission to a US clinical psychology programme arrives for an interview with Dr Smith, and is kept waiting an unreasonably long time. Another person in the waiting room engages him in conversation, about why he is there and how he feels about being kept waiting. Then the other person reveals that he is in fact Dr Smith. The applicant is left trying to recall what he might have said about Dr Smith, while coping with his interview proper.
• Shortly after the interview starts, the interviewer pretends to fall asleep. A short while later he ‘wakes up’ and asks the applicant ‘What are you doing here?’. If the applicant says ‘I’ve come for an interview’, the interviewer says ‘I know that. I meant why have you applied here and not to XYZ.’
We wonder what these interviewers were hoping to achieve. Now some more serious examples:
• In 1935, the Nazi government passed the Nuremberg laws prohibiting ‘Jewish’ people from being employed in the civil service, and defining as ‘Jewish’ anyone who had three or, in some circumstances, two ‘full-Jewish’ grandparents. Ancestry became a key selection test.
• In many parts of the USA until the 1950s married women were not hired as schoolteachers, and female teachers who married were dismissed. In the same era in Australia, women civil servants who married were automatically demoted one grade, and their salary reduced accordingly (Roe, 1956).
Business ethics is currently a big issue in the USA. Some large US companies now see that ethical behaviour can be good business; it gives the company a better image and a competitive advantage. Companies with more ethical HR practices have higher corporate reputation indices. This is partly a reaction against the win-at-any-cost mentality. It is also a reaction to growing workforce cynicism. Unethical behaviour causes some people to leave the organisation. Suppose these people are higher in conscientiousness. Chapter 4 shows more conscientious employees tend to be better employees, so unethical employers may be at greater risk of losing their better staff.
We can address the issue of ethics at several levels:
• The first is straightforward: ‘ethics’ means adhering to codes of conduct stated by professional bodies. We will offer a composite of these, for assessment in general, and testing and appraisal in particular.
• Secondly, we consider some issues that concern people in assessment.
• Finally, we consider broader issues in ethics, in particular who decides what is ethical, and how they decide.
CODES OF CONDUCT
In a code of conduct, we might find it useful to distinguish ethical, practical and legal issues. Advising employers to use reliable and valid tests is primarily practical advice. Giving applicants an unreliable test is a waste of their time as well as yours, but is not really an ethical issue. Nor do we really need to include in our code of ethics a rule against advertisements that exclude minorities or women: the law has taken care of that. A code of conduct is very useful to HR because it enables them to resist pressure from colleagues, possibly of higher status, to engage in poor practice. Our composite code is divided into eight main sections.
1. Planning Assessment
• Conduct a job or competence analysis, and use it to choose the selection assessments.
2. Choosing Individual Assessments
• Check assessments for likely adverse impact.
• Check that the assessment:
– is job related;
– does not have content which may be less familiar to minorities and women;
– does not require better command of English or higher educational qualifications than the job needs;
– is not unduly intrusive;
– does not cause unnecessary stress to applicants.
3. Choosing the Assessors
• Assessors who sift applications should be trained in recognising their ‘generalised assumptions and prejudices’.
• Interviewers should be trained.
• Anyone administering or interpreting psychological tests should be trained to the appropriate level.
• Assessors in Assessment Centres should be trained.
• People should not exceed their competence, for example in testing.
• Sifting and interviewing should be done by two or more assessors.
4. Before the Assessment
• Give applicants the chance to practise the tests where possible.
• Obtain applicants’ informed consent. Tell applicants:
– how they will be assessed, in reasonable detail, so they are well prepared for the test session;
– that they should make any special needs known;
– what will happen on the basis of the assessment (e.g. they will be offered a job, or not);
– how long the results will be kept;
– what feedback arrangements have been made;
– who will have access to results.
5. Following the Assessment
• Store data securely.
• Do not allow unauthorised or unqualified persons access to the data.
• Keep test and assessment data confidential.
• Only keep data for the agreed length of time.
• Do not use assessment data for any but the agreed purpose.
• Maintain comprehensive equal opportunities records.
6. Feedback
Following the assessment, all applicants should be offered feedback. The feedback should:
• be clear;
• be full;
• cover specific competences;
• make clear the implications of the assessment for the applicant;
• be in a style appropriate to the person’s level of understanding;
• be provided within four weeks of the assessment;
• be provided face to face if possible, by telephone otherwise.
(Feedback is recommended after psychological tests and assessment centres, but seems to be seen as less essential after other assessments.)
7. Performance Appraisal
• Ensure the PA instrument measures the competences employees are being appraised on.
• Ensure the PA instrument includes ‘cannot say’ or ‘no information’ options for the appraiser to use.
• Appraise on the basis of sufficient information.
• Appraise on the basis of representative information.
• Appraise on the basis of relevant information.
• Make an honest appraisal.
• Make an unbiased appraisal.
• Keep written and oral appraisals consistent.
• Present the appraisal as an opinion or hypothesis.
• Ensure the person who gives feedback is ‘sensitive’.
• Ensure that the organisation acts on training and development recommendations arising from the appraisal.
• Ensure that the organisation provides adequate resources to act on T&D recommendations.
8. 360° Feedback
• Use a minimum of 3–5 raters in each category (peers, subordinates).
• If there are fewer than 3–5 in a category, pool peers and subordinates.
• Ensure anonymity of peers and subordinates.
Recommendation: Ensure your organisation has a code of conduct for assessment, and that it is agreed to by all parties.
Questionable Elements in Existing Codes
Some codes contain recommendations that we have some doubts about. For example, the British Psychological Society tells us:
• Do not rely solely on psychological tests.
This advice is repeated, several times, in the list of competences for people using ability and personality tests. It is usually better to use more than one type of assessment; that is the whole point of the assessment centre method. But why are psychological tests singled out as not to be used in isolation? If you are assessing mental ability, a mental ability test is the best single way to do it: what is the point of supplementing it with a less accurate assessment such as an unstructured interview?
The BPS’s Code of Good Practice for Psychological Testing contains the recommendation:
• Give due consideration to factors such as gender, ethnicity, age, disability and special needs, educational background and level of ability in using and interpreting the results of tests.
What does this mean? Is the BPS suggesting the test user should expect a different, less adequate performance from women, minorities etc.? Presumably not. Or is the BPS suggesting that if the test user is using a personality test that produces gender differences in reported anxiety, the test user should remember this? And why the mention of ‘level of ability’?
The CRE used to recommend employers not to use tests that create adverse impact, i.e. where minorities score less well. We noted in Chapter 1 that adverse impact certainly is a problem, and that tests that do not create it are very desirable. However, adverse impact does not mean the assessment cannot be used at all. It will be necessary, as the Commission for Racial Equality notes, to prove the test is valid, which may prove difficult, and will certainly cost time and money. But adverse impact is not in itself a reason to avoid an assessment.
The CIPD’s Code on Psychological Testing suggests we should only use tests:
• which have been through a rigorous development process with proper regard to psychological knowledge and processes.
No less a body than the International Test Commission agrees, and tells us only to use tests of ‘proven quality’. Following this advice might need more expertise than
many test users have, or need, and—more crucially—much more information than many test publishers supply.
INDIVIDUAL ISSUES
Computer Testing and Interpretation
Computer testing and interpretation is more an issue of validity than of ethics. As the BPS notes, there are a lot of bad computer test interpretation programmes around. It is very easy to generate pages of impressive-sounding interpretative output: ‘a high score on scale X means you are outgoing, sociable, lively etc. A high score on X coupled with a low score on Y means you are sociable but tend to be rather bossy and dominating etc.’. Where does all this come from? A research programme on 100 high and 100 low scorers, how they behave in company and how others see them? Possibly. But more likely from the head of the person who wrote the programme, guided by what the manual says each scale is intended to assess. Quality of software is not primarily an ethical issue but one of validity. Presented with a detailed computer interpretation, ask the author, or yourself if the author is not around, the simple question ‘How do you know?’. As the BPS suggests, the basis of the interpretation given should be stated clearly.
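To make the ‘How do you know?’ question concrete, here is a deliberately naive sketch of how such narrative output is often produced: canned sentences keyed to score bands, written from the manual rather than from research on real high and low scorers. The scale names, cut-offs and wording are invented for illustration, not taken from any actual product.

```python
# A deliberately naive narrative generator: canned sentences keyed to score bands.
# Scale names, cut-offs and wording are invented for illustration.
CANNED_TEXT = {
    ("sociability", "high"): "is outgoing, lively and quick to build relationships",
    ("sociability", "low"):  "prefers working alone and may appear reserved in groups",
    ("dominance", "high"):   "tends to take charge and can come across as rather bossy",
    ("dominance", "low"):    "is accommodating and may avoid confrontation",
}

def band(score):
    """Crudely band a 1-10 score; nothing here rests on research evidence."""
    if score >= 7:
        return "high"
    if score <= 4:
        return "low"
    return "average"

def interpret(profile):
    """Turn a profile of scale scores into plausible-sounding prose."""
    sentences = []
    for scale, score in profile.items():
        key = (scale, band(score))
        if key in CANNED_TEXT:
            sentences.append(f"This candidate {CANNED_TEXT[key]}.")
    return " ".join(sentences)

print(interpret({"sociability": 8, "dominance": 9}))
```

Nothing in this little pipeline requires any evidence that high scorers really behave as described; that is precisely why the basis of a computer interpretation needs to be stated, and questioned.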
Recommendation: Always ask searching questions about computerised interpretation programmes.
Access to Tests
All codes say that only qualified persons should use psychological tests. In the UK this means people who hold the BPS Occupational Testing Certificate, for ability tests or personality tests. Test publishers will only supply tests to people who have been trained in their use. Supplying training is a major source of income for most test publishers.
‘Shelf Life’ of Test Data Codes often recommend that test data should not be kept for longer than 12–24 months. If a further assessment of someone is needed after this time, fresh tests should be done. The implication is the person may have changed. It is not entirely clear why we should think this. Ability test scores are fairly stable over long time spans. Personality profiles also remain fairly similar over periods of some years. This is probably more an issue of perceived fairness, not using old test data and
giving applicants another chance. More specific competences, which might be assessed by assessment centre or interview, may change more quickly, especially if the person has been on a training course, or had a lot of relevant experience.
Consent Applicants should consent to being assessed, especially to completing psychological tests. We have recommended that this consent be obtained twice, in the letter of invitation, and when the applicant arrives to be tested. The applicant is not in a very strong position here: if he/she declines to be tested, will his/her application be turned down? Quite possibly. This suggests some applicants submit themselves to assessments they would prefer not to undertake.
Intrusion/Privacy We can all agree assessment should not be ‘too’ intrusive, or invade the applicant’s privacy to an ‘unacceptable’ extent. We have more difficulty deciding where the boundary should be drawn. Most people would object to prospective employers assessing them using their credit history. Some people object to personality tests or biodata items. No one could reasonably object to being asked to demonstrate their keyboard skills, if needed for the job. But suppose credit history turned out to be a really good way of assessing key competences, such as organising oneself and getting things done on time. How does a credit check differ from a reference request? Both ask for personal information from a third party. We can make the same argument about questionnaire measures. If questions about thoughts and feelings and leisure activities can assess a key competence, e.g. the ability to cope with pressure, why should the employer be prevented from using them? And what is the difference between asking in interview ‘tell me how you react to pressure’, and asking on questionnaire ‘do you get upset if someone criticises you?’? We could even argue that questionnaires are less intrusive, because no one actually needs to know what answer you give to a particular question; the answer is added to a dozen others to generate a ‘resilience’ score. In an interview, on the other hand, the interviewer necessarily hears what you say about coping with criticism.
Stress We suggested that assessments should not cause ‘unnecessary’ stress. Assessment often proves stressful for applicants, in all sorts of ways: • they realise they are doing rather poorly on ability tests; • they realise they are not performing very well in a group exercise or presentation in an AC;
• they are put under pressure by the interviewer (not necessarily intentionally; they have claimed to know a lot about cost accounting, and the interviewer uncovers the fact that they do not); • they are put under pressure by other applicants; for example, the common scenario of the applicant at an assessment centre who thinks dominating the group or talking over everyone is the way to succeed, which can prove very stressful for the rest of the group. It would be very difficult to design an assessment that could not cause any stress to any applicant. Many jobs are stressful, so ability to cope with pressure is often a core competence, which should be assessed. We can only suggest that stress is a matter of judgement for the assessors: it all depends on how stressful the job is, and how stressful the assessment is. But it is very important to base the assessment on a competence analysis: what sort of pressure does the job create and how can we assess ability to cope with it? The OSS assessment centre for selecting spies used a stressful mock interrogation role play, because the ‘job’ might include facing a real interrogation. One occasionally encounters interviewers—not so often these days—who seem to like ‘putting people on the spot’ for the sake of it. It is very easy to unsettle someone in an interview by asking difficult or embarrassing questions. Applicants are unlikely to object, because they want to get the job, and the interviewer determines whether they do or not. Putting people under pressure without a clear purpose is poor practice, and potentially bad PR as well.
GENERAL PRINCIPLES Ethics has been defined as ‘concern with moral judgement and standards of conduct’. This definition does not help us all that much. It immediately poses the question of whose moral judgement and standards of conduct. Moral judgement and standards of conduct can change over time, and differ from one society to another. Employers in the UK and North America currently value ‘diversity’; Nazi Germany very conspicuously did not. Some parts of the world today have very different attitudes to employing women. Ethical standards are not fixed and absolute, but change from time to time and from country to country, a point described as ‘ethical relativism’.
Hard and Soft HR
Ethical standards also vary within a particular society. In particular, we can point to two different approaches to HR that contrast quite sharply in many places, commonly referred to as ‘hard’ and ‘soft’. ‘Hard’ HR:
• seeks to maximise productivity and profitability;
• sees the manager as answerable primarily to the shareholders;
• emphasises the scope and exercise of management prerogative;
• sees the worker as a commodity (or ‘resource’);
• behaves towards fellow human beings with the overriding objective of extracting added value.
The implications for assessment are:
• select the most effective workers by whatever means can achieve this;
• the only reason not to use any assessment system is if the law prohibits it.
‘Soft’ HR:
• seeks to win the hearts and minds of the workforce;
• seeks just distribution of rewards and risks between employer and employee;
• thinks it important to consider the interests of the workers;
• favours consultation and development;
• prefers to avoid sacking people unless really necessary.
The implications for assessment are:
• be prepared to take on less effective persons and hope to develop them;
• be slower to terminate unsatisfactory employees;
• be more concerned about perceived fairness of assessment;
• be more concerned about intrusion, privacy and stress in assessment.
The past 20 years have seen a move toward ‘harder’ approaches to HR, in the face of domestic and international competition, and changes in government policy. To some extent the ‘softer’ approach is linked to paternalistic private industry, a larger public sector, including in the UK the ‘nationalised’ industries, and greater power for employees through union membership, which has declined over the past 20 years.
CASE STUDY
It is often easier to define ethical standards by focusing on clearly unethical behaviour. We will start with two US surveys, then go on to a less formal UK survey of blatant and subtle unethical practice. The 1991 American survey (Danley et al., 1991) lists the ten most serious ethical situations, which include:
• favouritism in hiring, promotion (and elsewhere) caused by friendship with top management;
• sex and race discrimination in recruitment, selection or promotion;
• breach of confidentiality;
• allowing pay or promotion to be affected by non-performance factors in appraisal.
The rest of the list was less relevant: sexual harassment, inconsistent discipline etc. The 1997 survey (Mathis and Jackson, 1997) listed seven ‘specific ethical issues’ including:
• withholding information on a problem employee from another employer;
• investigating credit and crime records of applicants.
The other five again did not concern assessment, e.g. enforcing workplace smoking bans. The UK survey came up with:
• agreeing, or being asked to agree to, direct discrimination: ‘this job isn’t suitable for women because there’s a lot of swearing goes on’;
• collecting psychological test data on the assurance that they were for validation research, then using them for individual assessment;
• asking HR to test everyone, then find a reason to sack Jones;
• using 360° to find a reason to sack, on the argument that with enough views of enough aspects of Jones, management can always find something negative.
One abuse is subtler, but seems to us particularly common: using elaborate assessment to give an appearance of objectivity to a decision that has already been made:
• Five people are assessed by personality and ability tests and interview. The successful applicant has lower scores on the ability tests than some other applicants, is not an especially good match to the competences on the personality questionnaire, but does happen to be an internal candidate.
• Four people are assessed by two-day assessment centre, including group exercises, in-tray exercises, ability and personality tests, and interview. The successful candidate does conspicuously poorly on the in-tray exercise, and does not do significantly better on tests or group exercises. Someone who knows the organisation well had confidently predicted before the assessment that this person would be successful.
CHAPTER 17
The Future of Assessment
OVERVIEW
• We offer some thoughts about developments in the short term.
• We offer some speculations about possible technical developments.
• We offer some radical ideas about possible developments in assessment, including DNA testing.
• We outline some possible changes in the law concerning assessment.
• We outline some factors that might limit extension of legal controls.
• We consider the problem of reconciling efficient selection and full employment.
• We describe the terminally sick organisation.
INTRODUCTION Crystal ball gazing is always fun, occasionally alarming and often inaccurate. We can look into the future in the short term, and in the longer term. We can consider what is technically feasible, and what is likely to be accepted. We can also try to predict changes in the law as it relates to assessment and selection.
THE SHORT TERM We can list some likely trends in the next ten years in how assessment is done: • A continuing proliferation of new psychological tests, few of which offer any real improvement over the ones we have now. Many test users see advantages in devising their own tests. They can be certain applicants have not been tested with them before. They have more control over the materials they use, and avoid arbitrary restrictions imposed by publishers, most notoriously on the right to train. • A continuing decline in the use of references, as more and more employers provide only ‘name, rank and number’ minimal references. Consequently, fewer and fewer employers will find it useful to ask for references.
• Increasing use of computers and the web, especially in the recruitment and sifting stages. Perhaps also a dawning awareness that computerised sifting needs to prove that it works. • More awareness of the importance of teams in work, and the importance of selecting teams rather than individuals. • Grade inflation in school and university exams will devalue educational qualifications, and encourage more employers to use ability tests. Already some universities are considering whether to use tests to select students, saying the school exams do not differentiate. Some employers are saying the same about university degrees. We can also list some likely trends in what is assessed: • A continuing emphasis on assessing the person, as a person, rather than assessing specific abilities that are very closely job related. Employers need people who can adapt to new skills, fill a variety of roles etc. • More emphasis on assessing applicants’ fit to the organisation’s culture. • More emphasis on selecting for qualities other than technical competence or work performance. Some employees need to be able to cope with stress. Others need to be able to cope with aggressive or violent customers. • Increasing globalisation has created a need for more pan-cultural assessments. We already have the Global Personality Inventory (described in Chapter 4); we are likely to see more ‘global’ psychological tests, interview systems, assessment centre programmes etc. • Increasing globalisation might make it necessary to assess more specifically the ability to fit into other cultures, or work ‘away from home’, an issue that arises already with ex-patriate staff. • An increasing emphasis on security may create a need to assess for security risk. This is likely to prove very difficult, for several reasons: faking good and concealment, links with ethnicity, and a low base rate, hence a lot of false positives.
TECHNICAL REFINEMENTS We are likely to see some technical refinements.
Voice Recognition and Transcription Software Voice recognition and transcription software is now available, so it may soon be possible to generate transcripts of interviews, group discussions and other AC exercises, very quickly and cheaply. This will be useful at several levels: • In case of dispute about interview questions. Did the interviewer ask about child care arrangements? How exactly was the question phrased? • For quality checks on interviewers.
• To extract far more information, and more accurate information, from interviews and group exercises. • To allow interviews to be assessed later, or by other people. • To give applicants more comprehensive feedback on their performance.
Computer Simulations
Computers can now generate realistic moving images of people moving and talking; this is already done in computer games and in the cinema. It may be possible to generate role plays or other exercises using computer-generated ‘customers’, ‘complainants’ or ‘appraisees’. These would be more convenient and economical to use, and more consistent than a human role player. The problem with simulations is not so much generating the images and sounds as providing a plausible and realistic script. For example, in a sales role play, if the applicant lists four good reasons for buying the product, the ‘customer’ agrees; if the applicant is being too ‘pushy’, the ‘customer’ becomes defensive or uncooperative. A computerised role play would need to identify what the applicant has said, categorise it as resistance, willingness or uncertainty, and then select the right response to make.
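As a sketch of the ‘categorise, then respond’ loop just described, the fragment below infers the simulated customer’s state (resistance, willingness or uncertainty) from what the applicant has just said and picks a scripted reply. Keyword matching stands in for the genuinely hard language-understanding problem, and all the phrases and replies are invented for illustration.

```python
# Sketch of the categorise-then-respond loop for a computer-generated 'customer'.
# Keyword matching stands in for the real language-understanding problem;
# the phrases and replies are invented for illustration.
RESPONSES = {
    "willingness": "That does sound useful. What would the next step be?",
    "resistance":  "I'd rather not be rushed into anything, thank you.",
    "uncertainty": "I'm not sure I follow. Could you explain that again?",
}

def customer_state(applicant_utterance):
    """Infer the simulated customer's reaction to what the applicant just said."""
    text = applicant_utterance.lower()
    if any(phrase in text for phrase in ("you must", "sign today", "last chance")):
        return "resistance"    # the applicant is being too pushy
    if any(phrase in text for phrase in ("saves you", "benefit", "guarantee")):
        return "willingness"   # the applicant has given a concrete reason to buy
    return "uncertainty"

def customer_reply(applicant_utterance):
    return RESPONSES[customer_state(applicant_utterance)]

print(customer_reply("This package saves you two hours of admin a week."))
```

Even this toy version shows where the difficulty lies: the response table is trivial to write, but deciding which state an open-ended utterance belongs to is not.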
MORE RADICAL NEW POSSIBILITIES These possibilities are radical only in the sense that they cannot be used at present. They already exist, or very soon will. Many would be illegal under present Data Protection laws.
Enhanced Biodata Systems A wealth of potentially useful information exists, through computerised credit systems, which can record what people spend their money on, and so reveal their leisure interests, drinking habits etc. This information is already used extensively by market researchers. It could possibly be used to assess people: what you choose to spend your money on may be very revealing.
Health and Physical Fitness If we assume fit people work better, and are less likely to be off work sick, it is logical to assess applicants’ physical health, and also their weight, smoking, drinking and any other behaviours that affect health. This information already exists in medical files, which employers have no access to at present.
Performance Appraisal Prospective employers might find present and previous employers’ performance appraisal records very useful, especially if the information were accurate and not too ‘lenient’. Other personnel records might also be useful: sickness, absence, lateness, disciplinary etc. Incorporating PA data has been suggested in the USA as a way of reviving the traditional reference letter.
CCTV
The Thought Police in George Orwell’s 1984 could watch everyone all the time: ‘You had to live . . . in the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinized’. Many public places now have cameras that can watch us and listen to us. Face recognition software may make it possible to track people’s movements. We have the makings of an unfakeable input to a possible assessment system. The employer can find out how applicants spend their spare time, and who their friends and associates are. As the old saying has it: ‘know a man by the company he keeps’.
DNA We have known for some time that individual differences in ability have a substantial heritable element. Research has now identified genes associated with high and low mental ability. In the future it may be possible to assess mental ability from a sliver of skin or drop of saliva. Personality, as assessed by inventory, also has a substantial heritable element, a fact which often surprises people who think personality is entirely shaped by upbringing. Differences in extraversion, anxiousness, conscientiousness etc. are to some extent set at the moment of conception (but developed and refined by life experiences). This implies research will eventually identify particular genes associated with the differences between people we describe as personality, and that it may eventually be possible to assess personality by DNA testing. DNA testing of ability and personality will have some striking features. • It will bypass entirely many measurement problems of mental ability, such as item bias, test anxiety or motivation. DNA testing is entirely passive, and can be done without the cooperation or even knowledge of the person tested. • It will also bypass entirely the faking problem in personality assessment. • DNA testing will assess inborn potential, unaffected by culture, upbringing, education and training, or social disadvantage, as well as accident, injury or illness. There could be big differences between potential and actual ability in some people (which might leave a role for conventional tests). • We inherit biological differences, which underlie psychological differences, in ways we do not understand very clearly yet. A DNA account of personality may
bear no resemblance at all to the big five, or the 16PF, or any other current model of personality. It may bear little resemblance either to the everyday words we use to describe personality. There might be a gene corresponding to what we call ‘pushy’, or there might be ten (if pushiness has many different origins), or there might be none (if pushiness is entirely learned behaviour). • If DNA testing worked, it could put many psychologists out of business! DNA testing might also prove useful for some more specific points. For example, we may find a link with certain types of violent behaviour. Many US employers seek ways of screening applicants for possible violent tendencies, and can get in trouble if they do not use something that might have predicted subsequent violence. On the other hand, a genetic basis for violent behaviour might make it look more like a disability, so screening for it would create a different set of legal problems. DNA testing of ability and personality is certain to be very controversial, and may be closely regulated, or even prohibited.
CHANGES IN THE LAW We can envisage some likely or possible changes in the law regulating assessment.
Limits on Particular Assessments Restrictions on the use of psychological tests are always a possibility. Psychological tests and psychologists are widely mistrusted. There is always the risk of a scandal blowing up, that results in restrictions on use of tests, either through legislation or a court ruling.
Limits on What You Can Assess People By We already have some legal limits on the information you can use to assess people: • We are specifically prohibited from taking account of gender, except in rare special cases. The same is true of ethnicity and disability. • Using the applicant’s credit history is not allowed in the UK or parts of the USA. (To be precise, using credit checking agency information is not allowed in the UK, rather the credit history itself.) It has been suggested that applicants should not be assessed on any ‘non-job-relevant’ activities, which encompasses the applicant’s leisure interests, lifestyle and possibly also personality. This would severely restrict most ‘sifting’ systems, which often rely heavily on non-work activity. Brenkert (1993) argues that the ‘commercial relationship does not entitle [the employer] to probe the attitudes, motives, and beliefs of the person beyond their own
statements, record of past actions and the observations of others’. This excludes: the polygraph, drug use testing, genetic screening and personality inventories. Brenkert’s argument would restrict assessment to educational achievement and work experience, results of technical and cognitive tests, in-tray exercises, role playing and group exercises, as well as references and interview structured around the candidate’s job knowledge. This links to the idea that we should only assess the specific ability to carry out the job activities, and should therefore only assess specific knowledge and skills. Many employers want also to assess applicants’ longer-term potential. Some employers already avoid assessing applicants on things they cannot control, such as family background.
Protected Minorities We may see more protected minorities created. Gender, ethnicity and disability have recently been joined by religion and sexual orientation in the UK. Age is scheduled to be added within a few years. What other groups might acquire protection? • Social exclusion. In the UK, social exclusion and disadvantage are a concern of the present government, and have been specifically mentioned in university admission. Why not employment also? If people from certain backgrounds find it difficult to pass university entrance tests, and need special provision, might not the same be true of employment tests? Social exclusion and disadvantage may prove more difficult to define and assess than gender or ethnicity. • Low intelligence. While some psychologists and employers seem happy with the idea of differences in intelligence, the government and the general population seem either unaware or unreceptive. Characterising people as unintelligent, or explaining their behaviour as caused by low intelligence, seems increasingly politically unacceptable. It seems unlikely that people of low intelligence would be listed directly as a protected minority, because that implies accepting the concept, and perhaps using the tests. However, it is more possible that something closely linked to low intelligence might acquire protected minority status, perhaps low educational achievement. • Criminal behaviour. People already have the right not to report some criminal convictions, in the interests of helping them get work. At present in the UK, only fairly trivial offences can lapse in this way.
Ethnic Differentiation At present equal opportunities thinking tends to divide people into white and non-white when examining workforce composition. The ‘census’ categories, used on many UK equal opportunities monitoring forms, distinguish half a dozen different ethnic
minorities. Sue Scott’s analysis of graduate recruitment (described in Chapter 8) has shown these different minorities differ a lot in success in selection programmes. It may be necessary in future to start considering more closely adverse impact on each minority group. We might also possibly discern a trend for more and more, and hence smaller and smaller, groups to see themselves as distinct, and deserving their correct representation in employment.
Privacy The idea of a right to privacy is gaining ground, and could have implications for selection assessments. Personality tests in particular may be especially vulnerable because they can appear very intrusive to applicants, as well as not very job related. Or, as we have noted, selection assessment could be restricted to knowledge and skills, and using information about the person outside work be banned.
Data Protection Employees now have right of access to personnel files. Applicants have right of access to interview notes, test data and (probably) references. Some organisations list specified materials, e.g. interview notes, that should be retained in case the applicant wants to see them, and assessors are warned not to write down anything unsuitable. Suppose assessors were required to keep a detailed record of the whole process, including possibly a record of discussions about which of two equally matched applicants gets the job and why.
Adverse Impact We may see more widespread and systematic adverse impact analyses in the UK. The Sex and Power survey by the EOC has recently (January 2004) documented how few women have reached the top ranks of the police, judiciary, universities, media etc. (Equal Opportunities Commission, 2004). Adverse impact analyses based on ethnicity have been surprisingly few in the UK, although it seems likely a presumption of discrimination could be established fairly easily in many sectors, and within many organisations.
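For readers who want to run the kind of adverse impact analysis discussed here, the sketch below compares selection rates across applicant groups and flags any group whose rate falls below four-fifths of the best-treated group’s rate, the familiar US rule of thumb. The figures are invented, and the four-fifths convention is offered only as a rough screening check, not as a statement of UK law.

```python
# Illustrative adverse impact check: compare selection rates by group and flag
# any group falling below four-fifths (80%) of the best-treated group's rate.
# Applicant and hire counts are invented for illustration.
applicants = {"Group A": 200, "Group B": 80, "Group C": 40}
hired      = {"Group A": 50,  "Group B": 12, "Group C": 4}

rates = {group: hired[group] / applicants[group] for group in applicants}
best_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best_rate
    flag = "  <-- possible adverse impact" if impact_ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f}{flag}")
```

A flagged ratio is a prompt to investigate which stage of the selection process is producing it, not proof of discrimination in itself.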
References It has been suggested in the USA that employers should be obliged to write references that would have to include certain information, e.g. reason for termination, any history of violence, possibly even performance appraisal data.
We can also discern some factors which might set limits to legal controls on selection.
Globalisation Europe and the USA have to compete with countries where wages are lower and employers less regulated. Organisations that find it too expensive or onerous to employ people in Europe or North America can move their operations to cheaper, less extensively regulated countries. Or they face the risk of being unable to compete and going out of business. John Hunter and Frank Schmidt in 1996 suggested America should consider a two-tier economy in ability and fair employment law. The first tier is international, where the country must be competitive, so employers must be free to select the most able. The second tier is the domestic economy, which is not subject to foreign competition, and where the reduced efficiency caused by employing the less able will cause less harm. What sectors fall into this second tier? Hunter and Schmidt mention only catering, insurance and hairdressing. A country that was very set on fair employment could protect more industries from foreign competition by trade barriers.
‘Backlash’ The past few years have seen some rethinking on preferential treatment in the USA, in education rather than employment. California in 1997 abandoned ethnic minority quotas in education and public employment. Several other states have done the same, and more are said to be contemplating it.
‘Emergencies’ In wartime, people’s rights tend to be suddenly and drastically curtailed. When World War II started, the British government quickly gave itself powers to imprison people without trial, and to direct people into any employment where they were needed. The elaborate system of fair employment legislation could disappear very quickly if it seemed to be hindering the country’s ability to cope with an external threat, international terrorism for example.
The Broader Historical Perspective The developed countries are today very concerned with rights and equality. We do not need to look very far back in our own history to find a world that was not at all concerned with equality or human rights. It is also very easy to point to many other countries where people presently have few rights.
BROADER STRATEGIC ISSUES Psychological Contract At one time employees and employers had a ‘psychological contract’: the employee gave loyalty and commitment; in exchange the employer provided a job for life, with training and promotion etc. In the hard times of the 1980s many employees found the psychological contract ‘wasn’t worth the paper it wasn’t written on’. When employers needed to, they broke the contract and made people redundant, in large numbers. The breaking of the psychological contract should make employees view the organisation differently, perhaps more realistically. Employees might be less willing to go beyond their contractual obligations. This might reduce the scope for personality measures, because individual differences in organisational citizenship might be less important, or less readily translated into differences in performance.
Power Balance
In the UK since 1979 the power of the trade unions has declined greatly, which has correspondingly increased the power of management. Organisations are free to pursue productivity and profitability. Unsatisfactory or superfluous employees can be ‘released’. Performance appraisal systems can be introduced and used to determine pay and promotion. It could be argued that the easier it is to terminate unsatisfactory staff, the less vital effective selection becomes.
Core and Fringe Workers Many organisations are outsourcing as many functions as possible. They do not employ canteen staff; they contract out catering, to whoever offers to do it for the least money. The catering staff work for subcontractors, and typically get paid less, and work under less favourable terms and conditions, than if they were employed by the organisation. They have become fringe workers, with no long-term security of employment or career structure. A much smaller proportion of people are core workers, who are full-time, long(er)-term employees. The fringe workers are often on minimum wages. Minimum wages are fine for students and young people, but not generally enough to support a household and a family. Many fringe workers are employed as ‘temps’ by temporary employment agencies. Fringe workers may need less careful selection, because the employer has no long-term commitment to them. They can be assessed by job tryout, and ‘let go’ if they do not perform acceptably.
‘Home’ and ‘Tele’-Working As we have been told frequently, many types of work can be done from home; the organisation can avoid the expense of providing premises, and workers can avoid the hassle of commuting. This has two possible implications for assessment. The organisation clearly will need people who are able and willing to work without supervision, so may select for time management and conscientiousness. But if the ‘tele-workers’ are self employed or casuals, the organisation might not need to assess them at all; if they do not perform they are not given any more work.
Rising Unemployment An employer who succeeds in recruiting able, productive workers needs fewer of them. If all employers use highly accurate tests to select very productive workers, the number of jobs will shrink, creating more unemployment. If employers started exchanging information, the untalented would find themselves never even being shortlisted. This has some alarming implications: • The creation of a steadily growing, unemployed, disillusioned and resentful underclass. • What will all those unemployed people do? Will the government have to bring in work programmes for the disadvantaged to keep them occupied and out of trouble? • At the other end of the distribution of ability, a shrinking workforce, of more able people, works harder and longer to maximise productivity. In the process, they wear themselves out, and have no time left to enjoy life. Many managers already see this happening to themselves. • If fewer and fewer people produce more and more, who is going to buy it? How are they going to pay for it?
An Unemployable Minority? Nearly seventy years ago Raymond Cattell (1936), author of the 16PF questionnaire, made some pessimistic comments about employment prospects for people with limited mental ability in a complex industrial society: ‘the person of limited intelligence is not so cheap an employee as he at first appears. His accident proneness is high and he cannot adapt himself to changes in method’. More recently Linda Gottfredson (1997) has raised the same issue. She notes that the American armed services have three times made the experiment of employing ‘low-aptitude’ recruits, once when short of recruits during World War II, once as an idealistic experiment during the 1960s, and once by mistake in the early 1980s when they miscalculated their norms. Gottfredson says ‘these men were very difficult and costly to train, could not learn certain specialities, and performed at a lower average level once on a job’. Low-aptitude recruits take between two and five times as long to train, and their training might need to be stripped of anything theoretical or abstract. What is the
threshold of unemployability in terms of tested intelligence? Cattell estimated it at IQ 85, while Gottfredson mentions a figure of IQ 80. However, there is no research on what proportion of persons with low IQs are unemployed or unemployable.
More Unemployable Minorities? Personality tests raise the same alarming possibility: a section of the population whose employment prospects will be rendered less promising by having characteristics few employers will want. Perhaps people with conscientiousness scores in the bottom 15% will find themselves not in great demand. They will join the 15% or so whose low mental ability places them in the same position. And because there is no correlation between mental ability and conscientiousness, they will largely be a different 15%. Suppose they are then joined by high scorers on neuroticism? Another 15%—a different 15% again—with poor employment prospects. An increasing proportion of humanity could turn out to be people employers are not very keen on employing. Yet most people see employment as a right, and most governments assume everyone will be, or should be, employed. The government’s answer to this problem tends to be ‘training’: provide more training courses for the hard-to-place ‘job seeker’. Training may help, although no follow-up of government training schemes seems to have been reported. Training may not solve the problem altogether, because intelligence and all five basic dimensions of personality have a substantial heritable element, which implies there may be limits on how much people can be changed.
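The ‘largely a different 15%’ point above can be given rough numbers. If low mental ability, very low conscientiousness and high neuroticism each affect about 15% of people, and the three are uncorrelated (an assumption that is only approximately true), the proportion of people falling into at least one of the three groups is about

1 − (0.85 × 0.85 × 0.85) ≈ 1 − 0.61 = 0.39,

that is, getting on for four people in ten; this is the sense in which an increasing proportion of humanity could find itself on few employers’ shortlists.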
THE TERMINALLY SICK ORGANISATION We tend to assume that all employers genuinely want the best applicants; Northcote Parkinson (1958) thinks this very naive: ‘if the head of an organisation is second rate, he will see to it that his immediate staff are all third-rate: and they will, in turn, see to it that their subordinates are fourth rate’. Such organisations suffer injelitance—‘a disease of induced inferiority’, compounded equally of incompetence and jealousy. He lists a series of diagnostics of injelitance: • senior staff are plodding and dull; • other staff wandering aimlessly about, ‘giggling feebly’ and losing important documents; • surly porters and telephonists; • out-of-order lifts and out-of-date notices; • very low standards and aspirations: ‘little is being attempted, nothing is being achieved’; • terminal smugness: the organisation is doing a very good job, reaching its very low standards; anyone who disagrees is a troublemaker; • smugness is most apparent in the canteen, where the staff consume an ‘uneatable, nameless mess’, and congratulate themselves on having catering staff who can produce it so cheaply.
The ‘injelitant’ organisation does not fill up with talentless people accidentally: dull smug people at its core deliberately recruit even duller smugger people, to protect their own positions. And what better way can there be to perpetuate incompetence than the traditional interview? Mediocrities can be selected and promoted, using the code words ‘soundness’, ‘teamwork’ and ‘judgement’. And what greater threat to injelitant organisations can there be than objective tests of ability, which might introduce unwelcome, disruptive, ‘clever’ people? Parkinson thinks injelitance a terminal illness of organisations. It can only be cured by dismissing all the staff, and burning the buildings to the ground, for injelitance lingers in the fabric like dry rot. Parkinson does make one interesting suggestion: that infected personnel should be ‘dispatched with a warm testimonial to such rival institutions as are regarded with particular hostility’.
DISCUSSION POINTS
To conclude, ask yourself what form selection and assessment might take in environments less like our own. What assessments of workers might we make, and how might we make them:
• in 20 or 200 years’ time, when the oil has run out?
• in a society that uses slave labour?
• in a society that has unlimited free electricity?
• in the third world?
• in a Soviet labour camp?
• in a society that does not value equality or human rights?
References
Aamodt, M.G., Bryan, D.A. and Whitcomb, A.J. (1993) Predicting performance with letters of recommendation. Public Personnel Management, 22, 81–90. Allport, G.W. (1937) Personality: A Psychological Interpretation. Holt: New York. Arthur, W., Day, E.A., McNelly, T.L. and Edens, P.S. (2003) A meta-analysis of the criterion-related validity of assessment center dimensions. Personnel Psychology, 56, 125–154. Arvey, R.D. (1979) Fairness in Selecting Employees. Addison Wesley: Reading, MA. Ash, R.A. (1980) Self-assessments of five types of typing ability. Personnel Psychology, 33, 273–282. Atkins, P.W.B. and Wood, R.E. (2002) Self-versus others’ ratings as predictors of assessment center ratings: validation evidence from 360-degree feedback programs. Personnel Psychology, 55, 871–904. Baird, L.L. (1985) Do grades and tests predict adult accomplishment? Research in Higher Education, 23, 3–85. Barclay, J.M. (2001) Improving selection interviews with structure: organisations’ use of ‘behavioural’ interviews. Personnel Review, 30, 81–101. Barrett, G.V. (1992) Clarifying construct validity: definitions, processes, and models. Human Performance, 5, 13–58. Barrett, G.V. (1997) An historical perspective on the Nassau County Police entrance examination: Arnold v Ballard (1975) revisited. The Industrial/Organisational Psychologist. Available from: http://www.siop.org/tip/backissues/tipoct97/BARRET~1.htm. Barrick, M.R., Mount, M.K. and Strauss, J.P. (1994) Antecedents of involuntary turnover due to reduction in force. Personnel Psychology, 47, 515–535. Barrick, M.R., Mount, M.K. and Judge, T.A. (2001) Personality and performance at the beginning of the new millennium: what do we know and where do we go next? International Journal of Selection and Assessment, 9, 9–30. Barrick, M.R., Stewart, G.L., Neubert, M.J. and Mount, M.K. (1998) Relating member ability and personality to work team processes and effectiveness. Journal of Applied Psychology, 83, 377–391. Baxter, J.C., Brock, B., Hill, P.C. and Rozelle, R.M. (1981) Letters of recommendation: a question of value. Journal of Applied Psychology, 66, 296–301. Becker, T.E. and Colquitt, A.L. (1992) Potential versus actual faking of a biodata form: an analysis along several dimensions of item type. Personnel Psychology, 45, 389–406. Becker, W.C. (1960) The matching of behavior rating and questionnaire personality factors. Psychological Bulletin, 57, 201–212. Bliesener, T. (1996) Methodological moderators in validating biographical data in personnel selection. Journal of Occupational and Organizational Psychology, 69, 107–120. Bobko, P. and Roth, P.L. (1999) Derivation and implications of a meta-analytic matrix incorporating cognitive ability, alternative predictors, and job performance. Personnel Psychology, 52, 561–589.
Borman, W.C. (1974) The rating of individuals in organisations: an alternative approach. Organisational Behavior and Human Performance, 12, 105–124. Born, M.P., Kolk, N.J. and van der Flier, H. (2000) A Meta-analytic Study of Assessment Center Construct Validity. Paper given at 15th Annual SIOP conference, New Orleans. Boudreau, J.W., Boswell, W.R. and Judge, T.A. (2000) Effects of personality on executive career success in the United States and Europe. Journal of Vocational Behavior, 58, 53–81. Boyatzis, R. (1981) The competent manager. John Wiley & Sons: New York. Bray, D.W. and Grant, D.L. (1966) The assessment center in the measurement of potential for business management. Psychological Monographs, 80(625). Brenkert, G.G. (1993) Privacy, polygraphs and work. In White, T. (ed.) Business Ethics: A Philosophical Reader. Macmillan: New York. British Psychological Society (2003) Psychological Testing: A User’s Guide. British Psychological Society: Leicester. Available from: http://www.bps.org.uk/documents/Psychuser.pdf. Brogden, H.E. (1950) When testing pays off. Personnel Psychology, 2, 171–183. Callinan, M. and Robertson, I.T. (2000) Work sample testing. International Journal of Selection and Assessment, 8, 248–260. Campion, J.E. (1972) Work sampling for personnel selection. Journal of Applied Psychology, 56, 40–44. Campion, M.A., Pursell, E.D. and Brown, B.K. (1988) Structured interviewing: raising the psychometric qualities of the employment interview. Personnel Psychology, 41, 25–42. Carroll, S.J. and Nash, A.N. (1972) Effectiveness of a forced-choice reference check. Personnel Administration, 35, 42–46. Cascio, W. and Phillips, N.F. (1979) Performance testing: a rose among thorns? Personnel Psychology, 32, 751–766. Cattell, R.B. (1936) The Fight for Our National Intelligence. P.S. King: London. Cattell, R.B. (1965) The Scientific Analysis of Personality. Penguin: Harmondsworth. Chartered Institute of Personnel and Development (2002) Recruitment and Retention Survey. CIPD: London. Collins, J.M., Schmidt, F.L., Sanchez-Ku, M., Thomas, L., McDaniel, M.A. and Le, H. (2001) Can basic individual differences shed light on the construct meaning of assessment center evaluations? International Journal of Selection and Assessment, 11, 17–29. Conway, J.M., Jako, R.A. and Goodman, D.F. (1995) A meta-analysis of inter-rater and internal consistency reliability of selection interviews. Journal of Applied Psychology, 80, 565–579. Cook, M. (1995) Performance appraisal and true performance. Journal of Managerial Psychology, 10, 3–7. Cortina, J.M., Goldstein, N.B., Payne, S.C., Davison, H.K. and Gilliland, S.W. (2000) The incremental validity of interview scores over and above cognitive ability and conscientiousness scores. Personnel Psychology, 53, 325–352. Cripps, B.D. (2004) Test feedback: comparisons between Level A and Level B. Selection and Development Review, 20(5). Daley, D.M. (1987) Performance appraisal and the creation of training and development expectations: a weak link in MBO-based appraisal systems. Review of Public Personnel Administration, 8, 1–10. Dany, F. and Torchy, V. (1994) Recruitment and selection in Europe: policies, practices and methods. In Brewster, C. and Hegewisch, A. (eds) Policy and Practice in European Human Resource Management: The Price Waterhouse Cranfield Survey. Routledge: London. Danley, J., Harrick, E., Strickland, D. and Sullivan, G. (1991) HR ethical situations. Human Resources Management, 26 June, 1–12. Davison, H.K. and Burke, M.J. 
(2000) Sex discrimination in simulated employment contexts: a meta-analytic investigation. Journal of Vocational Behavior, 56, 225–248. Di Milia, D., Smith, P. and Brown, D.F. (1994) Management selection in Australia: a comparison with British and French findings. International Journal of Selection and Assessment, 2, 80–90. Drucker, P.F. (1973) Management: Tasks, Responsibilities and Practices. Harper and Row: New York.
Dunnette, M.D. (1972) Validity Study Results for Jobs Relevant to the Petroleum Refining Industry. American Petroleum Institute: Washington, DC. Dunnette, M.D., McCartney, J., Carlson, H.C. and Kirchner, W.K. (1962) A study of faking behavior on a forced-choice self-description checklist. Personnel Psychology, 15, 13–24. Ekman, P. and O’Sullivan, M. (1991) Who can catch a liar? American Psychologist, 46, 913–920. Ellingson, J.E., Sackett, P.R. and Hough, L.M. (1999) Social desirability corrections in personality measurement: issues of applicant comparison and construct validity. Journal of Applied Psychology, 84, 155–166. Equal Opportunities Commission (2004) Sex and Power: Who Runs Britain? Equal Opportunities Commission: Manchester. Fleishman, E.A. and Mumford, M.D. (1991) Evaluating classifications of job behavior: a construct validation of the ability requirement scales. Personnel Psychology, 44, 523–575. Fletcher, C. (1997) Appraisal: Routes to Improved Performance, 2nd edn. IPD: London. Frieze, I.H., Olson, J.E. and Russell, J. (1991) Attractiveness and income for men and women in management. Journal of Applied Social Psychology, 21, 1039–1057. Furnham, A. and Stringfield, P. (1994) Congruence of self and subordinate ratings of managerial practices as a correlate of supervisor evaluation. Journal of Occupational and Organisational Psychology, 67, 57–67. Gaugler, B.B., Rosenthal, D.B., Thornton, G.C. and Bentson, C. (1987) Meta-analysis of assessment center validity. Journal of Applied Psychology, 72, 493–511. Gfroerer, J., Gustin, J., Virag, T., Folsom, R. and Rachal, J.V. (1990) The 1990 National Household Survey on Drug Abuse. National Institute on Drug Abuse: Washington, DC. Ghiselli, E.E. (1966) The Validity of Occupational Aptitude Tests. John Wiley & Sons: New York. Goldsmith, D.B. (1922) The use of a personal history blank as a salesmanship test. Journal of Applied Psychology, 6, 149–155. Goleman, D. (1995) Emotional Intelligence. Bloomsbury: London. Gottfredson, L.S. (1997) Why g matters: the complexity of everyday life. Intelligence, 24, 79–132. Grote, C.L., Robiner, W.N. and Haut, A. (2001) Disclosure of negative information in the letters of recommendation: writers’ intentions and readers’ experience. Professional Psychology—Research and Practice, 32, 655–661. Harris, M.M. and Schaubroek, J. (1988) A meta-analysis of self–supervisor, self–peer, and peer–supervisor ratings. Personnel Psychology, 41, 43–62. Harris, M.M., Dworkin, J.B. and Park, J. (1990) Pre-employment screening procedures: how human resource managers perceive them. Journal of Business and Psychology, 4, 279–292. Hartigan, J.A. and Wigdor, A.K. (1989) Fairness in Employment Testing. National Academy Press: Washington, DC. Hodgkinson, G.P., Daley, N. and Payne, P.L. (1996) Knowledge of, and attitudes towards, the demographic time bomb. International Journal of Manpower, 16, 59–76. Hoffman, C.C. (1999) Generalizing physical ability test validity: a case study using test transportability, validity generalisation, and construct-related validation evidence. Personnel Psychology, 52, 1019–1041. Hogan, J. (1991) Physical abilities. In Dunnette, M.D. and Hough, L.M. (eds) Handbook of Industrial–Organisational Psychology. Consulting Psychologists Press: Palo Alto, CA. Hough, L.M. (1998) Effects of intentional distortion in personality measurement and evaluation of suggested palliatives. Human Performance, 11, 209–244. Howard, K. (2003) Psychometric Testing for the Housing Sector: A Comprehensive Review. 
Property People: London. HR Gateway Editorial (2004) No Brummies please we are professionals here. HR Gateway, Report N/4186, 12 February. Available from: http://www.hrgateway.co.uk. Huffcutt, A.I. and Woehr, D.J. (1999) Further analyses of employment interview validity: a quantitative evaluation of interviewer-related structuring methods. Journal of Organizational Behavior, 20, 549–560. Huffcutt, A.I., Conway, J.M., Roth, P.L. and Stone, N.J. (2001) Identification and meta-analytic assessment of psychological constructs measured in employment interviews. Journal of Applied Psychology, 86, 897–913.
Huffcutt, A.I., Conway, J.M., Roth, P.L. and Klehe, U.C. (2004) The impact of job complexity and study design on situational and behavior description interview validity. International Journal of Selection and Assessment. Hughes, J.F., Dunn, J.F. and Baxter, B. (1956) The validity of selection instruments under operating conditions. Personnel Psychology, 9, 321–324. Hunter, J.E. and Hunter, R.F. (1984) Validity and utility of alternate predictors of job performance. Psychological Bulletin, 96, 72–98. Hunter, J.E. and Schmidt, F.L. (1996) Intelligence and job performance: economic and social implications. Psychology, Public Policy and Law, 2, 447–472. International Test Commission (2001) International Guidelines for Test Use. Available from http:/ /www.intestcom.org/test_use_full.htm. IRS (2002) The check’s in the post. IRS Recruitment Review, 752, 34–42. Janz, T. (1982) Initial comparisons of patterned behavior description interviews versus unstructured interviews. Journal of Applied Psychology, 67, 577–580. Jones, A. and Harrison, E. (1982) Prediction of performance in initial officer training using reference reports. Journal of Occupational Psychology, 55, 35–42. Judge, T.A. and Bono, J.E. (2000) Five-factor model of personality and transformational leadership. Journal of Applied Psychology, 85, 751–765. Judge, T.A. and Ilies, R. (2002) Relationship of personality to performance motivation: a meta-analytic review. Journal of Applied Psychology, 87, 797–807. Judge, T.A., Martocchio, J.J. and Thoresen, C.J. (1997) Five-factor model of personality and employee absence. Journal of Applied Psychology, 82, 745–755. Judge, T.A., Higgins, C.A., Thoresen, C.J. and Barrick, M.R. (1999) The big five personality traits, general mental ability, and career success across the life span. Personnel Psychology, 52, 621–652. Kalin, R. and Rayko, D.S. (1978) Discrimination in evaluation against foreign accented job candidates. Psychological Reports, 43, 1203–1209. Kane, J.S. and Lawler, E.E. (1978) Methods of peer assessment. Psychological Bulletin, 85, 555–586. Keenan, T. (1997) Selection for potential: the case of graduate recruitment. In Anderson, N. and Herriott, P. (eds) International Handbook of Selection and Appraisal. John Wiley & Sons: Chichester. Kelly, G.A. (1955) A Theory of Personality: The Psychology of Personal Constructs. W.W. Norton: New York. Kraiger, K. and Ford, J.K. (1985) A meta-analysis of ratee race effects in performance ratings. Journal of Applied Psychology, 70, 56–65. Lance, C.E., Newbolt, W.H., Gatewood, R.D., Foster, M.S., French, N.R. and Smith, D.E. (2000) Assessment center exercise factors represent cross-situational specificity, not method bias. Human Performance, 12, 323–353. Latham, G.P. and Wexley, K.N. (1981) Increasing Productivity through Performance Appraisal. Addison Wesley: Reading, MA. Latham, G.P. and Wexley, K.N. (1994) Increasing Productivity through Performance Appraisal, 2nd edn. Addison Wesley: Reading Mass. Latham, G.P., Saari, L.M., Pursell, E.D. and Campion, M.A. (1980) The situational interview. Journal of Applied Psychology, 65, 422–427. Lewin, A.Y. and Zwany, A. (1976) Peer nominations: a model, literature critique and a paradigm for research. Personnel Psychology, 29, 423–447. Lievens, F. (2001a) Assessors and use of assessment center dimensions: a fresh look at a troubling issue. Journal of Organizational Behavior, 22, 203–221. Lievens, F. (2001b) Understanding the assessment centre process: where are we now? 
International Review of Industrial and Organisational Psychology, 16, 246–286. Lievens, F. and Conway, J.M. (2001) Dimension and exercise variance in assessment center scores: a large-scale evaluation of multitrait-multimethod studies. Journal of Applied Psychology, 86, 1202–1222.
Longenecker, C.O., Sims, H.P. and Goia, D.A. (1987) Behind the mask: the politics of employee appraisal. Academy of Management Executive, 1, 183–193. Mabe, P.A. and West, S.G. (1982) Validity of self-evaluation of ability: a review and meta-analysis. Journal of Applied Psychology, 67, 280–296. MacFarland, J. (2000) Rejected by Resumix. Available from: http://www.govexec.com/ dailyfed/0900/092500ff.htm. Machwirth, U., Schuler, H. and Moser, K. (1996) Entscheidungsprozesse bei der Analyse von Bewerbungsunterlagen. Diagnostica, 42, 220–241. Mael, F.A. and Ashworth, B.E. (1995) Loyal from day one: biodata, organisational identification, and turnover among newcomers. Personnel Psychology, 48, 309–333. Marlowe, C.M., Schneider, S.L. and Nelson, C.E. (1996) Gender and attractiveness in hiring decisions—are more experienced managers less biassed? Journal of Applied Psychology, 81, 11–21. Martin, B.A., Bowen, C.C. and Hunt, S.T. (2002) How effective are people at faking personality questionnaires? Personality and Individual Differences, 32, 247–256. Mathis, R.L. and Jackson, J.H. (1997) Human Resource Management, 3rd edn. McGraw Hill: New York. McClelland, D.C. (1971) The Achieving Society. Van Nostrand: Princeton, NJ. McDaniel, M.A., Morgeson, F.P., Finnegan, E.B., Campion, M.A. and Braverman, E.P. (2001) Use of situational judgement tests to predict job performance: a clarification of the literature. Journal of Applied Psychology, 86, 730–740. McEvoy, G.M. and Beatty, R.W. (1989) Assessment centers and subordinate appraisals of managers: a seven-year examination of predictive validity. Personnel Psychology, 42, 37–52. McLeod, E.M. (1968) A Ripper handwriting analysis. The Criminologist, Summer. McManus, M.A. and Kelly, M.L. (1999) Personality measures and biodata: evidence regarding their incremental predictive value in the life insurance industry. Personnel Psychology, 52, 137–148. Meritt-Haston, R. and Wexley, K.N. (1983) Educational requirements: legality and validity. Personnel Psychology, 36, 743–753. Meyer, H.H., Kay, E. and French, J.R.P. (1965) Split roles in performance appraisal. Harvard Business Review, 43, 123–129. Miner, J.B. (1971) Personality tests as predictors of consulting success. Personnel Psychology, 24, 191–204. Mls, J. (1935) Intelligenz und Fahigkeit zum Kraftwagenlenken. Proceedings of the Eighth International Conference of Psychotechnics, Prague. Morris, B.S. (1949) Officer selection in the British Army 1942–1945. Occupational Psychology, 23, 219–234. Mosel, J.N. and Goheen, H.W. (1958) The validity of the Employment Recommendation Questionnaire in personnel selection. I. Skilled trades. Personnel Psychology, 11, 481–490. Moser, K. and Rhyssen, D. (2001) Referenzen als eignungsdiagnostische Methode. Zeitschrift fur Arbeits- und Organisationspsychologie, 45, 40–46. Mount, M.K., Witt, L.A. and Barrick, M.R. (2000) Incremental validity of empirically keyed biodata scales over GMA and the five factor personality constructs. Personnel Psychology, 53, 299–323. Murphy, K.R. and Cleveland, J.N. (1995) Understanding Performance Appraisal: Social, Organisational, and Goal-Based Perspectives. Sage: Thousand Oaks, CA. Murphy, K.R. and DeShon, R. (2000) Interrater correlations do not estimate the reliability of job performance ratings. Personnel Psychology, 53, 873–900. Neter, E. and Ben-Shakhar, G. (1989) The predictive validity of graphological inferences: a meta-analytic approach. Personality and Individual Differences, 10, 737–745. Neuman, G.A. and Wright, J. 
(1999) Team effectiveness: beyond skills and cognitive ability. Journal of Applied Psychology, 84, 376–389. Normand, J., Salyards, S.D. and Mahoney, J.J. (1990) An evaluation of preemployment drug testing. Journal of Applied Psychology, 75, 629–639.
Parkinson, C.N. (1958) Parkinson’s Law. John Murray: London. Pearlman, K., Schmidt, F.L. and Hunter, J.E. (1980) Validity generalisation results for test used to predict job proficiency and training success in clerical occupations. Journal of Applied Psychology, 65, 373–406. Peres, S.H. and Garcia, J.R. (1962) Validity and dimensions of descriptive adjectives used in reference letters for engineering applicants. Personnel Psychology, 15, 279–286. Posthuma, R.A., Morgeson, F.P. and Campion, M.A. (2002) Beyond employment interview validity: a comprehensive narrative review of recent research and trends over time. Personnel Psychology, 55, 1–81. Rayson, M., Holliman, D. and Belyavin, A. (2000) Development of physical selection procedures for the British Army. Phase 2: relationship between physical performance test and criterion tasks. Ergonomics, 43, 73–105. Reddy, M. (1987) The Manager’s Guide to Counselling at Work. Methuen: London. Reilly, R.R. and Chao, G.T. (1982) Validity and fairness of some alternative employee selection procedures. Personnel Psychology, 35, 1–62. Ritchie, R.J. and Moses, J.L. (1983) Assessment center correlates of women’s advancement into middle management: a 7-year longitudinal analysis. Journal of Applied Psychology, 68, 227–231. Robertson, I.T. and Downs, S. (1989) Work-sample tests of trainability: a meta-analysis. Journal of Applied Psychology, 74, 402–410. Robie, C., Schmit, M.J., Ryan, A.M. and Zickar, M.J. (2000) Effects of item context specificity on the measurement equivalence of a personality inventory. Organizational Research Methods, 3, 348–365. Roe, A. (1956) The Psychology of Occupations. John Wiley & Sons: New York. Roehling, M.V. (1999) Weight-based discrimination in employment: psychological and legal aspects. Personnel Psychology, 52, 969–1016. Roth, P.L. and Bobko, P. (2000) College grade point average as a personnel selection device: ethnic group differences and potential adverse impact. Journal of Applied Psychology, 85, 399–406. Roth, P.L., Van Iddekinge, C.H., Huffcutt, A.I., Eidson, C.E. and Bobko, P. (2002) Corrections for range restriction in structured interview ethnic group differences: the values may be larger than researchers thought. Journal of Applied Psychology, 87, 369–376. Ryan, A.M., Daum, D., Bauman, T., Grizek, M., Mattimore, K., Nalodka, T. and McCormick, S. (1995) Direct, indirect and controlled observation and rating accuracy. Journal of Applied Psychology, 80, 664–670. Rynes, S.L., Orlitsky, M.O. and Bretz, R.D. (1997) Experienced hiring practices versus college recruiting: practices and emerging trends. Personnel Psychology, 50, 309–339. Sackett, P.R., Gruys, M.L. and Ellingson, J.E. (1998) Ability–personality interactions when predicting job performance. Journal of Applied Psychology, 83, 545–556. Salgado, J.F. (1998) Big five personality dimensions and job performance in army and civil occupations: a European perspective. Human Performance, 11, 271–289. Salgado, J.F. and Moscoso, S. (2002) Comprehensive meta-analysis of the construct validity of the employment interview. European Journal of Work and Organizational Psychology, 11, 299–324. Schmidt, F.L. and Hunter, J.E. (1998) The validity and utility of selection methods in personnel psychology: practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274. Schmidt, F.L. and Rader, M. (1999) Exploring the boundary conditions for interview validity: meta-analytic validity findings for a new interview type. 
Personnel Psychology, 52, 445–465. Schmidt, F.L., Hunter, J.E., McKenzie, R.C. and Muldrow, T.W. (1979) Impact of valid selection procedures on work-force productivity. Journal of Applied Psychology, 64, 609–626. Schmidt, F.L., Hunter, J.E., Croll, P.R. and McKenzie, R.C. (1983) Estimation of employment test validities by expert judgement. Journal of Applied Psychology, 68, 590–601. Schmitt, N., Gooding, R.Z., Noe, R.A. and Kirsch, M. (1984) Meta-analyses of validity studies published between 1964 and 1982 and the investigation of study characteristics. Personnel Psychology, 37, 407–422.
Schmitt, N., Schneider, J.R., Cohen, S.A. (1990) Factors affecting validity of a regionally administered assessment center. Personnel Psychology, 43, 1–12. Schmitt, N., Clause, C.S. and Pulakos, E.D. (1996) Subgroup differences associated with different measures of some common job-relevant constructs. In Cooper, C.L. and Robertson, I.T. (eds) International Review of Industrial and Organizational Psychology, vol. 11, pp. 115–139. John Wiley & Sons: Chichester. Schneider, B., Smith, D.B., Taylor, S. and Fleenor, J. (1998) Personality and organisation: a test of the homogeneity of personality hypothesis. Journal of Applied Psychology, 83, 462–470. Scholz, G. and Schuler, H. (1993) Das nomologische Netzwerk des Assessment Centers: eine Metaanalyse. Zeitschrift fur Arbeits- und Organisationspsychologie, 37, 73–85. Schuler, H. and Moser, K. (1995) Die validitat des multimodalen interviews. Zeitschrift fur Arbeits- und Organisationspsychologie, 39, 2–12. Scott, S.J. (1997) Graduate selection procedures and ethnic minority applicants. MSc thesis, University of East London. Scott, W.D. (1915) The scientific selection of salesmen. Advertising and Selling, 25, 5–6, 94–99. Smith, M. (1994) A theory of the validity of predictors in selection. Journal of Organisational and Occupational Psychology, 67, 13–31. Sparrow, J., Patrick, J., Spurgeon, P. and Barwell, F. (1982) The use of job component analysis and related aptitudes on personnel selection. Journal of Occupational Psychology, 55, 157–164. Springbett, B.M. (1958) Factors affecting the final decision in the employment interview. Canadian Journal of Psychology, 12, 13–22. Spychalski, A.C., Quinones, M.A., Gaugler, B.B. and Pohley, K. (1997) A survey of assessment center practices in organizations in the United States. Personnel Psychology, 50, 71–90. Stinglhamber, F., Vandenberghe, C. and Brancart, S. (1999) Les reactions des candidats envers les techniques de selection du personnel: une etude dans un contexte francophone. Travail Humain, 62, 347–361. Stokes, G.S., Hogan, J.B. and Snell, A.F. (1993) Comparability of incumbent and applicant samples for the development of biodata keys: the influence of social desirability. Personnel Psychology, 46, 739–762. Taylor, P.J. and Small, B. (2002) Asking applicants what they would do versus what they did do: a meta-analytic comparison of situational and past behaviour employment interview questions. Journal of Occupational and Organizational Psychology, 74, 277–294. Taylor, P., Mills, A. and O’Driscoll, M. (1993) Personnel selection methods used by New Zealand organisations and personnel consulting firms. New Zealand Journal of Psychology, 22, 19–31. Terpstra, D.E. and Rozell, E.J. (1993) The relationship of staffing practices to organizational level measures of performance. Personnel Psychology, 46, 27–48. Terpstra, D.E. and Rozell, E.J. (1997) Why potentially effective staffing practices are seldom used. Public Personnel Management, 26, 483–495. Terpstra, D.E., Mohamed, A.A. and Kethley, R.B. (1999) An analysis of Federal court cases involving nine selection devices. International Journal of Selection and Assessment, 7, 26–34. Tett, R.P., Jackson, D.N. and Rothstein, M. (1991) Personality measures as predictors of job performance. Personnel Psychology, 44, 407–421. Tiffin, J. (1943) Industrial Psychology. Prentice Hall: New York. Tziner, A. and Dolan, S. (1982) Evaluation of a traditional selection system in predicting success of females in officer training. Journal of Occupational Psychology, 55, 269–275. 
US Merit Systems Protection Board (2000) The Federal Workforce for the 21st Century: Results of the Merit Principles Survey 2000. US Merit Systems Protection Board: Washington, DC. Vernon, P.E. (1950) The validation of the Civil Service Selection Board procedures. Occupational Psychology, 24, 75–95. Wiesner, W.H. and Cronshaw, S.F. (1988) A meta-analytic investigation of the impact of interview format and degree of structure on the validity of the employment interview. Journal of Occupational Psychology, 61, 275–290.
Wilk, S.L. and Sackett, P.R. (1996) Longitudinal analysis of ability, job complexity fit and job change. Personnel Psychology, 49, 937–967. Williams, S.B. and Leavitt, H.J. (1947) Group opinion as a predictor of military leadership. Journal of Consulting Psychology, 11, 283–291. Williamson, L.G., Malos, S.B., Roehling, M.V. and Campion, M.A. (1997) Employment interview on trial: linking interview structure with litigation outcomes. Journal of Applied Psychology, 82, 900–912. Wilson, N.A.B. (1948) The work of the Civil Service Selection Board. Occupational Psychology, 22, 204–212. Woehr, D.J. and Arthur, W. (2003) The construct-related validity of assessment center ratings. Journal of Management, 29, 231–258. Wood, R. and Payne, T. (1998) Competency Based Recruitment and Selection. John Wiley & Sons: Chichester. Zazanis, M.M., Zaccaro, S.J. and Kilcullen, R.N. (2001) Identifying motivation and interpersonal performance using peer evaluations. Military Psychology, 13, 73–88. Zwerling, C., Ryan, J. and Orav, E.J. (1990) The efficacy of pre-employment drug screening for marijuana and cocaine in predicting employment outcome. Journal of the American Medical Association, 264, 2639–2643.
Index
0.30 barrier 73, 74 16PF 69, 70, 71, 73, 86, 89, 90, 91, 333, 338 360° feedback 126, 281, 287, 288, 289, 290, 291–3, 305, 308, 309, 322, 328 ABLE 78 Absence 4, 8, 76, 77, 78, 99, 113, 123, 133, 175, 196, 213, 240, 241, 332 Accent 185, 197, 206 Acceptability 8, 18, 21, 217, 241, 245, 250, 251, 252, 257, 288, 289 Accidents 108, 146, 240, 241, 332, 338 Accommodation 41, 42, 44 Accuracy in appraisal 300 Achievement tests 26, 48 Additive decision making 107 Admiralty Interview Board 170, 177 Adverse impact 1, 13, 16–18, 41, 47, 51, 60, 88, 99, 102, 105, 107, 116, 117, 151, 152, 159, 177, 178, 196, 200, 215, 216, 223, 228, 229, 232, 233, 238, 239, 243, 248–50, 253, 254, 304, 321, 323, 335 Advertisement 6, 7, 100 Affirmative Action 147 African Americans 61, 216, 233, 249 Afro-Caribbean British 89 Age 16, 18, 35, 36, 41, 61, 101, 103, 107, 108, 109, 115, 116, 118, 178, 185, 196, 206, 292, 304, 305, 306, 309, 323, 334 Agreeableness 71, 74, 75, 76, 77, 79, 80, 177, 199, 246, 247 AH4 38, 39, 40 Air force 112 Alcohol 5, 8, 77, 240 All-purpose management competences 150 Americans with Disabilities Act 241 Anonymity 291, 322 Anxiousness see Neuroticism Appearance 4, 5, 101, 185, 190, 197, 236, 240, 284
Application form 6, 8, 15, 22, 23, 34, 43, 64, 65, 95, 99, 100–2, 103, 107, 108, 109, 118, 131, 194, 207, 224, 225, 250 Aptitude 13, 23, 25, 26, 47, 49, 57, 58, 70, 152, 153, 258, 264 Aptitude batteries 49, 57, 79 Aptitude Index Battery 114, 117 Aptitude tests 48 Archive data 92 Armed Services Vocational Aptitude Battery 78 Army 54, 59, 60, 114, 116, 131, 135, 136, 159, 160 Asian British 89, 178 Assembly work 49, 79, 239 Assessment centres 7, 13, 18, 21, 22, 23, 52, 55, 64, 65, 87, 91, 95, 140, 159–83, 186, 189, 246, 247, 248, 249, 250, 251, 252, 253, 256, 272, 300, 303, 322, 323, 325, 326, 328, 330 Assessment policy 26 Assessors’ conference 164, 165 Assigned role 162 AT&T 160, 164, 167, 171, 178, 232, 237, 238 Attractiveness 197, 240, 306 Attributability 308 Background checks 123, 223, 224–5, 242, 246, 247 Base rate 330 Behaviour description interview 218 Behavioural consistency 106, 112, 230, 246, 247 Behavioural Observation Scale (BOS) 146, 293, 298, 299 Behavioural tests 95, 255 Behaviourally anchored rating scales (BARS) 106, 146, 194, 293, 296–8, 299 Belbin Team Role test 80 Benefits of testing 26 Bennett Mechanical Comprehension Test 49, 231
350 Bias 60, 105, 106, 136, 178, 185, 192, 206, 207, 236, 248, 281, 287, 292, 309, 332 in interviews 194–7 in performance appraisal 303–8 Big five 67, 70–1, 74, 76, 80, 91, 92, 303, 333 Biodata 7, 18, 21, 22, 23, 95, 99, 100, 108–20, 135, 189, 246, 247, 249, 250, 251, 252, 253, 262, 325, 331 Biofeedback 224, 242 Birkbeck Mechanical Comprehension Test 152 Board see Panel Bottom line 26, 220, 272, 294, 300, 316, 317 BPS test reviews 315 BPS website 313 Braille 41, 42 British Olympic Target Archery Squad 242 British Psychological Society (BPS) 25, 27, 43, 153, 279, 311, 312, 313, 314, 315, 316, 317, 323, 324 Brogden equation 261, 262 Bus driving 50, 69, 116 California Psychological Inventory 69, 71, 72, 88, 90, 91, 274 Call centre 36 Career advice 71, 103 Career counselling 5, 74, 277, 279, 280 Career development 271, 276 Career Profile System 114 Career success 67, 80, 81 Case studies 163 Certification 43, 311, 312, 313 Changes over time 36 Cheating 216 Checking for risk 123 Checklists 28, 30, 31, 32, 44, 123, 175, 207, 230, 293, 294 Child abuse 82 Child care 101, 118, 119, 200, 224, 228 Chinese British 89, 92, 178, 179 Civil Rights Acts 16, 62, 88 Civil service 123, 320 Civil Service Selection Board 126, 160, 166, 167, 177 Classic triad in selection 140 Clerical ability 4, 49 Clerical work 89, 113, 114, 130, 155, 230, 238 Cloning 73, 109, 155 Closed tests 60 Cluster analysis 147 Coaching 60, 86, 158, 192
INDEX Code of Practice (EOC) 41, 200 Codes of conduct 16, 41, 285, 320, 323 Command exercises 159, 162 Commendable behaviour 77 Commission for Racial Equality 17, 41, 105, 323 Commitment 26, 289, 337 Company testing policy 316 Competence analysis 89, 139–58, 259, 320, 326 Competence-based application 101 Competences 3, 4, 43, 100, 101, 121, 124, 125, 134, 139–58, 161, 162, 164, 172, 174, 175, 176, 180, 182, 186, 187, 197, 205, 208, 209, 211, 212, 215, 258, 259, 284, 296, 313, 314, 315, 317, 320, 325, 326 Competency-based interviewing 207 Competitiveness 211, 255, 320, 336 Complex decision making 107 Composites of selection tests 254 Comprehensive Structured Interview 208, 210 Computer interpretation 90, 319, 324, 331 Computer Programmer Aptitude Battery 49 Computerised testing 40, 50, 84 Confidentiality 34, 273, 327 Conscientiousness 10, 69, 71, 73, 74, 75, 76, 77, 78, 79, 80, 81, 90, 130, 149, 177, 191, 199, 246, 247, 249, 254, 302, 303, 320, 332, 338, 339 Consent 325 Consortium measures 114 Construct validity 12–13, 176, 214 Content validity 151, 257 Control keys 86 Controllable biodata 111 Convergent thinking 50, 152 Convergent validity 172 Core and fringe workers 337 Correction keys 87 Cost 8, 18, 19, 21, 41, 60, 98, 110, 115, 181, 211, 212, 225, 241, 245, 250, 251, 252, 253, 260, 261, 262, 297 Counselling in the workplace 271, 272, 276 Counselling skills 271, 272 Counterproductive behaviour 77 Creativity 47, 50, 64, 65, 99, 113, 156 Credit history 113, 116, 224, 325, 331, 333 Criminal record 17, 118, 224, 225, 242, 334 Criminal Records Bureau 118, 225 Criterion 11–12, 20, 62–3, 82, 171–2, 192, 193, 213, 239, 300 Critical incident technique 106, 139, 145–6, 147, 148, 208, 209, 296
INDEX Cross-validation 110 Culture free tests 50, 249 Customer service 75, 90, 156, 238, 246, 247, 252 Customers 20, 81, 104, 123, 150, 151, 172, 188, 219, 220, 224, 227, 287, 288, 290, 330, 331 Cut-offs 54, 88, 188 CV 34, 64, 65, 95, 99, 100, 101–2, 104, 106, 194, 207 Data protection 42, 321, 335 Data Protection Acts 42, 132, 242, 331, 335 Debriefing candidates 33 Deception 84, 228 Defence Mechanism Test 94, 246, 247 Defending selection tests 112, 151 Definition of workplace counselling 272 Dependability 76, 77, 131 Development centres 164, 175 Developmental appraisal 282, 283, 284, 288, 291, 303 Dexterity 4, 6, 49, 50, 150, 155, 236, 239, 240, 249, 252 Diaries 146 Dictionary of Occupational Titles 148 Differential Aptitude Tests (DAT) 49, 57, 58, 61, 64, 65, 152, 153, 259 Differential validity 243 Difficult situations during testing 32 Directed faking 84, 115, 119 Disability 16, 41–2, 61, 102, 104, 105, 106, 115, 116, 196, 200, 203, 239, 278, 304, 323, 333, 334 Disability Discrimination Act 16, 200 Disability Rights Commission 41 Disciplinary proceedings 77, 123, 278, 332 Discrimination 16, 17, 18, 42, 62, 88, 101, 105, 178, 200, 203, 239, 248, 278, 304, 306, 327, 333, 334, 335 Dishonesty 67, 71, 81, 82, 83 Divergent thinking 64, 65 Diversity 59, 60, 326 DNA 329, 332, 333 Domain validity 231 Downsizing 2, 59 Driving 3, 59, 81, 131, 224, 229, 232 Drug use 4, 21, 223, 224, 240, 241, 251, 334 Dyslexia 42, 49 Education 22, 35, 36, 49, 58, 103, 104, 106, 108, 114, 144, 153, 179, 200, 223, 224, 228–9, 246, 247, 249, 251, 253, 264, 277, 332, 334, 336 EEOC v. Sandia Corporation 306
351 Effectiveness 299 Effort 14, 56, 67, 77, 78, 79, 191, 253, 258, 302 Electro-dermal activity 242 Electronic application 103, 107 Electronic sifting 106 Eliteness 99, 112 Emotional intelligence 4, 13, 47, 50, 51–2, 96, 145, 246, 247, 274 Emotional stability see Neuroticism Emotionality see Neuroticism Empirical interview 208, 211, 212, 213, 217, 246, 247 Empirical keying 69, 91, 211 Empirical validation 256, 259 Employee involvement 290 Engineering work 106, 113, 114, 130, 237 Equal Employment Opportunities Commission (EEOC) 101, 151, 306 Equal opportunities 8, 16, 41, 101, 115, 116, 155, 198, 218, 322 Equal opportunities agencies 41 Equal Opportunities Commission (EOC) 41, 200, 335 Equal opportunities monitoring 334 Error of difference 71, 72 Error of measurement 52, 53, 62, 188 Estate agents 225 Ethics 2, 34, 84, 115, 192, 273, 319–28, 324 Ethnic differentiation 334, 335 Ethnicity 16–18, 35, 36, 41, 101, 102, 103, 104, 105, 106, 116, 125, 179, 185, 206, 237, 239, 241, 248, 250, 281, 299, 304, 305, 306, 323, 327, 330, 333, 334, 335 and ability tests 60–2 and assessment centres 177–8 and biodata 115–16 and interviews 196 and personality tests 88–9 and structured interview 216 Europe 1, 18, 21, 23, 41, 53, 60, 61, 80, 81, 88, 104, 122, 126, 186, 196, 250, 291, 336 Exercise x competence problem 172–6 Expectancy table 54, 55 Experience 22, 35, 94, 105, 106, 108, 114, 115, 133, 178, 189, 190, 196, 209, 210, 214, 216, 242, 250, 257, 272, 300, 311, 314, 315, 325, 334 Extraversion 10, 70, 71, 74, 75, 76, 77, 78, 79, 80, 86, 89, 92, 96, 97, 177, 199, 246, 247, 259, 332 Eye chart 152 Eysenck Personality Questionnaire 70, 92, 96, 97, 274
352 Face validity 177 Factor analysis 69–70, 91, 112, 147 Factory work 20, 113 Fair employment 4, 16, 59, 105, 185, 205, 215, 224, 235, 238, 239, 283, 303, 336 Fairness 1, 8, 16, 21, 26, 41, 115, 157, 177, 200, 205, 215, 232, 245, 248, 250, 253, 324, 327 Faking 14, 21, 67, 74, 89, 92, 93, 95, 99, 104, 119, 216, 217, 241, 330, 332 in personality inventories 84–8 in biodata 115 in interviews 191–3 False positives 83, 227, 330 Feedback 27, 30, 33, 34, 45, 126, 274, 275, 282, 287, 288, 301, 305, 314, 317, 321, 322, 331 Fire fighters 150, 258 First impression 195 Fit to the organisation 330 Fleishman Job Analysis Survey 148 Forced choice 68, 85, 86, 91, 130, 134, 293, 295, 296 Forced distribution 293, 294, 295 Four-fifths rule 17 Frame of Reference training 175 Fundamental attribution error 290 Gender 4, 16, 17, 18, 35, 36, 41, 67, 89, 101, 102, 103, 104, 105, 106, 114, 116, 118, 125, 168, 179, 185, 206, 223, 237, 250, 281, 290, 306, 323, 333, 334 Gender bias 195 in interviews 196 in performance appraisal 304–5 in structured interviews 215 Gender differences 238 in ability testing 60–1 in assessment centres 178 in personality testing 88 in physical tests 238–9, 243 Gender discrimination 320, 321 General Aptitude Test Battery 61, 150, 152, 153, 239, 240 General Information Pack (BPS) 313 General mental ability 25, 49, 58, 239, 300 General soldiering proficiency 78, 79 General vs. specific assessment 284 Global Personality Inventory 92, 330 Globalisation 330, 336 Goldberg 92, 95 Grade point average 22, 105, 190, 199 Grades 21, 36, 37, 40, 105, 135, 168, 228, 229, 251, 287, 305, 314
INDEX Graduate Managerial Assessment 37, 49, 64, 65, 182, 183 Graduate recruitment 23, 60, 178, 179, 233, 250, 335 Graphology 18, 23, 223, 225–7, 246, 247, 256 Green v. Missouri Pacific Railroad 17 Griggs v. Duke Power Co 17 Group discussions 95, 159, 160, 162, 163, 166, 170, 330 Group exercises 7, 34, 52, 160, 161, 162, 164, 165, 166, 171, 176, 180, 182, 255, 256, 325, 328, 331, 334 Group interviews 146 Guide to Pre-Employment Inquiries 101 Guidelines (EEOC) 17, 27, 254 Halo effect 189, 207, 235, 281, 302–3, 308, 309 Hard and soft HR 326, 327 Health 4, 84, 124, 128, 144, 200, 331 Height 4, 10, 17, 18, 61, 101, 110, 210, 239 Henley Managerial Competences 149 Heritability 332, 339 HGV driving 189 High Performance Organisations 156 Hightower et al. v. Disney 107 Hispanic Americans 196, 233 Hogan 92, 237 Honesty 23, 83, 95, 112, 128, 227, 246, 247, 249, 252, 253 Honesty testing 82 Hotel work 75, 207 HR managers 21, 217 Idiosyncrasy 105, 121, 127, 128, 130, 190, 198, 206, 287, 288, 289 Immunity laws 133 Impression management 14, 99, 191, 192, 206 Improving the assessment centre 170 Improving the interview 197–9 Improving the reference 130 In-basket see In-tray Income 48, 60, 81, 103, 179, 181, 324 Incremental validity 235, 253 and ability tests 55 and assessment centres 177 and biodata 113–14 and interviews 191 and personality tests 81 and structured interviews 214 and work sample test 231 defined 13–14
INDEX Individual Decision Making Form 265, 266, 267, 268, 269 Information overload 174 In-group 149, 281, 304, 307 In-tray exercise 161, 163, 165, 166, 173, 174, 176, 177, 182, 183, 223, 224, 233–5, 246, 247, 328, 334 Ingratiation 191, 300, 307 Innovation 101, 161, 165 Insurance sales 5, 50, 95, 108, 114, 117, 135, 336 Integrating information 254 Integrity 23, 71, 156, 314 Intellectual ability see Mental ability tests Intelligence see Mental ability tests Inter-rater reliability 9, 166, 187, 230 Interest inventories 27 Interests and values 3, 5, 6, 8, 15, 25, 74, 84, 96, 103, 119, 187, 190, 251, 264, 277, 306, 331, 333, 334 Internal reliability 70 International Test Commission 27, 323 Internet 51, 99, 103, 104, 109 Interpreting test scores 35 Interview 6, 7, 9, 11, 12, 13, 14, 18, 21, 22, 23, 48, 55, 84, 95, 100, 105, 118, 151, 160, 161, 163, 164, 165, 167, 168, 178, 185–204, 205–21, 232, 240, 246, 247, 248, 249, 250, 251, 252, 253, 275, 303, 311, 319, 321, 323, 325, 326, 328, 330, 331, 334, 335, 340 adverse impact 200–1 biases in 191–7 construct validity 199–200 reliability 187–8 uses of 8, 15 validity 188–91 ways of improving 197–9 see also Structured interviews Interview work sample 232 Interviewee inconsistency 193, 206 Interviewer inconsistency 192, 206, 215 Intrusion 67, 90, 111, 116, 203, 228, 241, 284, 319, 321, 325, 327, 335 IPIP 92 Ipsativity 171 IQ 9, 36, 37, 39, 52, 55, 59, 60, 339 Job analysis 41, 43, 89, 92, 104, 112, 139–58, 212, 231, 238, 239, 243, 250, 263, 313 Job Components Inventory 149 Job description 52, 77, 139, 140–3, 144, 151, 156, 186, 190, 307, 313, 337 Job fit 186
353 Job knowledge 47, 56, 102, 115, 210, 214, 246, 247, 248, 251, 252, 253, 334 Job knowledge tests 214 Job proficiency 59, 80, 82, 248, 253 Job relatedness 16–18, 19, 89, 101, 197, 200, 201, 208, 239, 252, 285, 321, 330, 335 Job-relevant 196, 333 Key competences 101, 143, 325 Key words 106, 107, 130 Keying 111, 112 Knowledge 3, 5, 6, 8, 15, 26, 49, 51, 56, 78, 145, 154, 187, 191, 214, 233, 249, 250, 251, 252, 257, 258, 272, 332, 334, 335 KSAs 104 Laboratory tests 92, 94 Lack of insight 89 Lateness see Punctuality Leaderless group discussion 159, 160, 162, 163, 164 Leadership 4, 56, 67, 69, 70, 76, 77, 78, 79, 102, 104, 135, 149, 156, 253, 284, 302, 304 Legal issues 2, 3, 5, 16–18, 21, 41, 42, 88, 107, 121, 123, 129, 139, 147, 159, 178, 196, 230, 248–50, 251, 320, 327, 329, 333, 336 and the polygraph 227 in biodata 115–17 in drug-use testing 240–1 in educational qualifications 229 in interviewing 200–1, 206, 215–16 in job applications 103–4 in performance appraisal 303–6 in physical tests 238–9 in references 131–3 in validation of tests 257 Leniency 121, 127, 128–9, 130, 131, 281, 288, 290, 293, 295, 299, 300, 301–2, 303, 306, 332 Letter of invitation 30 Level A Certificate 153, 314 Libel 129, 132, 133 Lie-detection scale 115 Liking 70, 185, 197, 281, 304, 306 Liking bias 197 Linearity 47, 54, 59, 60, 238 Listening skills 274 Long-term potential 283, 334 Low ability, people of 54, 334, 338 Lying 14, 160, 192, 193, 206, 227, 228
354 Malice 132 Management by objectives (MBO) 285, 286 Management Charter Initiative 144 Management Progress Study 160, 165, 167, 171 Managerial competences 144–5, 149 Managers, selecting 53, 73, 76, 80, 81, 85, 88 Managers and upward appraisal 289 Managers as assessors 163, 165, 168, 175, 176 Managers as counsellors 271, 272 Managers as performance appraisers 281, 283, 285, 286, 287–8, 291–3, 306–7 Managers’ self-assessments 236 Managers’ sifting practices 105, 117 Manufacturing 54, 178, 207, 262 Matrix 159, 161, 164, 182 Maximum performance 25, 315 Mean 14, 17, 35, 37, 38, 39, 40, 48, 56, 79, 132, 174, 202, 219, 228, 257, 300, 323 Mechanical comprehension 4, 49, 57, 236, 249 Mechanics 25, 36, 58, 230, 231, 233 Mental ability tests 4, 6, 7, 8, 21, 21–3, 25, 100, 105, 135, 159, 160, 161, 164, 165, 252–3, 323, 324, 325, 330 acceptability of 18, 251 administration of 26–33, 311–14, 317 adverse impact 17, 178, 248–51 as a universal in selection 155 and assessment centres 174, 176–7, 178 and competence analysis 152–3, 155 and content validation tests 258 and dexterity tests 239 error of measurement 53–4 incremental validity 78–9, 81, 113 interpreting scores 35–40 and interviews 186–7, 191–200, 214, 216 and in-tray exercises 235 low scorers 338–9 practicality of 252–3 problems in testing 60–1, 115, 176, 177, 191, 231 single or multiple 57–8 threshold and linear models 59–60 types of test 49–51, 332–3 uses of 13, 15 validity 53–6, 246–8, 251 and work sample tests 230, 232 Mental age 36 Method variance 90, 91, 92 Military 50, 56, 58, 76, 78, 79, 84, 94, 101, 113, 114, 127, 135, 136, 145, 162, 201, 232
INDEX Minimal references 132, 329 Mis-specification 174 Misplaced precision 39, 52, 53 MMPI 86, 90 MORE model 271, 273, 277 Motivation 27, 58, 99, 112, 127, 136, 163, 192, 193, 228, 253, 277, 285, 302, 332 Motivation Analysis Test 86 Moving in 273, 277 Multi-source appraisal see 360° feedback Multi-trait multi-method 91, 160 Multimodal interview 208, 210, 211 Multiple aptitudes 58 Myers Briggs Type Indicator 73, 274 Navy 39, 112, 135, 238, 255, 256, 257, 302 Negligent hiring 123 Negligent referral 133 NEO 71, 90, 92, 235, 259 Netherlands 23, 61, 71, 89, 186 Neuroticism 71, 74, 75, 76, 77, 79, 80, 81, 89, 92, 160, 177, 196, 199, 246, 247, 339 Non-delinquency 77 Non-substance abuse 77 Norm group 35, 279, 314 Normative comparisons 171 Normative score systems 36 Norms 26, 27, 31, 35–40, 41, 42, 44, 51, 61, 62, 69, 73, 88, 90, 91, 237, 314, 338 Numerical 38, 39, 40, 44, 49, 57, 61, 98, 165, 179, 182, 194, 218, 258, 259, 296 NVQs 144, 153 O*NET 148 Objective criterion 172, 300 Objectives 144, 281, 283, 285, 286, 295 Observation 146, 175 Occupational Personality Questionnaires 69, 70, 85, 91, 274 Office for Strategic Services (OSS) 160, 162, 326 Online application 51, 99, 100, 102, 103, 118 Opening out 273, 274, 277 Openness 71, 74, 75, 76, 77, 78, 79, 80, 81, 92, 145, 177, 199, 246, 247, 284 Organisational citizenship 67, 76, 77, 78, 114, 133, 190, 200, 253, 307, 337 Organisational fit 3, 5, 6, 8, 15, 187, 190, 251 Output 19, 20, 150, 172, 213, 228, 236, 238, 260, 281, 283, 285, 300, 324 Overall assessment rating 164 Own race bias 196, 305
INDEX Paedophiles 118, 119, 120, 227 Paired comparisons 135, 270, 293 Panel Group Data Form 263, 265, 266, 267, 268, 269, 270 Panel interview 45, 103, 126, 134, 164, 185, 188, 189, 190, 198, 205, 209, 211, 212, 217, 218, 258, 260, 270 Participation 116, 146 Past behaviour questions 111, 118, 209, 211, 230 Patterned Behaviour Description interview 207, 208, 209–10, 212, 214, 216, 220 Pay grading 152 Pay reviews 152 Peer assessment 18, 95, 121, 134–6, 163–4, 168, 172, 236, 246, 247, 248, 251, 252, 253, 288–9, 291, 292, 305, 309, 322 Peer nominations 135 Percentiles 36, 37, 38, 40, 61, 314 Perfect profile 73 Performance appraisal 2, 12, 14, 55, 63, 126, 129, 139, 146, 152, 281–309, 322, 332 Performance appraisal formats 293 Person specification 41, 64, 65, 95, 98, 139, 140–3, 144, 149, 186, 240, 313 Personal constructs 139, 146, 158 Personality inventories 7, 67–92, 102, 115, 153, 160, 211, 218, 242, 246, 247, 250, 253, 255, 256, 273, 274, 275, 278, 303, 328, 334 big five 70–1 career success 80–1 faking 84–8 incremental validity 81 legal problems 88–9 reliability 70, 71 teamwork 79–80 types of 68–70 uses in selection 72–6, 81–2 uses of 8, 15 Personality-related Position Requirement Form (PPRF) 148, 149 Personality screening 81–3 Personality tests 8, 13, 18, 21, 22, 23, 25, 51, 52, 55, 67, 98, 100, 111, 115, 118, 176, 214, 248, 250, 251, 252, 253, 254, 255, 311, 314, 315, 317, 323, 324, 325, 328, 335, 339 Personnel records 100, 134, 332 Physical Abilities Analysis 148, 237, 239 Physical ability 3, 8, 15, 223, 237, 238, 239, 251 Physical attractiveness 196, 240, 281, 304, 306 Physical characteristics 3, 4, 6, 187
355 Physical tests 8, 15, 224, 236–40, 243, 246, 247, 249 Pilots 53, 58, 90, 94, 113, 145, 209, 236, 241 Police 17, 18, 53, 75, 81, 136, 150, 160, 164, 168, 177, 224, 229, 238, 239, 241, 242, 261, 286, 335 Policy capturing 105, 107 Policy document 26, 35 Politics 90, 116, 198 Polygraph 82, 227, 242, 334 Pooled gender norms 88 Poor publicity 18 Popularity 136, 177, 189, 289 Position Analysis Questionnaire 139, 149, 150, 152, 153, 238, 239 Positive vetting 224 Postal workers 240, 241 Power balance 337 Practicality 232, 245, 246, 250, 251, 252 Practice questions 32 Present behaviour questions 111 Price Waterhouse Cranfield survey 23, 122, 185 Privacy 90, 116, 248, 250, 335 Private languages 129 Production 95, 113, 213 Productivity 20, 80, 155, 241, 271, 272, 273, 326, 337, 338 Profitability 245, 262, 326, 337 Projective tests 92–4, 160, 246, 247 Promotion 2, 99, 129, 135, 136, 139, 160, 163, 164, 168, 169, 171, 216, 230, 248, 258, 271, 276, 277, 281, 282, 283, 285, 288, 290, 291, 293, 294, 299, 301, 302, 306, 317, 327, 337 Protected minorities 16–18, 101, 104, 105, 115, 152, 281, 334 Psychological assessment 185, 188, 242, 272, 273, 276, 277 Psychological contract 337 Psychological Testing Centre 27, 315 Psychometric testing 23, 25, 26, 27, 30, 35, 42, 43, 140, 157, 218, 311, 312 Psychomotor tests 246, 247, 251, 252 Psychopaths 94, 95, 227 Psychoticism 70, 92 Public relations 5, 18, 34, 326 Punctuality 4, 123, 133, 134, 332 Pupil dilation 227, 228 Qualifications 43, 64, 65, 100, 122, 153, 189, 193, 229, 250, 316, 321, 330 Quality 13, 15, 22, 55, 106, 121, 122, 153, 180, 217, 237, 287
356 Quality of information 15 Quota hiring 16, 61, 62 Race see Ethnicity Race Relations Act 16 Range of value 260, 261, 262 Ranking 135, 293, 294 Rapport 30, 84, 201, 210, 273 Rating error 192, 194 Rating scales 121, 123, 135, 194, 199, 207, 217, 293, 294, 296 Rational estimates 260 Rational validity 257 Raven Progressive Matrices 36, 42, 152, 249 Raw score 35, 37, 38, 39, 40, 61, 62, 69 Re-framing 273, 275, 277, 278 Re-test reliability 9, 52, 91, 166 Realistic job preview 210, 230 Recorded 15, 123, 164, 211, 293, 306 Recording results 34 Recruitment 23, 51, 103, 104, 118, 156, 158, 186, 218, 261, 279, 327, 330 Recruitment agencies 23 References 6, 7, 9, 12, 15, 17, 21, 22, 23, 25, 34, 35, 64, 65, 95, 121–34, 135, 137, 142, 175, 242, 246, 247, 251, 252, 253, 303, 325, 329, 332, 334, 335 improving 130–1 legal aspects 131–3 reliability 126, 128 types of 123–5, 129 uses of 8, 15, 122–3 validity 126–9 Rehabilitation of Offenders Act 118 Relative Percentile Method 131, 134 Relax Plus 242 Reliability 1, 8, 9, 14, 26, 28, 41, 51, 52–3, 62, 64, 91, 121, 205, 230, 235, 247, 293, 320 assessment centres 166, 170 competence analysis 154 error of difference 71–2 inter-rater reliability 9 internal reliability 70 interviews 187–8 peer assessment re-test reliability 9 references 126, 128 sifting 102, 106 structured interview 206–7 work performance measures 63, 78, 192, 193, 285 Religion 16, 18, 41, 90, 110, 116, 334
INDEX Repertory grid technique 139, 146–7, 148, 156–8 Reputation 22, 123, 132, 137, 224, 287, 307, 308, 320 Required work behaviour 3, 4, 6, 8, 15, 251 Resentment 294, 301, 302, 304 Resilience 102, 124, 135, 154, 156, 188, 255, 256, 325 Resistance 217 Restricted range 62, 63, 192, 193, 248 Resumix 104, 106, 107 Retail 172, 178 Retirement 167, 277 Revealed difference technique 162, 180 Reverse discrimination 62 Reward 282, 283 Right of access 132, 225, 335 Role play 52, 160, 161, 163, 173, 255, 256, 326, 331 Rorschach 9, 93, 94 Salary 19, 80, 132, 150, 171, 200, 260, 261, 294, 306 Sales 20, 49, 50, 135, 146, 149, 195, 207, 262, 331 Sales as performance measure 172, 213, 225, 285, 286, 300 Sales staff 53, 75, 108–9, 113, 114, 117, 127, 189, 213 Sample size 179 Sampling error 62, 63 Satisfactory employees 300 Score banding 36, 40, 47, 62 Scoring tests 33 Screening 23, 43, 83, 102, 104, 105, 109, 112, 117, 118, 216, 224, 228, 241, 333, 334 Security risk 330 Selecting interviewers 198 Selection and Development Review 314, 315 Selection Panel Assessment Form 263, 264 Self-appraisal 290–1 Self-assessment 223, 224, 235–6, 246, 247 Selling insurance 108 Separate gender norms 61 Sex 115, 195, 327 Sex and Power survey 335 Sex Discrimination Act 16 Sexual abuse 133 Sexual orientation 18, 334 ‘Shelf life’ of test data 324 Sickness 95, 133, 134, 332 Sifting 6, 17, 99–120, 178, 246, 247, 251, 253, 321, 330, 333
INDEX Sign or a sample 225 Similarity bias 306 Simulations 18, 23, 162, 331 Situational Interview 207, 208–9, 210, 212, 214, 216, 219 Situational judgement tests 153 Skilled trades 74, 230 Skills of counselling 272, 273 Social class 58, 114 Social desirability 86, 92, 97, 98 Social exclusion 58, 104, 334 Social intelligence 4, 52 Social reality 308 Social skill 3, 6, 7, 8, 15, 55, 186, 187, 189, 199, 200, 214, 251 Soroka v. Dayton-Hudson 90, 116, 250 Spatial ability 57, 58, 151, 152 Special needs 30, 42, 321, 323 Spring v. Guardian Assurance 132 Standard deviation 37, 38, 39, 249 Standard scores 38 Standardisation 26, 27, 91 Stanine 40 Status change 168, 171 Sten score 36, 37, 38, 40, 69, 314 Stepping errors 34 Stereotypes 197, 304 Strength 4, 6, 17, 89, 122, 236–7, 238, 243, 250 Strengths and weaknesses 160, 282 Stress 5, 27, 99, 101, 151, 156, 169, 170, 227, 241, 242, 260, 302, 303, 319, 321, 325, 326, 327, 330 Structured interviews 7, 21, 55, 146, 151, 153, 189, 190, 196, 200, 201, 205–21, 246, 247, 249, 250, 252, 253, 262 Structured questionnaires 146 Structured references 123, 129, 134 Subordinates 126, 149, 172, 281, 283, 285, 287, 288, 289, 291, 292, 301, 305, 309, 322, 339 Subtle questions 84 Succession planning 151 Summarising review 12, 53, 134, 189, 232, 236 Supervisor ratings 20, 56, 113, 126, 135, 136, 168, 172, 193, 212, 213, 225, 231, 236, 292 Supervisory Profile Record 114 Synthetic validity 258 T & E ratings 105–6, 246, 247, 251 T scores 36, 37, 38, 39, 69, 72, 314 Tailored testing 50 Teaching 123, 130, 132, 189
357 Teams 4, 47, 52, 54, 67, 79, 80, 98, 155, 156, 163, 177, 286, 294, 307, 330 Technical developments 329, 330 Technical proficiency 56, 78, 79 Tele-working 338 Tenure 99, 113, 213 Terminally sick organisation 339 Test administration 25, 27, 28, 31, 43, 45, 311, 313 Test administration card 31 Test Administration Certificate 27, 42, 43, 45, 313 Test anxiety 332 Test interpretation 314 Test log 29, 32, 34 Test security 34, 140, 193 Test Taker’s Guide 313 Test User’s Guide 27 Tests of common sense 246, 247 Theft 4, 77, 82, 109, 112 Thematic Apperception Test (TAT) 93, 94 Threshold 59, 238, 339 Thurstone Perceptual Speed Test 152 Time limits 32, 42, 49, 51, 52, 183 Top-down quota 61, 62 TQM 156 Trade unions 224, 288, 337 Trainability tests 232, 240, 246, 247 Training 2, 3, 14, 19, 25–45, 58, 76, 78, 99, 105, 106, 113, 116, 126, 130, 135, 136, 139, 141, 142, 144, 145, 148, 149, 151, 153, 154, 160, 163, 164, 168, 169, 175, 185, 188, 190, 200, 231, 232, 236, 238, 242, 260, 264, 277, 281, 282, 284, 286, 297, 324, 325, 332, 337, 338, 339 Training for testing and assessment 311–17 Training grades 126, 136, 164 Training interviewers 188, 198 Training performance 78, 232 Training success 99, 113, 232 Transformational leadership 76, 77 Transparency 174, 176 Turnover 240, 241 Typical performance 25, 214, 315 Typing 6, 232, 235, 259 Ultra Mind 242 Underlying attributes 6 Unemployable minorities 338, 339 Unemployment 18, 104, 262, 338 Unethical practice see Ethics Unfairness 27, 177, 201, 215
358 Universals of selection 155 Upward appraisal 289 US Employment Service 61 Utility 260 Validation research 11–12, 62–4, 110–11, 256–9 Validity 187, 225, 228, 232, 236, 239, 242, 243, 250, 251, 252, 262, 320 assessment centres 167–72 biodata 112–14, 117 competence analysis 153–4 computer sifting 102, 106–7 construct validity 12–13 defined 11 emotional intelligence 52 fairness and validity trade-off 253–4 incremental validity 13–14 Internet tests 51 interviews 188–94, 198 job-relatedness 16–18 ‘league tables’ 245–8, 251 mental ability tests 53–6 peer assessment 134–6 personality tests 72–82, 87 physical tests 238 reasons for poor validity 14–15 references 126–9 self-assessment 236 structured interviews 212–14 work sample tests 230–2, 235 see also Content validity; Construct validity; Domain validity; Incremental validity; Rational validity; Synthetic validity
INDEX Verbal reasoning 6, 44, 49, 57, 61, 64, 65, 106, 218, 235, 300 Verified assessors 27, 313 Versatility 67, 112, 147, 156, 189, 245, 246, 250, 251, 252 Video recording 50, 146, 170, 176 Violent behaviour 4, 81, 94, 123, 133, 239, 330, 333, 335 Vocational guidance 151 Wade v. Mississippi Co-operative Extension Service 284 Wages 168, 169, 171, 336 War Office Selection Board 159, 160, 165 Wechsler Memory Scale 152 Weight 22, 57, 82, 101, 106, 110, 185, 197, 236, 250, 255, 331 Weighted application blanks 99, 108 Weighting 57, 58, 254, 255, 256, 263, 264 Work performance 1, 10, 11, 12, 14, 20, 47, 52, 53, 54, 55, 56, 57, 58, 59, 62, 63, 67, 73, 74, 75, 79, 80, 81, 87, 90, 99, 105, 121, 123, 124, 126, 133, 135, 136, 153, 167, 185, 187, 191, 192, 193, 213, 214, 223, 225, 228, 229, 236, 240, 241, 256, 258, 272, 281, 292, 296, 299, 300, 302, 303, 330 Work sample tests 6, 8, 15, 18, 21, 23, 55, 176, 223, 224, 229–35, 237, 240, 246, 247, 248, 249, 250, 251, 252, 253 Work skills 3, 6, 8, 15, 251 Workplace counselling 271, 272 Workplace culture 273 Z scores 36, 38, 39, 40