Although it is fashionable to denigrate conventional intelligence, or at least to doubt that it is capable of being measured accurately, there is a concomitant willingness to believe that multiple intelligence or emotional intelligence are the very real answers to all questions about job performance.
Multiple intelligence is a marketing triumph: it is the sketch of an idea, vague, hard to operationalize, difficult to test, and largely without any empirical support, and thus very popular, particularly among educationalists. It always gets a mention in psychology textbooks, on the basis that any comforting notion deserves favour. Lack real intelligence? Compensate with multiple intelligence! Since we do not have reliable measurements, there is little more we can say about it.
Emotional intelligence is another marketing triumph: it conflates personality with the perception of emotion, the latter being difficult but possible to test, and has some empirical support. The proponents have worked hard to create a psychometric test and to collect data. They point out that they are not just testing personality again, but working on the specific issue of assessing whether there is a specific mental skill involved in understanding the emotions of others. There are such skills.
The big question is: what do these fashionable assessments add to the tried, tested, and widely validated measures of general mental ability? To answer this question we need to consult the Oracle: Jack Hunter, Frank Schmidt and co-workers.
The Validity and Utility of Selection Methods in Personnel Psychology: Practical and Theoretical Implications of 95 Years of Research Findings. (see below)
We also need to look at some methodological issues, and one source is: Oh, I.-S., Postlethwaite, B. E., & Schmidt, F. L. (2013). Re-thinking the validity of unstructured interviews: Implications of recent developments in meta-analysis. In D. J. Svyantek & K. Mahoney (Eds.), Received wisdom, kernels of truth, and boundary conditions in organizational studies (pp. 297–329). Charlotte, NC: Information Age Publishing.
This group have established a reputation for careful and very detailed work, such that their procedures have set the standards of best practice. In that spirit, here are some problems with using job selection tests to predict ability to do the job. Those who already hold the job in question are brighter than the applicants, and the incumbents have not only a higher mean but also a smaller standard deviation. They are The Right Stuff, selected within a narrow band of capability (small standard deviation), whereas applicants have a broader range of capability. The predictive power of the test is weakened by this biasing effect of range restriction, and the observed correlation needs to be corrected by disattenuation; the correction must cover both direct and indirect restriction.
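The attenuation, and the classical repair for the simple direct case, can be sketched in a short Monte Carlo simulation. All the numbers here are hypothetical, chosen purely for illustration: a true applicant-pool validity of 0.68 and a top-30% hiring cut on the test.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
rho = 0.68  # hypothetical true validity in the applicant pool

# Bivariate-normal applicant pool: test score x, later job performance y
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Direct range restriction: hire only the top 30% on the test itself
hired = x >= np.quantile(x, 0.70)

r_full = np.corrcoef(x, y)[0, 1]
r_hired = np.corrcoef(x[hired], y[hired])[0, 1]

# Thorndike Case II disattenuation for direct restriction, using
# U = SD(all applicants) / SD(hired) on the test
U = x.std() / x[hired].std()
r_corrected = r_hired * U / np.sqrt(1 + r_hired**2 * (U**2 - 1))

print(f"applicant pool r   {r_full:.2f}")      # close to 0.68
print(f"incumbents-only r  {r_hired:.2f}")     # badly attenuated, around 0.43
print(f"Case II corrected  {r_corrected:.2f}") # back near 0.68
```

The incumbents-only correlation is dramatically smaller than the true validity even though nothing about the test has changed; only the range of the sample has.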
For example, those who are offered the job probably tick all the boxes: bright, personable, and with experience. Using only a cutoff score on a mental ability test gets one of the criteria, but misses the other two. The correction for direct effects misses some indirect effects. Furthermore, the brightest and best candidates may get several offers. They may turn down the job in question for a better one, which complicates a simple analysis because a very bright candidate will be registered in the records as “did not get appointed to job” simply because they took up a better option elsewhere.
A technically advanced solution for indirect range restriction (IRR), tested against Monte Carlo simulations, provides the best estimate (Hunter et al., 2006) and shows that general mental ability is a better predictor than formerly stated by about a third, in that correlations rise from 0.51 to 0.68.
Hunter, J. E., Schmidt, F. L., & Le, H. (2006). Implications of direct and indirect range restriction for meta-analysis methods and findings. Journal of Applied Psychology, 91, 594–612.
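Why the direct-restriction correction is not enough can also be sketched in simulation. Suppose, hypothetically, that employers select on a composite of the test and some other job-relevant trait (call it conscientiousness), and that performance depends on both. Restriction on the test is then indirect, and applying the direct-restriction (Case II) formula anyway under-corrects, which is the kind of bias the Hunter et al. procedure is built to remove. The weights and validities below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical model: performance y depends on the test (x) and on another
# trait (c) that the employer also weighs up when hiring
x = rng.standard_normal(n)
c = rng.standard_normal(n)
y = 0.68 * x + 0.30 * c + np.sqrt(1 - 0.68**2 - 0.30**2) * rng.standard_normal(n)

# Indirect restriction: hiring is on a composite of both, not on x alone
s = (x + c) / np.sqrt(2)
hired = s >= np.quantile(s, 0.70)

r_true = np.corrcoef(x, y)[0, 1]                 # around 0.68
r_hired = np.corrcoef(x[hired], y[hired])[0, 1]  # around 0.50

# Applying the direct-restriction (Case II) correction regardless:
U = x.std() / x[hired].std()
r_caseII = r_hired * U / np.sqrt(1 + r_hired**2 * (U**2 - 1))
print(f"true {r_true:.2f}, restricted {r_hired:.2f}, Case II {r_caseII:.2f}")
# Case II lands around 0.59 -- still well short of the true 0.68
```

In this toy model the partial correction behaves much like the 0.51 figure in the literature: real, but an under-estimate until the indirect route of restriction is handled properly.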
Two little asides here: having been properly selected for general mental ability, those who hold a particular job will not be very easily distinguishable from one another on the basis of a general intelligence test. Intelligence will seem to have “disappeared” because it was the basis of selection, and all that will be left visible will be differences in personality and experience. Second observation: job applicants self-select to some degree, so the mean of applicants will probably be higher, and the standard deviation smaller, than those of the general population. This is another source of restriction of range, and a possible cause of under-estimation of the predictive power of intelligence, which will usually have been missed.
Anyway, let us look at a talk given by Frank Schmidt on selection methods for job performance.
Start at the top to find out what works, or at the bottom to see what doesn’t. General Mental Ability is king of the castle, and all else lies in its shadow. Tests of integrity make useful contributions, as do employment interviews, measures of conscientiousness, checks of references, background data and job experience. In the 1% gain category are Years of Education, Interests, Emotional Intelligence, Grade Point Average and Organisation Fit.
If you want to make valid choices without wasting too much time and effort, avoid overlapping predictors and go for those that make useful independent contributions.
Remember, a weak predictor can still make a contribution if it provides something not covered by the generally more powerful ones. However, emotional intelligence is not on the list.
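The arithmetic behind this point can be sketched with the standard two-predictor multiple-correlation formula. The figures below are illustrative, of the order commonly cited in this literature, not quotations from the talk.

```python
from math import sqrt

def multiple_r(r1, r2, r12):
    """Multiple correlation of two predictors with a criterion, given
    their separate validities r1, r2 and their intercorrelation r12."""
    return sqrt((r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2))

r_gma = 0.51  # illustrative validity of general mental ability alone

# A modest predictor with little overlap with GMA (e.g. an integrity test)
r_combined_low_overlap = multiple_r(r_gma, 0.41, 0.0)   # about 0.65

# A predictor of similar validity that overlaps heavily with GMA
r_combined_high_overlap = multiple_r(r_gma, 0.45, 0.8)  # barely above 0.51

print(f"low overlap:  {r_combined_low_overlap:.2f}")
print(f"high overlap: {r_combined_high_overlap:.2f}")
```

The weaker predictor earns its keep only when its intercorrelation with the stronger one is low; a redundant predictor of similar strength adds almost nothing.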
Jobs are given to those who can show ability, honesty, conscientiousness, proper references, education and job experience, the respect of their peers, and emotional stability. According to your tastes this is very dull, or highly reassuring.
Now let me tell you about the multiplicity of my gastro-intestinal intelligence.