
Please see the chapter "Links" at the end of this page for other related articles.


The Definition of the IQ

Copyright © 2006-2023 Hans-Georg Michna.


Intelligence

What is it that the IQ test measures? Ostensibly it measures intelligence, for which we have several definitions. Even though intelligence tests differ in scope, the IQ is still the one personality trait, among the more than 30 known to psychologists, that can be measured most precisely, reliably, and repeatably.

While the word intelligence is used for all kinds of abilities in popular language, the field of psychology uses a much narrower definition. Perhaps the most commonly used definition in psychology is this:

Intelligence is the ability to act purposefully in unknown situations.

A very much simplified definition is:

Intelligence is the ability to think logically.

The word intelligence is often used imprecisely in casual speech, among non-psychologists, and in popular literature. Attempts to water down the concept of intelligence abound, typical examples being "The Mismeasure of Man" by Stephen Jay Gould and the theory of multiple intelligences by Howard Gardner, partly because people fear being measured and classified, so any watering-down is always welcome.

Intelligence measures are good predictors of several social outcomes and are therefore valuable, for example, for psychologists, parents, job recruiters, and others. For more details on the meaning of measured intelligence, please see the link to "Intelligence: Knowns and Unknowns" at the end of this page.

Intelligence Tests

Typical intelligence tests, or IQ tests, consist of a relatively large number of small problems of varying complexity, often purely symbolic ones, each of which requires mentally combining several aspects of the problem.

Broader IQ tests can contain mathematical, geometric, verbal, and other problems. The results of different tests differ for the same person, but not widely so, because people who do well in one area tend to do similarly well in other areas.

Intelligence Quotient

In the early development of intelligence testing the IQ was defined as the quotient:

IQ = (intelligence age / chronological age) × 100

For example, a child who is, in fact, 10 years old, but whose IQ test result matches that of an average 13-year-old, has an IQ of 130 (13 / 10 × 100).
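As a minimal illustration, here is the same calculation in a few lines of Python (the function name ratio_iq is only for this example):

    def ratio_iq(intelligence_age, chronological_age):
        # Historic ratio IQ: intelligence age relative to chronological age, times 100.
        return intelligence_age / chronological_age * 100

    print(ratio_iq(13, 10))  # prints 130.0, the example above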

This historic definition worked only for children and yielded a roughly Gaussian distribution with an average of 100 (by definition) and a standard deviation of 15, though sometimes other standard deviations are used, for example 16.

Note that this old definition of the IQ says nothing directly about actual brain performance; it only says something about the speed at which intelligence develops in children. This characteristic has been retained in the modern definition of the IQ.

Later, to accommodate adults as well, this definition was given up and replaced by a straight projection of the measured rank onto the Gaussian bell curve, with a center value (average IQ) of 100 and usually a standard deviation of 15, though some tests use other standard deviations, such as 16. The rank is expressed as a quantile, also called a percentile when expressed in percent, because the quantile measures the same thing but is independent of the sample size.

This projection ignores the fact that the age-ratio IQ was not perfectly Gaussian, particularly in the far wings of the bell curve. The modern percentile-based IQ is thus perfectly Gauss-distributed by definition.

The 1916, 1937, 1960, and 1972 editions of the Stanford-Binet still used the concept of mental age (intelligence age), while the fifth edition (fourth revision) began to employ percentile rankings, which are converted to an equivalent IQ score, called a "deviation IQ", by projecting the percentiles onto the Gaussian distribution.

Some values on the basis of average = 100 and standard deviation = 15:

    Percentile      IQ
      50          100.00
      75          110.12
      90          119.22
      95          124.67
      98          130.81
      99          134.90
      99.9        146.35

Of course, giving the IQ with two decimals makes little sense in practice, because IQ tests are not that accurate. The figures are given here only to illustrate the precise mathematical translation from percentile to IQ.
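For the curious, the translation is a one-liner in most statistics libraries. The following Python sketch, assuming the conventional average of 100 and standard deviation of 15, reproduces the values above using only the standard library; the function names are only for this example:

    from statistics import NormalDist

    IQ_SCALE = NormalDist(mu=100, sigma=15)  # average 100, standard deviation 15

    def percentile_to_iq(percentile):
        # Project a percentile (0..100) onto the Gaussian IQ scale.
        return IQ_SCALE.inv_cdf(percentile / 100)

    def iq_to_percentile(iq):
        # Inverse translation: the share of the population below this IQ.
        return IQ_SCALE.cdf(iq) * 100

    for p in (50, 75, 90, 95, 98, 99, 99.9):
        print(f"{p:5} %  ->  IQ {percentile_to_iq(p):7.2f}")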

The IQ is therefore a peculiar measure that is frequently misunderstood. IQ tests primarily measure the percentile; the IQ is only an arbitrarily derived value, whose definition is loosely based on its historic roots.

The IQ is age-corrected; i.e., strictly speaking, the IQ is determined separately for each age group. This also means that, barring disease or trauma, the IQ remains roughly constant throughout life.

Intelligence in the sense of brain performance rises during childhood, until it reaches a peak around age 20 to 25, then slowly declines; but since the IQ is derived from the rank within the age group, which remains more or less constant, the IQ also does not change much.
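To make the age correction concrete, here is a much simplified sketch of the norming idea in Python. It assumes a representative list of raw scores from people of the same age; real tests use carefully constructed standardization samples and smoothing, so this only illustrates the principle:

    from statistics import NormalDist

    IQ_SCALE = NormalDist(mu=100, sigma=15)

    def iq_from_raw_score(raw_score, same_age_scores):
        # Rank the raw score within its age group ...
        below = sum(s < raw_score for s in same_age_scores)
        ties = sum(s == raw_score for s in same_age_scores)
        # ... turn the rank into a quantile strictly between 0 and 1 ...
        quantile = (below + (ties + 1) / 2) / (len(same_age_scores) + 1)
        # ... and project that quantile onto the Gaussian IQ scale.
        return IQ_SCALE.inv_cdf(quantile)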

Some Misconceptions About the IQ

One common misconception is that the IQ directly measures brain performance. In truth the IQ is a purely statistical measure. It has no direct relation to brain performance, is not proportional to it, and doesn't even have any linear or otherwise straightforward relation to it. The only thing you can say is that somebody with a higher IQ will show higher scores on most other brain performance tests as well, but the IQ doesn't say how much higher.

In fact, when you compare the IQ with the raw test scores of most IQ tests, you will find that the IQ tends to "underread" brain performance, if you take, for example, the number of correctly answered test questions as a direct measure of brain performance.

Another misconception is that the IQ is proportional to intelligence, for example that a child with an IQ of 130 is mentally 30% ahead of its age group. This statement is true for the now obsolete mental-age-based IQ definition for children. It is not true for the modern percentile-based IQ for adults and children.

Surprisingly many people are confused by this, even psychologists (as you can witness by reading their papers), but the mathematics is perfectly clear. The modern percentile-based IQ distribution is perfectly Gaussian by definition, as explained above. Only the exact position of any individual on that distribution remains somewhat uncertain, because the accuracy of IQ tests has limits, but that is a different question.

Whenever there is any difficulty agreeing on the mathematical definition of the IQ, the first thing to do is to stop mentioning the IQ altogether and instead use only the percentile, because the percentile cannot be argued with, and the IQ is merely a mathematical translation of it. So before you express any doubts, express them without mentioning the IQ, using only the percentile, then think again.

Articles abound in which somebody tests groups of people (like gifted children) and finds more or fewer individuals than the Gauss distribution would prescribe. Once the above is understood, it is clear that this is, by definition, an error or an inaccuracy in the IQ test. Such errors are common in the outer wings of the Gauss curve, because there the tests are particularly difficult to calibrate. They are also meaningless, because almost all conceivable direct measures of brain performance (for example, the already mentioned raw IQ test scores, like the number of correctly answered questions) would not be anything like Gauss-distributed anyway. In fact, their distribution depends mostly on the arbitrary test design. In other words, you could design an intelligence test to yield almost any desired distribution.
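This last point is easy to demonstrate. The following Python sketch takes deliberately skewed raw scores, replaces each score by its rank, and projects the ranks onto the IQ scale; the resulting IQs come out with a mean near 100 and a standard deviation near 15 no matter what the raw distribution looked like:

    import random
    from statistics import NormalDist, mean, stdev

    IQ_SCALE = NormalDist(mu=100, sigma=15)

    # Deliberately skewed "raw scores" -- nothing like a bell curve.
    raw_scores = [random.expovariate(1.0) for _ in range(10_000)]

    # Each person's IQ depends only on their rank, not on their raw score.
    n = len(raw_scores)
    iq_of = {score: IQ_SCALE.inv_cdf((rank + 0.5) / n)
             for rank, score in enumerate(sorted(raw_scores))}
    iqs = [iq_of[s] for s in raw_scores]

    print(mean(iqs), stdev(iqs))  # close to 100 and 15, by construction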


Links

Intelligence: Knowns and Unknowns. Report of a task force of the American Psychological Association (Neisser et al., American Psychologist, 1996).


