As I type this, I’m finishing up the first ninety or so pages of what would appear to be the most controversial social science book of all time. More on that book and its 20th anniversary later – for now I want to go over a topic of the book that sort of bugs me every time I reread it.
I’ve read a number of attempts at giving a workable definition for what is meant by the phrase “intelligence.” But as with attempts that people make at defining government, or contrasting it with the market economy, I just can’t seem to find anything free enough of semantic drawbacks to fully endorse. Here’s a sample of what’s out there:
“A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—‘catching on,’ ‘making sense’ of things, or ‘figuring out’ what to do.” – from “Mainstream Science on Intelligence”
“The aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment.” –David Wechsler
“The ability to deal with cognitive complexity.” –Linda Gottfredson
“Goal-directed adaptive behavior.” –Sternberg & Salter
When discussions about intelligence or IQ come up, one of the most frequent charges leveled against either concept is that you can’t measure what you can’t define. That’s actually not a bad formal argument; the remaining implied question is whether a workable definition of intelligence actually exists. Is intelligence (as well as IQ) really just a subjectively imposed concept like beauty or emotion? Can we dismiss it as just another “social construct?”
For something that has such a tremendous amount of predictive power, the answer is definitely no. If IQ is “socially constructed,” then so are time, temperature, weight, volume, and velocity by the same standard. That goes double for something established to have a very real genetic component that can be measured in a lab environment. Still, with everything from Howard Gardner’s “multiple intelligences” to Robert Sternberg’s “triarchic theory” of intelligence, all the way to “Spearman’s g-factor,” it’s hard not to get confused about what’s being debated.
So without further ado, here’s how I would suggest defining intelligence – practically speaking – in a way that can both be measured objectively and also have predictive power in everyday societal contexts:
“Intelligence is the ability to comprehend novel circumstances to achieve objective mental goals. To have higher intelligence is to be able to work more quickly with a greater variety or volume of information to finish mental tasks that have less room for error.”
Sounds pretty straightforward, but elaborating on the three key terms helps to clarify things a bit:
“Ability” can be defined by one or more of the following: the volume, variety, or velocity at which you can operate. More mental loading at once, different types of mental tasks (math, verbal, and spatial, for instance), and how quickly you can solve problems are all examples of ability that can be quantified. The picture below should give an idea of what volume of information looks like in the context of non-verbal intelligence. The problem on the right clearly carries a LOT more cognitive loading, and it was used as part of a challenge for English students several years ago.
Apparently China lives up to the stereotype when it comes to non-verbal ability in the form of math and spatial tasks.
“Novel Circumstances/Information” would be anything you have to apply mental problem-solving skills to, rather than anything you can handle through mere familiarity from past experience. Trivia questions aren’t in this category; they require little mental effort and instead depend on having already been exposed to the correct answer at some previous point. On the other hand, suppose you had to solve a Raven’s Progressive Matrices problem like this one:
This is an easy sample with only a few consecutive variables to sort through.
Unless you’ve already seen this exact problem on Google Images somewhere (as I obviously have), your ability to pick the missing piece is entirely contingent on how well you can sort out three variables: the pattern of background lines from the top to the middle to the bottom row, whether those lines are straight/diagonal/curved, and finally the left/right pattern of the bold shapes. If you’ve memorized the answer to this exact problem (after correctly solving it, of course), then it ceases to be a novel set of circumstances. The key with IQ tests is to have a high enough volume, variety, and complexity of problems that they require as much mental loading as possible.
“Objective Mental Goals” are goals intrinsic to the task at hand, not something you interpret or impose. An example of the latter would be judging whether a painting is a decent piece of art – clearly that’s open to interpretation. A simple math problem like 2 + 2, however, has only one correct answer. It’s an objective task where the answer stems from the problem itself rather than existing only as something you subjectively impose. A problem can be objective even if it has more than one correct answer; for example: “Give two numbers which when multiplied equal 64.” Possible answers (using whole numbers) include 8 x 8, 16 x 4, 32 x 2, and 64 x 1. Again, what determines whether an answer is correct must stem from the problem itself and not be a matter of personal taste. Moreover, we’re talking about tasks that are mental in nature and don’t require much in the way of physical performance. That’s not to say that intelligence isn’t used to achieve physical tasks, but rather that the subject of IQ tests is mental ability.
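The “multiply to 64” example makes the point nicely because correctness is checkable by the problem itself. A short sketch (the function name is my own, purely for illustration) enumerates every whole-number answer:

```python
def factor_pairs(n):
    """Return all pairs (a, b) of positive whole numbers with a * b == n and a <= b."""
    pairs = []
    # Only need to check candidates up to the square root of n;
    # each divisor a found there pairs with n // a.
    for a in range(1, int(n ** 0.5) + 1):
        if n % a == 0:
            pairs.append((a, n // a))
    return pairs

print(factor_pairs(64))  # [(1, 64), (2, 32), (4, 16), (8, 8)]
```

Notice that taste never enters into it: an answer is correct exactly when the product check passes, which is what makes the goal objective rather than interpretive.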
So now comes the big question: do IQ tests actually have any kind of predictive validity that makes scores on such tests useful information? Apparently the answer is a resounding yes. Nearly a century of data indicates that properly administered tests do tell us something of value for matters of employment. I could go on about education as well, but I suspect most readers agree that much of schooling these days has limited overlap with the real world of work. If anything, consider how effective the Armed Forces Qualification Test has been at helping the military pick better recruits – and what happened when they tried to recruit people who scored poorly on it.
With all this in mind, let’s consider why IQ tests matter so much and why they keep growing in importance. As technology becomes more unavoidable, we are increasingly living in an information-driven economy. Successful behavior is therefore becoming less open to interpretation, now that so much of what we rely on is mathematical/mechanical, has less room for error, and is far more sophisticated than mere agriculture.
Although an entire post could be devoted to whether IQ tests are “culturally biased” (go ahead, define culture for me), the proper response to that assertion is simpler: demonstrate that certain problem types on widely-used tests are “cultural” in nature (whatever that means), then create a test with better predictive validity.
IQ tests continue to matter more with each passing day now that the volume/variety/velocity of novel information that must be sorted to achieve objective mental goals is on the rise.
It doesn’t take a genius to figure that out, but it does take a humanities major to deny it altogether.