Does Intelligence Float?

Einstein once said that a theory should be as simple as possible, but no simpler.  Similarly, I recently remarked that one’s actions should be as noble as possible, but no nobler.  Implicit in these statements are constraints: that a theory should be supported by evidence, and that actions should be feasible.  Sure, you can find simpler theories that conflict strongly with the evidence, or actions that look nobler if you ignore important real-world constraints.  But that way leads to ruin.

Similarly, I’d say one should reason only as abstractly as possible, with the implicit constraint being that one should know what one is talking about.  I often complain about people who have little tolerance for, or ability at, abstract reasoning.  For example, doctors tend to be great at remembering details of similar cases but lousy at abstract reasoning.  But honestly I get equally bothered by folks who trade too easily in "floating abstractions," i.e., concepts whose meaning is prohibitively hard to infer from usage, such as when most usage refers to other floating abstractions.

For example, most uses I’ve seen of "proletariat" or "exploitation" seem like floating abstractions to me, though within particular communities these concepts may have a clearer meaning.  Now of course most any well-defined abstraction might seem to float to those who haven’t absorbed the right expert explanation.  But if there is no clear meaning, even to experts, then the concept basically floats.

Now there are communities who say their concepts acquire clear enough meanings after one has absorbed decades of readings, even though experts can’t really summarize those meanings any better than to tell you to read for decades.  But even if they are right, that way also seems to me to lead to ruin.  The intellectual progress I see comes mostly from the modularity that becomes possible with clearer meanings.  But that is a physicist/economist/compsci guy speaking – you may hear differently from others.

Eliezer has just raised the issue of how to define "intelligence", a concept he clearly wants to apply to a very wide range of possible systems.  He wants a quantitative concept that is "not parochial to humans," applies to systems with very "different utility functions," and that summarizes the system’s performance over a broad "not … narrow problem domain."  My main response is to note that this may just not be possible.  I have no objection to looking, but it is not obvious that there is any such useful broadly-applicable "intelligence" concept. 

We agree "intelligence" is clearly meaningful for humans today.  When we give problems to isolated, well-fed, sane humans, a single dominant factor stands out in explaining variation, and that same factor also helps to explain variation in human success in the wider world.  But it is far from the only factor that explains variation in human success.  For that we tend to think in terms of production functions, where IQ is just one relevant factor.
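
To make that concrete, here is one stylized way to write down such a production function; the inputs and coefficients are purely illustrative, not anything from the literature:

```latex
% A stylized log-linear production function for individual success,
% with IQ as just one input among several (all terms illustrative):
\log(\mathrm{success}_i) = \alpha
  + \beta_{\mathrm{IQ}} \,\mathrm{IQ}_i
  + \beta_{E} \,\mathrm{education}_i
  + \beta_{S} \,\mathrm{social\ ties}_i
  + \varepsilon_i
```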

In the computer world we clearly have a useful distinction between hardware and software, and we have many useful concepts for distinguishing kinds of software, but "intelligence" is not really among our best concepts there.  I’d say it is just an open question how much more widely "intelligence" can be meaningfully applied.

If your goal is to predict our future over the next century or so, then the question is which abstractions are most useful for reasoning about the long term evolution of systems like our world today.  The obvious candidates here would be the abstractions that biologists find useful for reasoning about the long term evolution of ecosystems, or, more plausibly, the abstractions that economists find useful for reasoning about the long term evolution of economies.

"Intelligence" has so far not been central to these concept sets, but of course these frameworks remain open to improvement.  So the question is: can one formulate a clearer more broadly applicable concept of intelligence, and then use it to improve the frameworks we use to think about the long term evolution of economies or ecologies?  This may well be possible, but it has surely not yet been demonstrated.

  • http://www.iSteve.blogspot.com Steve Sailer

    You should read psychometricians. They have spent decades studying this question (using real data!). They have come to the consensus that the old joke is right: intelligence is what intelligence tests measure. That sounds like a put-down of the concept of IQ, but it’s actually a profound compliment. It means that whatever verbal shorthand description of intelligence you come up with will be pretty good, yet inadequate. And, yet, that doesn’t really matter because virtually all cognitive skills are positively correlated.
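
    (A minimal simulation of the one-factor “g” model behind that consensus; the loadings and sample size are made up for illustration:)

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000                       # simulated test-takers
    g = rng.normal(size=n)           # latent general factor, "g"
    loadings = [0.8, 0.7, 0.6, 0.5]  # hypothetical loadings for four tests

    # Each observed score = loading * g + independent noise,
    # scaled so every score has unit variance.
    scores = np.column_stack(
        [lam * g + np.sqrt(1 - lam**2) * rng.normal(size=n) for lam in loadings]
    )

    # All off-diagonal correlations come out positive: the "positive manifold".
    print(np.corrcoef(scores, rowvar=False).round(2))
    ```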

  • Snowdon

    I think he’s trying to be too broad with his definition, maybe trying to lump intelligence, rationality, creativity, and efficiency all into one concept – which is not at all a useful definition. By the way, none of these things necessarily come in conjunction – except possibly creativity and efficiency.

  • http://dfranke.us Daniel Franke

    Now there are communities who say their concepts acquire clear enough meanings after one has absorbed decades of readings, even though experts can’t really summarize those meanings any better than to tell you to read for decades. But even if they are right, that way also seems to me to lead to ruin.

    *cough* postmodernists *cough*

  • Tim Tyler

    Virtually all cognitive skills are positively correlated – in humans.

    We are talking about testing machines here, where that may not be true.

    Will the same selection pressures that produced one “general intelligence” in humans also act on machines? Probably not – if we retain specialisation and division of labour. But probably yes – if we have one big superintelligence.

  • Stuart Armstrong

    Eliezer’s definition of intelligence is like Bayesian reasoning itself: a general overall ideal that may or may not be practical for specific problems.

    It doesn’t capture much about division of labour, nor about the fact that real-world “domains of expertise” are distributed according to a specific pattern. (See Eliezer’s example of Deep Blue: Kasparov could win by just kicking the computer, but Deep Blue plus a dumb human guard with a machine gun will defeat most of Kasparov’s “creative” strategies – and Eliezer’s definition gives no help towards knowing this fact.) So I’d say “no” to using Eliezer’s definition to answer:

    can one formulate a clearer, more broadly applicable concept of intelligence, and then use it to improve the frameworks we use to think about the long term evolution of economies or ecologies?

    …at least up until the moment when non-human AIs become important. But your question seems a bit odd to me; entropy, for instance, does not help us think about the evolution of economies and ecologies, but no one argues it’s a useless concept. Lots of scientific results are useful without meeting those strict criteria.

  • kræmer

    Daniel Franke:

    *cough* mathematicians *cough*

  • Tim Tyler

    entropy, for instance, does not help us think about the evolution of economies and ecologies

    A rather unfortunate example, if – like me – you think that entropy maximisation is the driving force behind all biological systems.

  • http://hanson.gmu.edu Robin Hanson

    Steve, for humans the key question is why such a single powerful factor explains most variation. It might not be anything deep about thinking at all, but just something about a single big factor that makes human brain chemistry vary.

    Stuart, I just happen to know Eliezer wants to use a concept like intelligence to think about our long term future.

  • Will Pearson

    entropy, for instance, does not help us think about the evolution of economies and ecologies

    As well as Tim Tyler’s point, there is the widespread idea of trophic levels in ecology. The reasoning behind this is implicitly predicated on thermodynamics.

  • http://drchip.wordpress.com/ retired urologist

    RH says: doctors tend to be great at remembering details of similar cases but lousy at abstract reasoning.

    There are 301,270 medical doctors in the US. How did you come to this “unbiased” conclusion about them? Do people of a certain intellectual level who become doctors lose their abstract reasoning abilities, or does the medical profession attract a set of people with that same intellectual level who lack abstract reasoning skills to begin with? Are the secrets to this, and other, personal intellectual traits of doctors to be found in the study of medical economics? How does this relate to your quest for “overcoming bias”?

  • http://hanson.gmu.edu Robin Hanson

    retired, the fact that you think the exact number of docs is relevant to whether docs tend to be good at detail but bad at abstract reasoning illustrates my point.

  • jb

    Personally, I see intelligence divided into three parts. Given a problem X, we have:

    a) a body of knowledge (the domain of facts and concepts)
    b) a creative ability (to apply various concepts in the domain to see if they solve the problem)
    c) a logic/extrapolation function (to test the application of the concept to the problem)

    Generally, given X, you start with (A), apply (B) to come up with Candidate Solution Z, test with (C). If Z works, you go with it. If Z almost works, you apply (B) to it for a while to see if you can tweak it, and then reapply (C). If you can’t get Z to work, you go back to (A) and attempt to find another Candidate Solution, etc.
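
    (A minimal code sketch of the loop just described; all names are hypothetical:)

    ```python
    # Sketch of the three-part loop: knowledge (a), creativity (b), testing (c).
    def solve(problem, knowledge, generate, test, max_tries=100):
        """generate -- (b): proposes a candidate solution drawn from (a).
        test     -- (c): checks whether a candidate solves the problem."""
        for _ in range(max_tries):
            candidate = generate(problem, knowledge)  # apply (b) to (a)
            if test(problem, candidate):              # check with (c)
                return candidate                      # Z works: go with it
        return None  # no candidate passed; go back to (a) or give up
    ```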

    Ecologies don’t have (A) – there’s no ability to draw new genes out of other pools. They do have (B), in the sense of random mutation, and (C), in the sense of successful mating and childbearing. I don’t see how we’d ever be able to understand the long term patterns of ecologies, since the (B) function is essentially random, and only concerned with genetic fitness over lifespan-length intervals. In other words, you can’t simulate an ecology over long periods of time because you don’t know what set of random mutations will occur. All you could do is simulate a bunch of possible outcomes, but they’d all be essentially equally likely.

    Markets and economies have all three, and, as an amalgam, have a far larger (A) and a far better (B) than any given human. However, they have a lousy (C), in the sense that once a candidate solution is applied, it takes significant Real Time to determine whether the candidate actually works.

    Economies would be far more efficient if they could run (C) in chip time, instead of Real Time. That is to say, if they could build a reasonable simulation of the world with all the necessary monetary flows and models accounted for, and let it run for a while with Candidate Solution Z, they could find out what the likely long term results would be in seconds, instead of years.

    In some sense, it is possible that the world we are living in right now is simply a case study simulation of “what happens if we drop lending standards to X and overall credit flows at a maximum of rate Y”. For all we know, the executor of the simulation is looking in horror at our collapsing economy and saying to himself “Yeah, we probably need to make sure we keep our lending standards high”. (although he’s probably running the simulation several million times, so we’re just a series of data points in some final graph)

    Deep Blue runs a great simulation, over a very, very small ‘universe’. Computational power allows that simulation to expand and include more pieces, more squares, and more possible ‘moves’. Over time, competition will become the domain of 1) ‘who can build the more accurate simulation’ 2) ‘who can come up with more interesting questions/problems to solve’ and 3) ‘who can get the simulations to run over shorter periods of time’.

    And of course, I’m applying the ‘use a simulation’ solution to this problem, because I can’t come up with anything better. There’s no reason to believe that a better solution isn’t out there, I’m either just too ignorant to have it in my domain, too uncreative to see how it applies, or too logically inept to apply it properly.

    I’m sure I’m just bumbling through your uber-elite abstract reasoning china shop, so I’ll shut up now 🙂 However it’s been an interesting exercise for me, so thanks for your question.

  • Cyan

    retired, the fact that you think the exact number of docs is relevant to whether docs tend to be good at detail but bad at abstract reasoning illustrates my point.

    Glib and unfair. Not worthy.

  • Will Pearson

    Also to get back to the original question, I would plug increasing intelligence into economic models as increasing the rate of efficiency improvements.

    This can of course lead to radically different societies, as things that were too inefficient to be practical become plausible (fusion/space elevators currently have an efficiency of 0).
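
    (A toy sketch of that move, with made-up parameters: treat “more intelligence” simply as a higher growth rate of efficiency in an otherwise unchanged model.)

    ```python
    # Toy model: efficiency A(t) compounds at rate g per year;
    # "increasing intelligence" is modeled as a larger g. Numbers are made up.
    def efficiency(years, a0=1.0, g=0.02):
        return a0 * (1 + g) ** years

    print(round(efficiency(50, g=0.02), 2))  # baseline economy: ~2.69
    print(round(efficiency(50, g=0.05), 2))  # "smarter" economy: ~11.47
    ```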

  • Roga

    I second Cyan. Your statement in response to retired is retarded. It only points out your parochial perspective and unwillingness to revise a biased and generalized belief of yours. (and no, I am not a doctor)

  • Roga

    I second Cyan. [dangling italics tag deleted along with enclosing flame — EY]

  • PK

    Stop flaming please.

  • Cyan

    Closing that dangling tag.

  • http://www.physics.ucsb.edu/People/person.php3?userid=mike Mike

    Attempting once again to close that dangling tag.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Closed Roga’s tag and, as long as I was there, deleted Roga’s flame. I wouldn’t ordinarily do that on one of Robin’s posts but it seemed like a really ominous sign.

    Please note that you can’t close tags except by editing the original comment. Typepad will let you do a dangling open tag in a single comment, but not a dangling close tag, apparently.

  • Jeremy

    Is this the same Robin who patiently explains to us that “anecdotal evidence” is an oxymoron?

    Please retract your comment about the abstract reasoning abilities of medical doctors or provide some evidence for it.

  • http://occludedsun.wordpress.com Caledonian

    And, yet, that doesn’t really matter because virtually all cognitive skills are positively correlated.

    Mr. Sailer, that is the most incorrect thing I have ever seen you write.

    How did you come to this “unbiased” conclusion about them?

    retired urologist, you’re reacting defensively to a perceived slight. No informed person would dispute the original claim. Doctors are in fact extraordinarily bad at performing abstract operations when the subject matter is medicine. The studies showing that doctors do not know how to correctly evaluate the probability of illness given a positive test result demonstrate that beyond any reasonable doubt.

    Part of the problem is that doctors are trained not to think about what they’re doing and why they’re doing it. You cannot try to do so and make it through medical school; sipping from the firehose isn’t an option.
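
    (A worked instance of the test-evaluation problem referenced above, using the classic illustrative mammography numbers; the figures are for illustration, not from any specific cited study:)

    ```python
    # P(disease | positive test) via Bayes' rule, with the classic
    # illustrative numbers: 1% prevalence, 80% sensitivity, 9.6% false positives.
    prevalence, sensitivity, false_pos = 0.01, 0.80, 0.096

    p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
    posterior = sensitivity * prevalence / p_positive
    print(f"P(disease | positive) = {posterior:.1%}")  # ~7.8%, not ~80%
    ```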

  • http://occludedsun.wordpress.com Caledonian

    Anyone who objects to a statement about the statistical properties of a group by talking about the individual members of that group is guilty of fallacious argument.

    Robin Hanson’s comment is neither glib nor unfair. It is in fact extraordinarily generous, given how profoundly foolish retired urologist is being. Cyan and those who agreed with him, you owe Hanson an apology. retired urologist, you owe all of us an apology.

  • http://drchip.wordpress.com/ retired urologist

    Caledonian: perhaps the joke’s on me. I failed to understand what Hanson meant by “abstract reasoning”. Your explanation makes it much clearer, referring to such studies as Yudkowsky quotes in his “Intuitive Explanation of Bayesian Reasoning”. Certainly, I have no argument with the concept that medical doctors are not (usually) the sharpest mathematicians. Yet there is a difference between being “trained not to think” and “not being trained to think”. The former does not occur in any medical school I know about, while the latter is widespread. And there are many types of abstract reasoning other than probabilities. Perhaps the term “abstract reasoning” in Hanson’s illustration is itself an example of what offends him so: a concept whose meaning is prohibitively hard to infer from usage.

    I withdraw my defensive posture.

  • http://occludedsun.wordpress.com Caledonian

    Yet there is a difference between being “trained not to think” and “not being trained to think”. The former does not occur in any medical school I know about, while the latter is widespread.

    Ha!

    Medical students are forced to memorize and regurgitate tremendous amounts of information. Then they are put in a situation in which they are 1) responsible for identifying, correctly diagnosing, and properly treating symptoms, 2) obligated to see a great many patients, and 3) legally responsible for any negative consequences resulting from diagnosis and treatment that does not match standard practice (as well as being legally vulnerable for offering nonstandard practice regardless of the outcome), while at the same time being unassailable for negative consequences that result from generally-accepted practice.

    Doctors thus have nothing to gain from questioning and challenging the status quo. People who value and enjoy such questioning generally do not become physicians — and when they do, they tend to go into research rather than treatment.

    Most doctors are highly-trained technicians. They are not experimenters, investigators, or even rationalists. They are, in actuality, extremely irrational. And a great deal of their function consists of authoritative pronouncement, convincing people that they know what is wrong and how to fix it. Actually knowing what, if anything, is wrong and how to fix it isn’t required.

    Which is why it took so long for medicine to become scientific, why so much of it still isn’t scientific, and why for a long time you’d have been better off with no treatment at all than going to a doctor. Even now, it’s not clear that doctors do more good than harm across all interventions.

    Withdrawing your counterattack is not sufficient. Expose your belly and say ‘uncle’.

  • Cyan

    Caledonian, Robin Hanson’s position could be correct in every particular and retired urologist’s position could be utterly false, and still Robin’s reply is glib and unfair. Anyone who supports a statement about the statistical properties of a group by talking about the individual members of that group is guilty of fallacious argument.

  • http://occludedsun.wordpress.com Caledonian

    Hanson said urologist’s behavior illustrated his point.

    Do you know what “illustrating a point” means, Cyan? Because that phrase has a generally-accepted meaning in English. A meaning that reduces your complaint to a non sequitur.

    You still owe Hanson an apology. Now you owe me one as well.

  • Jeremy

    I’m afraid you still haven’t provided any evidence for a lack of abstract reasoning ability. You have an argument that is mildly persuasive, but please refer to the specific study you are basing this on. It would help if the study defines and uses the term “abstract reasoning”. A study limited to the ability to calculate probabilities would be found wanting, unless you’ve also got a study showing a strong correlation between that ability and other useful measures of intelligence or abstract reasoning ability.

  • http://yudkowsky.net/ Eliezer Yudkowsky

    Um, Caledonian is pretty obviously trying to keep the flamewar going after Retired shut it down. Don’t fall for it, please.

  • http://silasx.blogspot.com Silas

    Sorry, I’m going to fall for it, because retired_urologist keeps proving Robin_Hanson right. First of all, when doctors fail to get the Bayesian inference problem right, the problem is *not* that they are bad “mathematicians” or that they weren’t taught how to do *that specific problem*. The problem is that they didn’t recognize the applicability of certain abstract principles to that particular problem. Which is — as Caledonian argued — a failure to think abstractly. A failure, like in Eliezer_Yudkowsky’s parable, to be able to say, “Hey, that counting-stones thing — it works for sheep too!”

    (In fact, it would be weaker evidence of abstract thought if you gave doctors that problem after specifically showing them how to format such problems.)

    So when retired_urologist looks at that evidence, and all he sees is “doctors being tested on something they weren’t taught”, that is in fact (and quite self-referentially) another case of inability to abstract.

    Robin_Hanson may have been rude to call him out like that, but I must confess he and Caledonian are correct :-/

  • Stuart Armstrong

    entropy, for instance, does not help us think about the evolution of economies and ecologies

    As well as Tim Tyler’s point, there is the widespread idea of trophic levels in ecology. The reasoning behind this is implicitly predicated on thermodynamics.

    My bad; I was trying to make a simple point that “entropy always goes up in a closed system” is not relevant to a non-closed system. My point overshot simplicity, and entered stupidity, and was wrong.

  • Johnicholas

    Um, does Marcus Hutter’s AIXI definition of intelligence float? http://www.hutter1.net/ai/iors.pdf

    It violates some of Eliezer Yudkowsky’s requirements, but I think it offers hope that a suitable formalization could be found that satisfies those requirements.
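
    (For reference, the Legg–Hutter “universal intelligence” measure from that line of work scores an agent by its simplicity-weighted expected performance across all computable environments:)

    ```latex
    % Legg-Hutter universal intelligence of agent \pi: the value V_\mu^\pi
    % it achieves in each computable environment \mu, weighted by the
    % environment's simplicity via Kolmogorov complexity K(\mu).
    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
    ```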

  • Jeremy

    Silas, I’m afraid only evidence will prove Robin right. You still have not shared any.

    It appears no one has any. I am not intending this to be a flamewar, but only challenging unsubstantiated assertions. Sorry if that is not appropriate for a blog called Overcoming Bias.

  • Douglas Knight

    Please note that you can’t close tags except by editing the original comment

    The precise statement is that it depends on the web browser. It does improve the display in Safari. I think there was a change between Firefox 2 and 3, so people have habits from 2 that are no longer useful.

  • http://occludedsun.wordpress.com Caledonian

    Silas, you’ve made my entire weekend. You have no idea what a pleasure it is for me to be defended rationally; the only thing better is to be attacked rationally, but that wouldn’t have been possible here.
