Monthly Archives: October 2008

Mundane Magic

Followup to: Joy in the Merely Real, Joy in Discovery, If You Demand Magic, Magic Won’t Help

As you may recall from some months earlier, I think that part of the rationalist ethos is binding yourself emotionally to an absolutely lawful reductionistic universe – a universe containing no ontologically basic mental things such as souls or magic – and pouring all your hope and all your care into that merely real universe and its possibilities, without disappointment.

There’s an old trick for combating dukkha where you make a list of things you’re grateful for, like a roof over your head.

So why not make a list of abilities you have that would be amazingly cool if they were magic, or if only a few chosen individuals had them?

For example, suppose that instead of one eye, you possessed a magical second eye embedded in your forehead.  And this second eye enabled you to see into the third dimension – so that you could somehow tell how far away things were – where an ordinary eye would see only a two-dimensional shadow of the true world.  Only the possessors of this ability can accurately aim the legendary distance-weapons that kill at ranges far beyond a sword, or use to their fullest potential the shells of ultrafast machinery called "cars".

"Binocular vision" would be too light a term for this ability.  We’ll only appreciate it once it has a properly impressive name, like Mystic Eyes of Depth Perception.

So here’s a list of some of my favorite magical powers:

Continue reading "Mundane Magic" »


FHI Emulation Roadmap Out

Sandberg and Bostrom of Oxford’s Future of Humanity Institute have just released a 130-page technical report, "Whole Brain Emulation Roadmap":

Whole brain emulation (WBE), the possible future one‐to‐one modelling of the function of the human brain, is academically interesting and important for several reasons: … The economic impact of copyable brains could be immense, and could have profound societal consequences. Even low probability events of such magnitude merit investigation. …   

In order to develop ideas about the feasibility of WBE, ground technology foresight and stimulate interdisciplinary exchange, the Future of Humanity Institute hosted a workshop on May 26 and 27, 2007, in Oxford. Invited experts from areas such as computational neuroscience, brain‐scanning technology, computing, and neurobiology presented their findings and discussed the possibilities, problems and milestones that would have to be reached before WBE becomes feasible. …

This document combines an earlier whitepaper that was circulated among workshop participants, and additions suggested by those participants before, during and after the workshop. It aims at providing a preliminary roadmap for WBE, sketching out key technologies that would need to be developed or refined, and identifying key problems or uncertainties.

This report is a major milestone, a detailed summary of the state of the art on what I think is the most likely route to artificial intelligence, and the most likely cause of the next singularity. 


Porn vs Romance Novels

From an ’04 book chapter by Cathy Salmon on porn vs. romance novels:

There is such a thing as a pornography consumed exclusively by women .. it is the romance novel.  Romance novels account for 40% of mass market paperback sales in the United States …. The realm of the romance novel, which might be called "romantopia," is a utopian erotic female counterfantasy to pornotopia.   Just as porn actresses exhibit a suspiciously male-like sexuality, romances are exercises in the imaginative transformation of masculinity to conform with female standards.  …

The public debate over pornography has been going on for years …. [and] has covered everything from the treatment of women within the industry, to the image of women it presents and the impact of that image on men in the general population as well as the effects on women in the general population. … [Studies show] men who viewed sexually explicit films did not have negative attitudes toward women’s rights, nor were they more accepting of marital or date rape.  … [Regarding] the incidence of rape in several societies … increased availability was not associated with increased reports of rape. …

On a personal level, women often express concern over a partner’s regular purchasing of Playboy or watching pornographic videos.  In particular there is a verbalized concern that these things will affect their relationships.  … [And in fact] males that viewed images of attractive models reported being less committed to their partner after the viewing.  … Playboy centerfolds … got the same results.  … Modern media .. perhaps giving men an unrealistic view of how many attractive available women are out there.

If women complain porn hurts relationships by giving men unrealistic expectations, why don’t men complain romance novels hurt relationships by giving women unrealistic expectations?  Why so much more effort to regulate porn than romance novels?  Is it just that men complain less overall?  HT to Fortune Elkins.

Added 2 Nov: Robert Wiblin found this Atlantic quote:

in a 2006 study, the Clemson economist Todd Kendall found that a 10 percent increase in Internet access is associated with a 7 percent decline in reported rapes.



Intelligence in Economics

Followup to: Economic Definition of Intelligence?

After I challenged Robin to show how economic concepts can be useful in defining or measuring intelligence, Robin responded by – as I interpret it – challenging me to show why a generalized concept of "intelligence" is any use in economics.

Well, I’m not an economist (as you may have noticed) but I’ll try to respond as best I can.

My primary view of the world tends to be through the lens of AI.  If I talk about economics, I’m going to try to subsume it into notions like expected utility maximization (I manufacture lots of copies of something that I can use to achieve my goals) or information theory (if you manufacture lots of copies of something, my probability of seeing a copy goes up).  This subsumption isn’t meant to be some kind of challenge for academic supremacy – it’s just what happens if you ask an AI guy an econ question.

So first, let me describe what I see when I look at economics:

I see a special case of game theory in which some interactions are highly regular and repeatable:  You can take 3 units of steel and 1 unit of labor and make 1 truck that will transport 5 units of grain between Chicago and Manchester once per week, and agents can potentially do this over and over again.  If the numbers aren’t constant, they’re at least regular – there’s diminishing marginal utility, or supply/demand curves, rather than rolling random dice every time.  Imagine economics if no two elements of reality were fungible – you’d just have a huge incompressible problem in non-zero-sum game theory.
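That kind of fixed, repeatable recipe is easy to make concrete.  A toy sketch, using nothing but the steel/labor/truck numbers from the paragraph above (the function name and the leftover-accounting are my own illustration):

```python
def make_trucks(steel, labor):
    """A regular, repeatable production step: 3 units of steel plus
    1 unit of labor yield 1 truck, over and over again.

    Returns (trucks built, leftover steel, leftover labor)."""
    # Production is limited by whichever input runs out first.
    trucks = min(steel // 3, labor // 1)
    return trucks, steel - 3 * trucks, labor - trucks

print(make_trucks(10, 2))  # → (2, 4, 0): labor is the binding constraint
```

The point of the regularity is exactly that such a function exists at all: the same inputs yield the same outputs on every run, which is what makes the problem compressible rather than an incompressible game-theoretic mess.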

This may be, for example, why we don’t think of scientists writing papers that build on the work of other scientists in terms of an economy of science papers – if you turn an economist loose on science, they may measure scientist salaries paid in fungible dollars, or try to see whether scientists trade countable citations with each other.  But it’s much less likely to occur to them to analyze the way that units of scientific knowledge are produced from previous units plus scientific labor.  Where information is concerned, two identical copies of a file are the same information as one file.  So every unit of knowledge is unique, non-fungible, and so is each act of production.  There isn’t even a common currency that measures how much a given paper contributes to human knowledge.  (I don’t know what economists don’t know, so do correct me if this is actually extensively studied.)

Since "intelligence" deals with an informational domain, building a bridge from it to economics isn’t trivial – but where do factories come from, anyway?  Why do humans get a higher return on capital than chimpanzees?

Continue reading "Intelligence in Economics" »


Does Intelligence Float?

Einstein once said that a theory should be as simple as possible, but no simpler.  Similarly I recently remarked one’s actions should be as noble as possible, but no nobler.  Implicit in these statements are constraints: that a theory should be supported by evidence, and that actions should be feasible.  Sure you can find simpler theories that conflict strongly with evidence, or actions that look nobler if you ignore important real world constraints.  But that way leads to ruin. 

Similarly I’d say one should reason only as abstractly as possible, with the implicit constraint being that one should know what one is talking about.  I often complain about people who have little tolerance or ability to reason abstractly.  For example, doctors tend to be great at remembering details of similar cases but lousy at abstract reasoning.  But honestly I get equally bothered by folks who trade too easily in "floating abstractions," i.e., concepts whose meaning is prohibitively hard to infer from usage, such as when most usage refers to other floating abstractions.

Continue reading "Does Intelligence Float?" »


Economic Definition of Intelligence?

Followup to: Efficient Cross-Domain Optimization

Shane Legg once produced a catalogue of 71 definitions of intelligence.  Looking it over, you’ll find that the 18 definitions in dictionaries and the 35 definitions of psychologists are mere black boxes containing human parts.

However, among the 18 definitions from AI researchers, you can find such notions as

"Intelligence measures an agent’s ability to achieve goals in a wide range of environments" (Legg and Hutter)


"Intelligence is the ability to optimally use limited resources – including time – to achieve goals" (Kurzweil)

or even

"Intelligence is the power to rapidly find an adequate solution in what appears a priori (to observers) to be an immense search space" (Lenat and Feigenbaum)

which is about as close as you can get to my own notion of "efficient cross-domain optimization" without actually measuring optimization power in bits.

But Robin Hanson, whose AI background we’re going to ignore for a moment in favor of his better-known identity as an economist, at once said:

"I think what you want is to think in terms of a production function, which describes a system’s output on a particular task as a function of its various inputs and features."

Economists spend a fair amount of their time measuring things like productivity and efficiency.  Might they have something to say about how to measure intelligence in generalized cognitive systems?
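For readers unfamiliar with the term, the standard textbook example of a production function is the Cobb-Douglas form, in which output depends multiplicatively on inputs.  The sketch below is illustrative only — the parameter names are the conventional ones from economics, not anything Robin specified:

```python
def cobb_douglas(tfp, capital, labor, alpha=0.3, beta=0.7):
    """Cobb-Douglas production function: output as a function of inputs.

    'tfp' (total factor productivity) is the system-wide multiplier --
    the slot where one might be tempted to read off 'intelligence'."""
    return tfp * capital**alpha * labor**beta

# Doubling total factor productivity doubles output at fixed inputs;
# with alpha + beta = 1, doubling both inputs also doubles output
# (constant returns to scale).
print(cobb_douglas(2.0, 100, 100))  # ≈ 200.0
```

On this view, measuring a cognitive system's "intelligence" would look like estimating its productivity parameter from observed inputs and outputs across tasks, rather than defining intelligence in the abstract.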

This is a real question, open to all economists.  So I’m going to quickly go over some of the criteria-of-a-good-definition that stand behind my own proffered suggestion on intelligence, and what I see as the important challenges to a productivity-based view.  It seems to me that this is an important sub-issue of Robin’s and my persistent disagreement about the Singularity.

Continue reading "Economic Definition of Intelligence?" »



In the art world something is "edgy" if it might well shock ordinary folks, but of course not in-the-know folks.  The idea seems to be that ordinary folks are shocked too easily by things that should not really be shocking.

The opposite concept, which I’ll call "anti-edgy", is of something that does not shock ordinary folks, but should.  In-the-know folks are shocked, but most others are not.  Why does the world of art and fashion emphasize the edgy so much more than the anti-edgy? 


Efficient Cross-Domain Optimization

Previously in series: Measuring Optimization Power

Is Deep Blue "intelligent"?  It was powerful enough at optimizing chess boards to defeat Kasparov, perhaps the most skilled chess player humanity has ever fielded.

A bee builds hives, and a beaver builds dams; but a bee doesn’t build dams and a beaver doesn’t build hives.  A human, watching, thinks, "Oh, I see how to do it" and goes on to build a dam using a honeycomb structure for extra strength.

Deep Blue, like the bee and the beaver, never ventured outside the narrow domain that it itself was optimized over.

There are no-free-lunch theorems showing that you can’t have a truly general intelligence that optimizes in all possible universes (the vast majority of which are maximum-entropy heat baths).  And even practically speaking, human beings are better at throwing spears than, say, writing computer programs.

But humans are much more cross-domain than bees, beavers, or Deep Blue.  We might even conceivably be able to comprehend the halting behavior of every Turing machine up to 10 states, though I doubt it goes much higher than that.

Every mind operates in some domain, but the domain that humans operate in isn’t "the savanna" but something more like "not too complicated processes in low-entropy lawful universes".  We learn whole new domains by observation, in the same way that a beaver might learn to chew a different kind of wood.  If I could write out your prior, I could describe more exactly the universes in which you operate.

Continue reading "Efficient Cross-Domain Optimization" »


Wanting To Want

What we actually want often diverges from what we wish we wanted.  One of the places where this conflict is clearest is in the features of others that attract us.  We are attracted to many features, including features of bodies, minds, and social networks.  We clearly put a large weight on body features, but we like to think we place more weight on other features, such as mental ones.  When we see how much we actually care about bodies we are disturbed, and perceive a conflict between what we want and what we want to want.  So why is there a conflict anyway – why are we built not to want to want what we want?

Consider that those with a better ability to distinguish a feature would naturally put more weight on that feature when choosing.  If there is a pile of fruit and I have a short time to grab some fruit before others take them all, then if I can’t see colors well I’ll put less emphasis on colors in my choice.  After all, those who can see colors better will be better able to choose the ones with good colors.  Similarly, the better I am at distinguishing smart people, the more emphasis I’d naturally place on smarts when choosing people.

It is pretty easy for most people to tell how pretty someone is, but it is harder to tell how smart they are.  Having a high ability to tell how smart someone is says good things about you – in general it says you are pretty smart too.  And thus the fact that you put a high weight on smarts also says good things about you.  Since you have an interest in being thought well of, you also have an interest in being thought of as someone who puts a high weight on smarts. 

And serving your interests, evolution may well have arranged your mind to fool others into thinking that you put more weight on smarts than you actually do.  And this I suggest is the usual source of the conflict between what we want, and what we want to want.  We want what is useful to us, but we want to want what makes us look good to others.  We often fool ourselves into thinking that what we want to want is what we do want, and thereby also often fool others into thinking well of us.   

Note that in the case considered here, of looks vs. smarts, it is not at all obvious that what we want to want is better morally than what we actually want.  From a conversation with Katja Grace on this, her birthday.


Measuring Optimization Power

Previously in series: Aiming at the Target

Yesterday I spoke of how "When I think you’re a powerful intelligence, and I think I know something about your preferences, then I’ll predict that you’ll steer reality into regions that are higher in your preference ordering."

You can quantify this, at least in theory.  Supposing you have (A) the agent or optimization process’s preference ordering, and (B) a measure of the space of outcomes – which, for discrete outcomes in a finite space of possibilities, could just consist of counting them – then you can quantify how small a target is being hit, within how large a greater region.

Then we count the total number of states with equal or greater rank in the preference ordering to the outcome achieved, or integrate over the measure of states with equal or greater rank.  Dividing this by the total size of the space gives you the relative smallness of the target – did you hit an outcome that was one in a million?  One in a trillion?

Actually, most optimization processes produce "surprises" that are exponentially more improbable than this – you’d need to try far more than a trillion random reorderings of the letters in a book, to produce a play of quality equalling or exceeding Shakespeare.  So we take the log base two of the reciprocal of the improbability, and that gives us optimization power in bits.

This figure – roughly, the improbability of an "equally preferred" outcome being produced by a random selection from the space (or measure on the space) – forms the foundation of my Bayesian view of intelligence, or to be precise, optimization power.  It has many subtleties:
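The counting procedure described above is simple enough to sketch in code.  Here is a minimal, illustrative Python version for a discrete, finite outcome space (the function name and the convention that higher rank means more preferred are my own, not from the post):

```python
import math

def optimization_power_bits(outcome_rank, all_ranks):
    """Optimization power, in bits, of hitting an outcome with the
    given preference rank, in a discrete finite space of outcomes.

    outcome_rank: rank of the achieved outcome (higher = more preferred)
    all_ranks:    preference ranks of every outcome in the space
    """
    # Count the states ranked equal to or better than the achieved outcome.
    at_least_as_good = sum(1 for r in all_ranks if r >= outcome_rank)
    # Relative smallness of the target: probability that a random
    # selection from the space does at least this well.
    p = at_least_as_good / len(all_ranks)
    # Log base two of the reciprocal of that improbability gives bits.
    return math.log2(1 / p)

# A space of 1024 outcomes ranked 0..1023: hitting the single best
# outcome is a 1-in-1024 event, i.e. 10 bits of optimization power.
ranks = list(range(1024))
print(optimization_power_bits(1023, ranks))  # → 10.0
```

Hitting the median outcome of the same space scores just 1 bit, matching the intuition that doing no better than chance-in-the-top-half is barely optimization at all.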

Continue reading "Measuring Optimization Power" »
