Monthly Archives: August 2008

Dreams of AI Design

Followup to: Anthropomorphic Optimism, Three Fallacies of Teleology

After spending a decade or two living inside a mind, you might think you knew a bit about how minds work, right?  That’s what quite a few AGI wannabes (people who think they’ve got what it takes to program an Artificial General Intelligence) seem to have concluded.  This, unfortunately, is wrong.

Artificial Intelligence is fundamentally about reducing the mental to the non-mental.

You might want to contemplate that sentence for a while.  It’s important.

Living inside a human mind doesn’t teach you the art of reductionism, because nearly all of the work is carried out beneath your sight, by the opaque black boxes of the brain.  So far beneath your sight that there is no introspective sense that the black box is there – no internal sensory event marking that the work has been delegated.

Did Aristotle realize, when he talked about the telos, the final cause of events, that he was delegating predictive labor to his brain’s complicated planning mechanisms – asking, "What would this object do, if it could make plans?"  I rather doubt it.  Aristotle thought the brain was an organ for cooling the blood – which he did think was important:  Humans, thanks to their larger brains, were more calm and contemplative.

So there’s an AI design for you!  We just need to cool down the computer a lot, so it will be more calm and contemplative, and won’t rush headlong into doing stupid things like modern computers.

Continue reading "Dreams of AI Design" »

Top Docs No Healthier

My two years as a RWJF Health Policy Scholar exposed me to enough data to make me a skeptic on the marginal aggregate health value of medicine.  But where the data is silent I try to give medicine the benefit of the doubt, such as by assuming that average values are higher than marginal values, and that top med school docs give more value than others.  So I am shocked to report that in a randomized trial of 72,000 hospital stays by 30,000 patients, patients of top med school docs were no healthier:

The school affiliated with Program A is the top school in the nation when ranked by the incoming students’ MCAT scores, and it is always near the top. In comparison, the lower-ranked program [B] that serves this VA hospital is near the median of medical schools. … [Added: other ways A beats B here.] Patients treated by the two teams have identical observable characteristics and have access to a single set of facilities and ancillary staff. …

Health outcomes are not related to the physician team assignment. … Program B is associated with … a 0.3 percentage-point reduction in 5-year mortality (or 0.6% of the mean).  … The confidence interval is [-0.0162, 0.0106]. …
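A back-of-the-envelope check on the quoted effect size (my arithmetic, not the paper's): if a 0.3 percentage-point reduction is 0.6% of the mean, the implied baseline 5-year mortality rate in this population is about 50%:

```python
# If 0.3 percentage points equals 0.6% of the mean 5-year mortality,
# the mean must be 0.003 / 0.006 = 0.5, i.e. roughly 50%.
effect_abs = 0.003   # 0.3 percentage points, as a proportion
effect_rel = 0.006   # "0.6% of the mean", as a proportion
implied_mean = effect_abs / effect_rel
print(implied_mean)  # 0.5
```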

Continue reading "Top Docs No Healthier" »

Three Fallacies of Teleology

Followup to: Anthropomorphic Optimism

Aristotle distinguished between four senses of the Greek word aition, which in English is translated as "cause", though Wikipedia suggests that a better translation is "maker".  Aristotle’s theory of the Four Causes, then, might be better translated as the Four Makers.  These were his four senses of aitia:  The material aition, the formal aition, the efficient aition, and the final aition.

The material aition of a bronze statue is the substance it is made from, bronze.  The formal aition is the substance’s form, its statue-shaped-ness.  The efficient aition best translates as the English word "cause"; we would think of the artisan carving the statue, though Aristotle referred to the art of bronze-casting the statue, and regarded the individual artisan as a mere instantiation.

The final aition was the goal, or telos, or purpose of the statue, that for the sake of which the statue exists.

Though Aristotle considered knowledge of all four aitia as necessary, he regarded knowledge of the telos as the knowledge of highest order.  In this, Aristotle followed in the path of Plato, who had earlier written:

Imagine not being able to distinguish the real cause from that without which the cause would not be able to act as a cause.  It is what the majority appear to do, like people groping in the dark; they call it a cause, thus giving it a name that does not belong to it.  That is why one man surrounds the earth with a vortex to make the heavens keep it in place, another makes the air support it like a wide lid.  As for their capacity of being in the best place they could possibly be put, this they do not look for, nor do they believe it to have any divine force…

Continue reading "Three Fallacies of Teleology" »

Use the Native Architecture

Imagine writing two versions of the same computer program. The first represents its integers as 32-bit binary numbers.  The second writes the numbers in base 10, as ASCII strings with each byte storing one digit.

The second version has its upsides.  Thirty-two bit numbers max out at several billion, but you can keep tacking digits onto the string until you’re out of memory.

That said, the program that uses 32-bit integers runs faster because it uses the native architecture of the CPU.  The CPU was designed with this more compact format for numbers in mind, with special-purpose circuits like 32 bit adders.
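To make the contrast concrete, here is a minimal sketch of the emulated path: grade-school addition on decimal ASCII strings.  (Python's own integers are already arbitrary-precision, so the "native" line below is illustrative rather than a literal 32-bit adder.)

```python
def add_strings(a: str, b: str) -> str:
    """Grade-school addition on base-10 ASCII digit strings."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    digits, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        carry, d = divmod(int(da) + int(db) + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

# The emulated path: many operations per digit, but unbounded length.
print(add_strings("999", "1"))  # 1000
# The native path: conceptually, a single pass through a hardware adder.
print(999 + 1)                  # 1000
```

The digit-string version pays a per-digit cost in loop iterations and conversions for the flexibility it buys, which is exactly the trade-off described above.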

The same principle applies to using one’s brain:  Some things the brain can do quickly and intuitively, and some things the brain has to emulate using many more of the brain’s native operations.  Sometimes thinking in metaphors is a good idea, if you’re human.

In particular, visualizing things is part of the brain’s native architecture, but abstract symbolic manipulation has to be learned.  Thus, visualizing mathematics is usually a good idea.

When was the last time you made a sign error?

When was the last time you visualized something upside-down by mistake?

I thought so.

Continue reading "Use the Native Architecture" »

Cowen Disses Futarchy

From a recent Telegraph article:

Professor Tyler Cowen, also of George Mason University, thinks that the problem of bad governance is far too complex to be solved simply by making predictions of how policy decisions may or may not turn out. "I don’t agree with the futarchy idea," he says. "The record of prediction markets is a strong one, but I wouldn’t want to use them to run an entire government.

Imagine a similar statement on voting:

The problem of bad governance is far too complex to be solved simply by having citizens elect representatives.  The record of representatives is strong, but I wouldn’t want to use them to run an entire government.

Or imagine similar statements about propositions, laws, judges, administrative agencies, public hearings, free press, constitutions, etc.  See the problem?  Every institutional mechanism is going to be far simpler than the complex problems to be solved – can that really be a reason to reject them all?

Magical Categories

Followup to: Anthropomorphic Optimism, Superexponential Conceptspace, The Hidden Complexity of Wishes, Unnatural Categories

‘We can design intelligent machines so their primary, innate emotion is unconditional love for all humans.  First we can build relatively simple machines that learn to recognize happiness and unhappiness in human facial expressions, human voices and human body language.  Then we can hard-wire the result of this learning as the innate emotional values of more complex intelligent machines, positively reinforced when we are happy and negatively reinforced when we are unhappy.’
        — Bill Hibbard (2001), Super-intelligent machines.

That was published in a peer-reviewed journal, and the author later wrote a whole book about it, so this is not a strawman position I’m discussing here.

So… um… what could possibly go wrong…

When I mentioned (sec. 6) that Hibbard’s AI ends up tiling the galaxy with tiny molecular smiley-faces, Hibbard wrote an indignant reply saying:

‘When it is feasible to build a super-intelligence, it will be feasible to build hard-wired recognition of "human facial expressions, human voices and human body language" (to use the words of mine that you quote) that exceed the recognition accuracy of current humans such as you and me, and will certainly not be fooled by "tiny molecular pictures of smiley-faces." You should not assume such a poor implementation of my idea that it cannot make discriminations that are trivial to current humans.’
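The worry can be made concrete with a toy sketch, entirely my own construction (the template and scoring below are invented, not anything from Hibbard's paper): an optimizer handed a learned proxy for happiness will maximize the proxy, not the thing the proxy was meant to stand for.

```python
import itertools

# A hypothetical 3x3 binary "smile" template, standing in for whatever
# pattern the happiness recognizer learned from its training data.
SMILE = (0, 1, 0,
         1, 0, 1,
         0, 1, 1)

def happiness_score(image):
    """Proxy reward: pixel-wise agreement with the learned smile template."""
    return sum(px == tpl for px, tpl in zip(image, SMILE))

# An unconstrained optimizer over all 3x3 binary "worlds": the input
# maximizing the proxy is just the template itself – the analogue of
# tiling the galaxy with tiny molecular smiley-faces.
best = max(itertools.product([0, 1], repeat=9), key=happiness_score)
print(best == SMILE)  # True
```

The recognizer can be arbitrarily accurate on the training distribution and still pin its maximum on a degenerate input no human would call happiness.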

Continue reading "Magical Categories" »

Randomised Controlled Trials of Parachutes

It is tempting to react to unscientific methods of medical practice by rejecting any treatment that isn’t supported by rigorous scientific evidence.  Here’s a parody of naive implementations of evidence-based medicine that demonstrates the pitfalls of doing so:
Smith GCS, Pell JP. (2003). Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials. BMJ, 327(7429), 1459-1461.

From the paper:

Results We were unable to identify any randomised controlled trials of parachute intervention.

Conclusions As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.

There are some interesting comments on the paper here and here.

Unnatural Categories

Followup to: Disguised Queries, Superexponential Conceptspace

If a tree falls in the forest, and no one hears it, does it make a sound?

"Tell me why you want to know," says the rationalist, "and I’ll tell you the answer."  If you want to know whether your seismograph, located nearby, will register an acoustic wave, then the experimental prediction is "Yes"; so, for seismographic purposes, the tree should be considered to make a sound.  If instead you’re asking some question about firing patterns in a human auditory cortex – for whatever reason – then the answer is that no such patterns will be changed when the tree falls.

What is a poison?  Hemlock is a "poison"; so is cyanide; so is viper venom.  Carrots, water, and oxygen are "not poison".  But what determines this classification?  You would be hard pressed, just by looking at hemlock and cyanide and carrots and water, to tell what sort of difference is at work.  You would have to administer the substances to a human – preferably one signed up for cryonics – and see which ones proved fatal.  (And at that, the definition is still subtler than it appears: a ton of carrots, dropped on someone’s head, will also prove fatal. You’re really asking about fatality from metabolic disruption, after administering doses small enough to avoid mechanical damage and blockage, at room temperature, at low velocity.)

Where poison-ness is concerned, you are not classifying via a strictly local property of the substance.  You are asking about the consequence when a dose of that substance is applied to a human metabolism.  The local difference between a human who gasps and keels over, versus a human alive and healthy, is more compactly discriminated than any local difference between poison and non-poison.
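One way to put the point in code (the substances and doses below are invented toy numbers, not real toxicology): the classifier is a function of a consequence, not of any local feature of the substance.

```python
# Invented toy data: "lethal dose" in grams via metabolic disruption.
LETHAL_DOSE_G = {"hemlock": 0.1, "cyanide": 0.2,
                 "water": 6000.0, "carrot": 10000.0}

def is_poison(substance: str, dose_g: float = 1.0) -> bool:
    """'Poison' = fatal via metabolic disruption at a dose small enough
    to rule out mechanical damage and blockage."""
    return LETHAL_DOSE_G[substance] <= dose_g

print(is_poison("cyanide"))  # True
print(is_poison("water"))    # False: fatal only at absurd doses
```

Nothing on the left-hand side of the classification is a surface property you could read off the substance in isolation; the category lives in the interaction with a human metabolism.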

Continue reading "Unnatural Categories" »

Good Medicine in Merry Old England

Here’s the abstract of an article by Martin, Rice, & Smith in the current issue of the Journal of Health Economics (generally regarded as the top journal in the field):

Empirical evidence has hitherto been inconclusive about the strength of the link between health care spending and health outcomes. This paper uses programme budgeting data prepared by 295 English Primary Care Trusts to model the link for two specific programmes of care: cancer and circulatory diseases. A theoretical model is developed in which decision-makers must allocate a fixed budget across programmes of care so as to maximize social welfare, in the light of a health production function for each programme. This yields an expenditure equation and a health outcomes equation for each programme. These are estimated for the two programmes of care using instrumental variables methods. All the equations prove to be well specified. They suggest that the cost of a life year saved in cancer is about £13,100, and in circulation about £8000. These results challenge the widely held view that health care has little marginal impact on health. From a policy perspective, they can help set priorities by informing resource allocation across programmes of care. They can also help health technology agencies decide whether their cost-effectiveness thresholds for accepting new technologies are set at the right level.

One shouldn’t overstate the importance of this; it’s only one study and it only deals with two medical conditions.  And of course the study was done on English data, not U.S. data.  We all know that there is evidence that the marginal unit of U.S. medicine has little or no health benefit, so this would be a noteworthy result if the study were done on U.S. data.  I don’t know how noteworthy it is for English data.  Does anybody know if there is any RAND study type evidence about the effectiveness of the marginal unit of medicine in England or in other European countries?

When I was a kid, a cousin who lived in England came to visit us and showed me how to crack open those little plastic cubes containing the four one-use camera flashbulbs we had back then and set them off with a battery.  That totally rocked my world.  So as far as I’m concerned those guys are all geniuses.

Mirrors and Paintings

Followup to: Sorting Pebbles Into Correct Heaps, Invisible Frameworks

Background: There’s a proposal for Friendly AI called "Coherent Extrapolated Volition" which I don’t really want to divert the discussion to, right now.  Among many other things, CEV involves pointing an AI at humans and saying (in effect) "See that?  That’s where you find the base content for self-renormalizing morality."

Hal Finney commented on the Pebblesorter parable:

I wonder what the Pebblesorter AI would do if successfully programmed to implement [CEV]…  Would the AI pebblesort?  Or would it figure that if the Pebblesorters got smarter, they would see that pebblesorting was pointless and arbitrary?  Would the AI therefore adopt our own parochial morality, forbidding murder, theft and sexual intercourse among too-young people?  Would that be the CEV of Pebblesorters?

I imagine we would all like to think so, but it smacks of parochialism, of objective morality.  I can’t help thinking that Pebblesorter CEV would have to include some aspect of sorting pebbles.  Doesn’t that suggest that CEV can malfunction pretty badly?

I’m giving this question its own post, because it touches on similar questions I once pondered – dilemmas that forced my current metaethics as their resolution.

Yes indeed:  A CEV-type AI, taking Pebblesorters as its focus, would wipe out the Pebblesorters and sort the universe into prime-numbered heaps.

This is not the right thing to do.

That is not a bug.

Continue reading "Mirrors and Paintings" »
