Category Archives: Personal

The Magnitude of His Own Folly

Followup to: My Naturalistic Awakening, Above-Average AI Scientists

In the years before I met that would-be creator of Artificial General Intelligence (with a funded project) who happened to be a creationist, I would still try to argue with individual AGI wannabes.

In those days, I sort-of-succeeded in convincing one such fellow that, yes, you had to take Friendly AI into account, and no, you couldn’t just find the right fitness metric for an evolutionary algorithm.  (Previously he had been very impressed with evolutionary algorithms.)

And the one said:  Oh, woe!  Oh, alas!  What a fool I’ve been!  Through my carelessness, I almost destroyed the world!  What a villain I once was!

Now, there’s a trap I knew better than to fall into –

– at the point where, in late 2002, I looked back to Eliezer1997's AI proposals and realized what they really would have done, insofar as they were coherent enough to talk about what they "really would have done".

When I finally saw the magnitude of my own folly, everything fell into place at once.  The dam against realization cracked; and the unspoken doubts that had been accumulating behind it, crashed through all together.  There wasn’t a prolonged period, or even a single moment that I remember, of wondering how I could have been so stupid.  I already knew how.

And I also knew, all at once, in the same moment of realization, that to say, I almost destroyed the world!, would have been too prideful.

It would have been too confirming of ego, too confirming of my own importance in the scheme of things, at a time when – I understood in the same moment of realization – my ego ought to be taking a major punch to the stomach.  I had been so much less than I needed to be; I had to take that punch in the stomach, not avert it.

Continue reading "The Magnitude of His Own Folly" »

Give it to Me Straight! I Swear I Won’t be Mad!

I have an American friend (same guy as in this earlier post) who lived for a number of years in Mexico.  He married a Mexican woman, and while he always spoke to his kids in English, their real first language was Spanish.  He recently moved back to the U.S., and he enrolled his oldest daughter in kindergarten.  The school gave her some kind of language evaluation, and they concluded that she was slightly behind in English, and said they would like to give her some kind of limited special instruction if her parents wanted it.  My friend and his wife were inclined to go along with what the teachers thought, but they wanted to know the answers to a few common-sense questions: how far behind was the kid really, was what they would do for her during the special instruction time really worth giving up whatever she would miss in the regular class, and so on.  The problem was, they were having a hard time getting any straight answers out of the teachers, and they were pretty sure they knew why: these very nice, well-meaning teachers were so worried about offending them that they couched every answer in a million caveats and weasel words.  My friend said he was dying to say something like: "I hereby unconditionally vow not to sue you, hate you, or speak or think ill of you in any way.  Now will you please just tell me what’s going on with my kid?!?"

Don’t get me wrong: that hyper-sensitivity comes mostly from a good place, and I certainly don’t want to go back 50 years to when a kid like that would just be thrown in the deep end of the pool.  But come on!

My Naturalistic Awakening

Followup to: Fighting a Rearguard Action Against the Truth

In yesterday’s episode, Eliezer2001 was fighting a rearguard action against the truth.  Only gradually shifting his beliefs, admitting an increasing probability in a different scenario, but never saying outright, "I was wrong before."  He repaired his strategies as they were challenged, finding new justifications for just the same plan he pursued before.

(Of which it is therefore said:  "Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated.  Surrender to the truth as quickly as you can.  Do this the instant you realize what you are resisting; the instant you can see from which quarter the winds of evidence are blowing against you.")

Memory fades, and I can hardly bear to look back upon those times – no, seriously, I can’t stand reading my old writing.  I’ve already been corrected once in my recollections, by those who were present.  And so, though I remember the important events, I’m not really sure what order they happened in, let alone what year.

But if I had to pick a moment when my folly broke, I would pick the moment when I first comprehended, in full generality, the notion of an optimization process.  That was the point at which I first looked back and said, "I’ve been a fool."

Continue reading "My Naturalistic Awakening" »

Fighting a Rearguard Action Against the Truth

Followup to: That Tiny Note of Discord, The Importance of Saying "Oops"

When we last left Eliezer2000, he was just beginning to investigate the question of how to inscribe a morality into an AI.  His reasons for doing this don’t matter at all, except insofar as they happen to historically demonstrate the importance of perfectionism.  If you practice something, you may get better at it; if you investigate something, you may find out about it; the only thing that matters is that Eliezer2000 is, in fact, focusing his full-time energies on thinking technically about AI morality; rather than, as previously, finding a justification for not spending his time this way.  In the end, this is all that turns out to matter.

But as our story begins – as the sky lightens to gray and the tip of the sun peeks over the horizon – Eliezer2001 hasn’t yet admitted that Eliezer1997 was mistaken in any important sense.  He’s just making Eliezer1997's strategy even better by including a contingency plan for "the unlikely event that life turns out to be meaningless"…

…which means that Eliezer2001 now has a line of retreat away from his mistake.

I don’t just mean that Eliezer2001 can say "Friendly AI is a contingency plan", rather than screaming "OOPS!"

I mean that Eliezer2001 now actually has a contingency plan.  If Eliezer2001 starts to doubt his 1997 metaethics, the Singularity has a fallback strategy, namely Friendly AI.  Eliezer2001 can question his metaethics without it signaling the end of the world.

And his gradient has been smoothed; he can admit a 10% chance of having previously been wrong, then a 20% chance.  He doesn’t have to cough out his whole mistake in one huge lump.

If you think this sounds like Eliezer2001 is too slow, I quite agree.

Continue reading "Fighting a Rearguard Action Against the Truth" »

That Tiny Note of Discord

Followup to: The Sheer Folly of Callow Youth

When we last left Eliezer1997, he believed that any superintelligence would automatically do what was "right", and indeed would understand that better than we could; even though, he modestly confessed, he did not understand the ultimate nature of morality.  Or rather, after some debate had passed, Eliezer1997 had evolved an elaborate argument, which he fondly claimed to be "formal", that we could always condition upon the belief that life has meaning; and so cases where superintelligences did not feel compelled to do anything in particular, would fall out of consideration.  (The flaw being the unconsidered and unjustified equation of "universally compelling argument" with "right".)

So far, the young Eliezer is well on the way toward joining the "smart people who are stupid because they’re skilled at defending beliefs they arrived at for unskilled reasons".  All his dedication to "rationality" has not saved him from this mistake, and you might be tempted to conclude that it is useless to strive for rationality.

But while many people dig holes for themselves, not everyone succeeds in clawing their way back out.

And from this I learn my lesson:  That it all began –

– with a small, small question; a single discordant note; one tiny lonely thought…

Continue reading "That Tiny Note of Discord" »

The Sheer Folly of Callow Youth

Followup to: My Childhood Death Spiral, My Best and Worst Mistake, A Prodigy of Refutation

"There speaks the sheer folly of callow youth; the rashness of an ignorance so abysmal as to be possible only to one of your ephemeral race…"
        — Gharlane of Eddore

Once upon a time, years ago, I propounded a mysterious answer to a mysterious question – as I’ve hinted on several occasions.  The mysterious question to which I propounded a mysterious answer was not, however, consciousness – or rather, not only consciousness.  No, the more embarrassing error was that I took a mysterious view of morality.

I held off on discussing that until now, after the series on metaethics, because I wanted it to be clear that Eliezer1997 had gotten it wrong.

When we last left off, Eliezer1997, not satisfied with arguing in an intuitive sense that superintelligence would be moral, was setting out to argue inescapably that creating superintelligence was the right thing to do.

Well (said Eliezer1997) let’s begin by asking the question:  Does life have, in fact, any meaning?

Continue reading "The Sheer Folly of Callow Youth" »

A Prodigy of Refutation

Followup to: My Childhood Death Spiral, Raised in Technophilia

My Childhood Death Spiral described the core momentum carrying me into my mistake, an affective death spiral around something that Eliezer1996 called "intelligence".  I was also a technophile, pre-allergized against fearing the future.  And I’d read a lot of science fiction built around personhood ethics – in which fear of the Alien puts humanity-at-large in the position of the bad guys, mistreating aliens or sentient AIs because they "aren’t human".

That’s part of the ethos you acquire from science fiction – to define your in-group, your tribe, appropriately broadly.  Hence my email address,

So Eliezer1996 is out to build superintelligence, for the good of humanity and all sentient life.

At first, I think, the question of whether a superintelligence will/could be good/evil didn’t really occur to me as a separate topic of discussion.  Just the standard intuition of, "Surely no supermind would be stupid enough to turn the galaxy into paperclips; surely, being so intelligent, it will also know what’s right far better than a human being could."

Until I introduced myself and my quest to a transhumanist mailing list, and got back responses along the general lines of (from memory):

Continue reading "A Prodigy of Refutation" »

Raised in Technophilia

Followup to: My Best and Worst Mistake

My father used to say that if the present system had been in place a hundred years ago, automobiles would have been outlawed to protect the saddle industry.

One of my major childhood influences was reading Jerry Pournelle’s A Step Farther Out, at the age of nine.  It was Pournelle’s reply to Paul Ehrlich and the Club of Rome, who were saying, in the 1960s and 1970s, that the Earth was running out of resources and massive famines were only years away.  It was a reply to Jeremy Rifkin’s so-called fourth law of thermodynamics; it was a reply to all the people scared of nuclear power and trying to regulate it into oblivion.

I grew up in a world where the lines of demarcation between the Good Guys and the Bad Guys were pretty clear; not an apocalyptic final battle, but a battle that had to be fought over and over again, a battle where you could see the historical echoes going back to the Industrial Revolution, and where you could assemble the historical evidence about the actual outcomes.

On one side were the scientists and engineers who’d driven all the standard-of-living increases since the Dark Ages, whose work supported luxuries like democracy, an educated populace, a middle class, the outlawing of slavery.

On the other side, those who had once opposed smallpox vaccinations, anesthetics during childbirth, steam engines, and heliocentrism:  The theologians calling for a return to a perfect age that never existed, the elderly white male politicians set in their ways, the special interest groups who stood to lose, and the many to whom science was a closed book, fearing what they couldn’t understand.

And trying to play the middle, the pretenders to Deep Wisdom, uttering cached thoughts about how technology benefits humanity but only when it was properly regulated – claiming in defiance of brute historical fact that science of itself was neither good nor evil – setting up solemn-looking bureaucratic committees to make an ostentatious display of their caution – and waiting for their applause.  As if the truth were always a compromise.  And as if anyone could really see that far ahead.  Would humanity have done better if there’d been a sincere, concerned, public debate on the adoption of fire, and committees set up to oversee its use?

Continue reading "Raised in Technophilia" »

My Best and Worst Mistake

Followup to: My Childhood Death Spiral

Yesterday I covered the young Eliezer’s affective death spiral around something that he called "intelligence".  Eliezer1996, or even Eliezer1999 for that matter, would have refused to try to put a mathematical definition on it – consciously, deliberately refused.  Indeed, he would have been loath to put any definition on "intelligence" at all.

Why?  Because there’s a standard bait-and-switch problem in AI, wherein you define "intelligence" to mean something like "logical reasoning" or "the ability to withdraw conclusions when they are no longer appropriate", and then you build a cheap theorem-prover or an ad-hoc nonmonotonic reasoner, and then say, "Lo, I have implemented intelligence!"  People came up with poor definitions of intelligence – focusing on correlates rather than cores – and then they chased the surface definition they had written down, forgetting about, you know, actual intelligence.  It’s not like Eliezer1996 was out to build a career in Artificial Intelligence.  He just wanted a mind that would actually be able to build nanotechnology.  So he wasn’t tempted to redefine intelligence for the sake of puffing up a paper.

Looking back, it seems to me that quite a lot of my mistakes can be defined in terms of being pushed too far in the other direction by seeing someone else’s stupidity:  Having seen attempts to define "intelligence" abused so often, I refused to define it at all.  What if I said that intelligence was X, and it wasn’t really X?  I knew in an intuitive sense what I was looking for – something powerful enough to take stars apart for raw material – and I didn’t want to fall into the trap of being distracted from that by definitions.

Similarly, having seen so many AI projects brought down by physics envy – trying to stick with simple and elegant math, and being constrained to toy systems as a result – I generalized that any math simple enough to be formalized in a neat equation was probably not going to work for, you know, real intelligence.  "Except for Bayes’s Theorem," Eliezer2000 added; which, depending on your viewpoint, either mitigates the totality of his offense, or shows that he should have suspected the entire generalization instead of trying to add a single exception.

Continue reading "My Best and Worst Mistake" »

My Childhood Death Spiral

Followup to: Affective Death Spirals, My Wild and Reckless Youth

My parents always used to downplay the value of intelligence.  And play up the value of – effort, as recommended by the latest research?  No, not effort.  Experience.  A nicely unattainable hammer with which to smack down a bright young child, to be sure.  That was what my parents told me when I questioned the Jewish religion, for example.  I tried laying out an argument, and I was told something along the lines of:  "Logic has limits, you’ll understand when you’re older that experience is the important thing, and then you’ll see the truth of Judaism."  I didn’t try again.  I made one attempt to question Judaism in school, got slapped down, didn’t try again.  I’ve never been a slow learner.

Whenever my parents were doing something ill-advised, it was always, "We know better because we have more experience.  You’ll understand when you’re older: maturity and wisdom are more important than intelligence."

If this was an attempt to focus the young Eliezer on intelligence uber alles, it was the most wildly successful example of reverse psychology I’ve ever heard of.

But my parents aren’t that cunning, and the results weren’t exactly positive.

Continue reading "My Childhood Death Spiral" »