Category Archives: AI

…And Say No More Of It

Followup to: The Thing That I Protect

Anything done with an ulterior motive has to be done with a pure heart.  You cannot serve your ulterior motive, without faithfully prosecuting your overt purpose as a thing in its own right, that has its own integrity.  If, for example, you're writing about rationality with the intention of recruiting people to your utilitarian Cause, then you cannot talk too much about your Cause, or you will fail to successfully write about rationality.

This doesn't mean that you never say anything about your Cause, but there's a balance to be struck.  "A fanatic is someone who can't change his mind and won't change the subject."

In previous months, I've pushed this balance too far toward talking about Singularity-related things.  And this was for (first-order) selfish reasons on my part; I was finally GETTING STUFF SAID that had been building up painfully in my brain for FRICKIN' YEARS.  And so I just kept writing, because it was finally coming out.  For those of you who have not the slightest interest, I'm sorry to have polluted your blog with that.

When Less Wrong starts up, it will, by my own request, impose a two-month moratorium on discussion of "Friendly AI" and other Singularity/intelligence explosion-related topics.

There are a number of reasons for this.  One of them is simply to restore the balance.  Another is to make sure that a forum intended for a more general audience doesn't narrow itself down and disappear.

But more importantly – there are certain subjects which tend to drive people crazy, even if there's truth behind them.  Quantum mechanics would be the paradigmatic example; you don't have to go funny in the head, but a lot of people do.  Likewise Gödel's Theorem, consciousness, Artificial Intelligence –

The concept of "Friendly AI" can be poisonous in certain ways.  True or false, it carries risks to mental health.  And not just the obvious liabilities of praising a Happy Thing.  Something stranger and subtler that drains enthusiasm.

Continue reading "…And Say No More Of It" »

The Thing That I Protect

Followup to: Something to Protect, Value is Fragile

"Something to Protect" discursed on the idea of wielding rationality in the service of something other than "rationality".  Not just that rationalists ought to pick out a Noble Cause as a hobby to keep them busy; but rather, that rationality itself is generated by having something that you care about more than your current ritual of cognition.

So what is it, then, that I protect?

I quite deliberately did not discuss that in "Something to Protect", leaving it only as a hanging implication.  In the unlikely event that we ever run into aliens, I don't expect their version of Bayes's Theorem to be mathematically different from ours, even if they generated it in the course of protecting different and incompatible values.  Among humans, the idiom of having "something to protect" is not bound to any one cause, and therefore, to mention my own cause in that post would have harmed its integrity.  Causes are dangerous things, whatever their true importance; I have written somewhat on this, and will write more about it.

But still – what is it, then, the thing that I protect?

Friendly AI?  No – a thousand times no – a thousand times not anymore.  It's not thinking of the AI that gives me strength to carry on even in the face of inconvenience.

Continue reading "The Thing That I Protect" »

Value is Fragile

Followup to: The Fun Theory Sequence, Fake Fake Utility Functions, Joy in the Merely Good, The Hidden Complexity of Wishes, The Gift We Give To Tomorrow, No Universally Compelling Arguments, Anthropomorphic Optimism, Magical Categories, …

If I had to pick a single statement that relies on more Overcoming Bias content I've written than any other, that statement would be:

Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.

"Well," says the one, "maybe according to your provincial human values, you wouldn't like it.  But I can easily imagine a galactic civilization full of agents who are nothing like you, yet find great value and interest in their own goals.  And that's fine by me.  I'm not so bigoted as you are.  Let the Future go its own way, without trying to bind it forever to the laughably primitive prejudices of a pack of four-limbed Squishy Things -"

My friend, I have no problem with the thought of a galactic civilization vastly unlike our own… full of strange beings who look nothing like me even in their own imaginations… pursuing pleasures and experiences I can't begin to empathize with… trading in a marketplace of unimaginable goods… allying to pursue incomprehensible objectives… people whose life-stories I could never understand.

That's what the Future looks like if things go right.

If the chain of inheritance from human (meta)morals is broken, the Future does not look like this.  It does not end up magically, delightfully incomprehensible.

With very high probability, it ends up looking dull.  Pointless.  Something whose loss you wouldn't mourn.

Seeing this as obvious is what requires that immense amount of background explanation.

Continue reading "Value is Fragile" »

Amputation of Destiny

Previously in series: Devil's Offers
Followup to: Nonsentient Optimizers, Can't Unbirth a Child

From Consider Phlebas by Iain M. Banks:

    In practice as well as theory the Culture was beyond considerations of wealth or empire.  The very concept of money – regarded by the Culture as a crude, over-complicated and inefficient form of rationing – was irrelevant within the society itself, where the capacity of its means of production ubiquitously and comprehensively exceeded every reasonable (and in some cases, perhaps, unreasonable) demand its not unimaginative citizens could make.  These demands were satisfied, with one exception, from within the Culture itself.  Living space was provided in abundance, chiefly on matter-cheap Orbitals; raw material existed in virtually inexhaustible quantities both between the stars and within stellar systems; and energy was, if anything, even more generally available, through fusion, annihilation, the Grid itself, or from stars (taken either indirectly, as radiation absorbed in space, or directly, tapped at the stellar core).  Thus the Culture had no need to colonise, exploit, or enslave.
    The only desire the Culture could not satisfy from within itself was one common to both the descendants of its original human stock and the machines they had (at however great a remove) brought into being: the urge not to feel useless.  The Culture's sole justification for the relatively unworried, hedonistic life its population enjoyed was its good works; the secular evangelism of the Contact Section, not simply finding, cataloguing, investigating and analysing other, less advanced civilizations but – where the circumstances appeared to Contact to justify so doing – actually interfering (overtly or covertly) in the historical processes of those other cultures.

Raise the subject of science-fictional utopias in front of any halfway sophisticated audience, and someone will mention the Culture.  Which is to say: Iain Banks is the one to beat.

Continue reading "Amputation of Destiny" »

Can’t Unbirth a Child

Followup to: Nonsentient Optimizers

Why would you want to avoid creating a sentient AI?  "Several reasons," I said.  "Picking the simplest to explain first – I'm not ready to be a father."

So here is the strongest reason:

You can't unbirth a child.

I asked Robin Hanson what he would do with unlimited power.  "Think very very carefully about what to do next," Robin said.  "Most likely the first task is who to get advice from.  And then I listen to that advice."

Good advice, I suppose, if a little meta.  On a similarly meta level, then, I recall two excellent pieces of advice for wielding too much power:

  1. Do less; don't do everything that seems like a good idea, but only what you must do.
  2. Avoid doing things you can't undo.

Continue reading "Can’t Unbirth a Child" »

Nonsentient Optimizers

Followup to: Nonperson Predicates, Possibility and Could-ness

    "All our ships are sentient.  You could certainly try telling a ship what to do… but I don't think you'd get very far."
    "Your ships think they're sentient!" Hamin chuckled.
    "A common delusion shared by some of our human citizens."
            — The Player of Games, Iain M. Banks

Yesterday, I suggested that, when an AI is trying to build a model of an environment that includes human beings, we want to avoid the AI constructing detailed models that are themselves people.  And that, to this end, we would like to know what is or isn't a person – or at least have a predicate that returns 1 for all people and could return 0 or 1 for anything that isn't a person, so that, if the predicate returns 0, we know we have a definite nonperson on our hands.
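
To pin down the logical shape of that requirement, here is a minimal sketch in Python – my own illustration; the post contains no code.  The individual tests are hypothetical placeholders, not serious proposals for detecting personhood; the real content is the asymmetry between returning 0 and returning 1.

    def nonperson_predicate(model) -> int:
        """Return 0 only if `model` is definitely not a person; else return 1.

        False positives are fine (returning 1 for a rock); false negatives
        are not: the predicate must never return 0 for an actual person.
        """
        # Each test is a *sufficient* condition for nonpersonhood.
        # Both are hypothetical placeholders, not solved problems.
        definitely_nonperson_tests = [
            lambda m: len(m) < 10**6,   # hypothetical: far too few state variables
            lambda m: not any(m),       # hypothetical: no internal activity at all
        ]
        if any(test(model) for test in definitely_nonperson_tests):
            return 0  # definite nonperson
        return 1      # might be a person – handle with caution

    print(nonperson_predicate([0.0] * 100))  # -> 0: a tiny, inert model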

And as long as you're going to solve that problem anyway, why not apply the same knowledge to create a Very Powerful Optimization Process which is also definitely not a person?

"What?  That's impossible!"

How do you know?  Have you solved the sacred mysteries of consciousness and existence?

"Um – okay, look, putting aside the obvious objection that any sufficiently powerful intelligence will be able to model itself -"

Löb's Sentence contains an exact recipe for a copy of itself, including the recipe for the recipe; it has a perfect self-model.  Does that make it sentient?
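
For a concrete analogue outside formal logic: a quine is an ordinary program that contains an exact recipe for a copy of itself.  A standard Python example (my illustration, not the post's):

    # The two lines below print an exact copy of themselves when run.
    # A perfect self-model – and, presumably, not a sentient one.
    s = 's = %r\nprint(s %% s)'
    print(s % s)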

"Putting that aside – to create a powerful AI and make it not sentient – I mean, why would you want to?"

Several reasons.  Picking the simplest to explain first – I'm not ready to be a father.

Continue reading "Nonsentient Optimizers" »

Nonperson Predicates

Followup to: Righting a Wrong Question, Zombies! Zombies?, A Premature Word on AI, On Doing the Impossible

There is a subproblem of Friendly AI which is so scary that I usually don't talk about it, because only a longtime reader of Overcoming Bias would react to it appropriately – that is, by saying, "Wow, that does sound like an interesting problem", instead of finding one of many subtle ways to scream and run away.

This is the problem that if you create an AI and tell it to model the world around it, it may form models of people that are people themselves.  Not necessarily the same person, but people nonetheless.

If you look up at the night sky, and see the tiny dots of light that move over days and weeks – planētoi, the Greeks called them, "wanderers" – and you try to predict the movements of those planet-dots as best you can…

Historically, humans went through a journey as long and as wandering as the planets themselves, to find an accurate model.  In the beginning, the models were things of cycles and epicycles, not much resembling the true Solar System.

But eventually we found laws of gravity, and finally built models – even if they were just on paper – that were extremely accurate, so that Neptune could be deduced by looking at the unexplained perturbation of Uranus from its expected orbit.  This required moment-by-moment modeling of where a simplified version of Uranus – and the other known planets – would be.  Simulation, not just abstraction.  Prediction through simplified-yet-still-detailed pointwise similarity.
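
As a toy illustration of "simulation, not just abstraction" – stepping a simplified planet forward moment by moment, rather than consulting a closed-form answer – here is a crude Python sketch; the values are merely illustrative, and nothing here is the historical computation:

    import math

    # Pointwise simulation: step a simplified planet forward under gravity,
    # instead of predicting its position from a closed-form abstraction.
    GM = 4 * math.pi ** 2  # Sun's gravitational parameter, AU^3/yr^2

    def step(x, y, vx, vy, dt):
        """One Euler step under inverse-square gravity toward the origin."""
        r3 = math.hypot(x, y) ** 3
        vx -= GM * x / r3 * dt
        vy -= GM * y / r3 * dt
        return x + vx * dt, y + vy * dt, vx, vy

    # A Uranus-like body on a circular orbit at ~19.2 AU, with v = sqrt(GM/r).
    x, y, vx, vy = 19.2, 0.0, 0.0, math.sqrt(GM / 19.2)
    for _ in range(1000):  # 1000 steps of 0.001 yr each = one year
        x, y, vx, vy = step(x, y, vx, vy, dt=0.001)
    print(f"position after one year: ({x:.2f}, {y:.2f}) AU")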

Suppose you have an AI that is around human beings.  And like any Bayesian trying to explain its environment, the AI goes in quest of highly accurate models that predict what it sees of humans.

Models that predict/explain why people do the things they do, say the things they say, want the things they want, think the things they think, and even why people talk about "the mystery of subjective experience".

The model that most precisely predicts these facts, may well be a 'simulation' detailed enough to be a person in its own right.

Continue reading "Nonperson Predicates" »

Devil’s Offers

Previously in series: Harmful Options

An iota of fictional evidence from The Golden Age by John C. Wright:

    Helion had leaned and said, "Son, once you go in there, the full powers and total command structures of the Rhadamanth Sophotech will be at your command.  You will be invested with godlike powers; but you will still have the passions and distempers of a merely human spirit.  There are two temptations which will threaten you.  First, you will be tempted to remove your human weaknesses by abrupt mental surgery.  The Invariants do this, and to a lesser degree, so do the White Manorials, abandoning humanity to escape from pain.  Second, you will be tempted to indulge your human weakness.  The Cacophiles do this, and to a lesser degree, so do the Black Manorials.  Our society will gladly feed every sin and vice and impulse you might have; and then stand by helplessly and watch as you destroy yourself; because the first law of the Golden Oecumene is that no peaceful activity is forbidden.  Free men may freely harm themselves, provided only that it is only themselves that they harm."
    Phaethon knew what his sire was intimating, but he did not let himself feel irritated.  Not today.  Today was the day of his majority, his emancipation; today, he could forgive even Helion's incessant, nagging fears.
    Phaethon also knew that most Rhadamanthines were not permitted to face the Noetic tests until they were octogenarians; most did not pass on their first attempt, or even their second.  Many folk were not trusted with the full powers of an adult until they reached their Centennial.  Helion, despite criticism from the other Silver-Gray branches, was permitting Phaethon to face the tests five years early…

Continue reading "Devil’s Offers" »

Living By Your Own Strength

Previously in series: Sensual Experience
Followup to: Truly Part of You

"Myself, and Morisato-san… we want to live together by our own strength."

Jared Diamond once called agriculture "the worst mistake in the history of the human race".  Farmers could grow more wheat than hunter-gatherers could collect nuts, but the evidence seems pretty conclusive that agriculture traded quality of life for quantity of life.  One study showed that the farmers in an area were six inches shorter and seven years shorter-lived than their hunter-gatherer predecessors – even though the farmers were more numerous.

I don't know if I'd call agriculture a mistake.  But one should at least be aware of the downsides.  Policy debates should not appear one-sided.

In the same spirit –

Once upon a time, our hunter-gatherer ancestors strung their own bows, wove their own baskets, whittled their own flutes.

And part of our alienation from that environment of evolutionary adaptedness, is the number of tools we use that we don't understand and couldn't make for ourselves.

Continue reading "Living By Your Own Strength" »

What I Think, If Not Why

Reply to: Two Visions Of Heritage

Though it really goes tremendously against my grain – it feels like sticking my neck out over a cliff (or something) – I guess I have no choice here but to try and make a list of just my positions, without justifying them.  We can only talk justification, I guess, after we get straight what my positions are.  I will also leave off many disclaimers to present the points compactly enough to be remembered.

• A well-designed mind should be much more efficient than a human, capable of doing more with less sensory data and fewer computing operations.  It is not infinitely efficient and does not use zero data.  But it does use little enough that local pipelines such as a small pool of programmer-teachers and, later, a huge pool of e-data, are sufficient.

• An AI that reaches a certain point in its own development becomes able to (sustainably, strongly) improve itself.  At this point, recursive cascades slam over many internal growth curves to near the limits of their current hardware, and the AI undergoes a vast increase in capability.  This point is at, or probably considerably before, a minimally transhuman mind capable of writing its own AI-theory textbooks – an upper bound beyond which it could swallow and improve its entire design chain.

• It is likely that this capability increase or "FOOM" has an intrinsic maximum velocity that a human would regard as "fast" if it happens at all.  A human week is ~1e15 serial operations for a population of 2GHz cores, and a century is ~1e19 serial operations; this whole range is a narrow window.  However, the core argument does not require one-week speed and a FOOM that takes two years (~1e17 serial ops) will still carry the weight of the argument.
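
For readers who want to check those orders of magnitude, a minimal sketch of the arithmetic, taking "2GHz" to mean 2e9 serial operations per second (the bullet's own premise):

    # Order-of-magnitude check on the serial-operation figures above.
    OPS_PER_SEC = 2e9  # one 2GHz core, taken as the unit of serial speed

    week    = OPS_PER_SEC * 7 * 24 * 3600          # ~1.2e15 serial ops
    two_yrs = OPS_PER_SEC * 2 * 365 * 24 * 3600    # ~1.3e17 serial ops
    century = OPS_PER_SEC * 100 * 365 * 24 * 3600  # ~6.3e18, i.e. order 1e19

    print(f"week: {week:.1e}  two years: {two_yrs:.1e}  century: {century:.1e}")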

Continue reading "What I Think, If Not Why" »
