Monthly Archives: June 2008

No Universally Compelling Arguments

Followup to: The Design Space of Minds-in-General, Ghosts in the Machine, A Priori

What is so terrifying about the idea that not every possible mind might agree with us, even in principle?

For some folks, nothing – it doesn’t bother them in the slightest. And for some of those folks, the reason it doesn’t bother them is that they don’t have strong intuitions about standards and truths that go beyond personal whims.  If they say the sky is blue, or that murder is wrong, that’s just their personal opinion; and that someone else might have a different opinion doesn’t surprise them.

For other folks, a disagreement that persists even in principle is something they can’t accept.  And for some of those folks, the reason it bothers them is that it seems to them that if you allow that some people cannot be persuaded even in principle that the sky is blue, then you’re conceding that "the sky is blue" is merely an arbitrary personal opinion.

Yesterday, I proposed that you should resist the temptation to generalize over all of mind design space.  If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization "All minds m: X(m)" has two to the trillionth chances to be false, while each existential generalization "Exists mind m: X(m)" has two to the trillionth chances to be true.

This would seem to argue that for every argument A, howsoever convincing it may seem to us, there exists at least one possible mind that doesn’t buy it.
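As a toy illustration of the counting (an illustrative sketch only: the trillion bits are shrunk to 16 so that every possible mind can be enumerated, and a made-up deterministic rule stands in for whether a mind accepts a 99.9%-convincing argument):

```python
# Toy model: a "mind" is just a 16-bit string, i.e. an integer in range(2**K).
# Whether a mind "accepts" a given argument is an arbitrary made-up rule,
# chosen so that roughly 999 out of every 1000 minds accept it.
K = 16  # 2**16 = 65,536 possible minds, instead of 2**1,000,000,000,000

def accepts(mind: int, argument_id: int) -> bool:
    # Hypothetical stand-in for "this mind buys this argument".
    return (mind * 2654435761 + argument_id) % 1000 != 0

argument_id = 7  # some argument that convinces ~99.9% of minds

universal = all(accepts(m, argument_id) for m in range(2 ** K))
dissenter_exists = any(not accepts(m, argument_id) for m in range(2 ** K))

print(universal)         # False: a single dissenting mind falsifies "All minds m: X(m)"
print(dissenter_exists)  # True: with 65,536 chances, some mind rejects even this argument
```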

And the surprise and/or horror of this prospect (for some) has a great deal to do, I think, with the intuition of the ghost-in-the-machine – a ghost with some irreducible core that any truly valid argument will convince.

Continue reading "No Universally Compelling Arguments" »


The Design Space of Minds-In-General


Followup to: The Psychological Unity of Humankind

People ask me, “What will Artificial Intelligences be like?  What will they do?  Tell us your amazing story about the future.”

And lo, I say unto them, “You have asked me a trick question.”

ATP synthase is a molecular machine – one of three known occasions when evolution has invented the freely rotating wheel – which is essentially the same in animal mitochondria, plant chloroplasts, and bacteria.  ATP synthase has not changed significantly since the rise of eukaryotic life two billion years ago.  It is something we all have in common – thanks to the way that evolution strongly conserves certain genes; once many other genes depend on a gene, a mutation will tend to break all the dependencies.

Any two AI designs might be less similar to each other than you are to a petunia.

Continue reading "The Design Space of Minds-In-General" »


Should Bad Boys Win?

Futurepundit:

Why do psychopaths exist? The ladies help the psychopaths reproduce by going to bed with them. Men who are narcissistic, self-obsessed, liars, psychopaths, Machiavellian, and thrill-seekers get laid more.

Bad boys, it seems, really do get all the girls. Women might claim they want caring, thoughtful types but scientists have discovered what they really want – self-obsessed, lying psychopaths.

OK, this isn’t really news to most people.  But it still raises a basic question.  The basic fact of mate selection is that men are collectively greatly responsible for which female traits win in competition for male attention, while women are collectively greatly responsible for which male traits win in competition for female attention.  Accepting this, here are some possible responses to the above results:

  • The result is just wrong, such men do not get more women
  • The result is correct, such men do get more women
    • This is good, these are just the sort of men we want more of
      • Such men are good for each woman they are with
      • Such men are bad for each woman, but good for women overall
      • Such men are bad for women overall, but still good overall
    • This is bad, these are not the sort of men we want more of
      • Such behavior results from an inefficient signaling game
      • By choosing such men, women help themselves but hurt other women
      • This is a gender power struggle, where such men are overall good for men but bad for women

What say ye?  And why so little discussion on the gender-reversed question – do we want more of the kinds of women who win when competing to attract men?


The Psychological Unity of Humankind

Followup to: Evolutions Are Stupid (But Work Anyway), Evolutionary Psychology

Biological organisms in general, and human brains particularly, contain complex adaptations; adaptations which involve many genes working in concert. Complex adaptations must evolve incrementally, gene by gene.  If gene B depends on gene A to produce its effect, then gene A has to become nearly universal in the gene pool before there’s a substantial selection pressure in favor of gene B.

A fur coat isn’t an evolutionary advantage unless the environment reliably throws cold weather at you.  And other genes are also part of the environment; they are the genetic environment.  If gene B depends on gene A, then gene B isn’t a significant advantage unless gene A is reliably part of the genetic environment.

Let’s say that you have a complex adaptation with six interdependent parts, and that each of the six genes is independently at ten percent frequency in the population.  The chance of assembling a whole working adaptation is literally a million to one; and the average fitness of the genes is tiny, and they will not increase in frequency.
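Spelling out the arithmetic (a quick check using the illustrative ten percent figure above, not real gene-frequency data):

```python
freq = 0.10   # each of the six genes at ten percent frequency in the population
parts = 6

# Chance that a random genome assembles the whole six-part adaptation:
p_full_adaptation = freq ** parts
print(p_full_adaptation)          # about 1e-06: one organism in a million

# If only the complete adaptation confers any benefit, a carrier of one of the
# genes gets that benefit only when the other five happen to be present too:
p_benefit_given_one_gene = freq ** (parts - 1)
print(p_benefit_given_one_gene)   # about 1e-05: the average fitness payoff is tiny
```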

In a sexually reproducing species, complex adaptations are necessarily universal.

Continue reading "The Psychological Unity of Humankind" »


Eliezer’s Meta-Level Determinism

Thank you, esteemed co-blogger Eliezer, for your down payment on future engagement with our clash of intuitions.  I too am about to travel and must return to other distractions which I have neglected. 

Some preliminary comments.  First, to be clear, my estimate of future growth rates based on past trends is intended to be unconditional – I do not claim future rates are independent of which meta innovation comes next, though I am rather uncertain about which next innovations would produce which rates. 

Second, my claim to estimate the impact of the next big innovation and Eliezer’s claim to estimate a much larger impact from "full AGI" are not yet obviously in conflict – to my knowledge, neither Eliezer nor I claims full AGI will be the next big innovation, nor does Eliezer argue for a full AGI time estimate that conflicts with my estimated timing of the next big innovation. 

Third, it seems the basis for Eliezer’s claim that my analysis rests on untrustworthy "surface analogies" while his rests on reliable "deep causes" is that while I use long-vetted general social science understandings of the factors influencing innovation, he uses his own new untested meta-level determinism theory.  So it seems he could accept that those not yet willing to accept his new theory might instead reasonably rely on my analysis. 

Fourth, while Eliezer outlines his new theory and its implications for overall growth rates, he has as yet said nothing about what his theory implies for transition inequality, and how those implications might differ from my estimates. 

OK, now for the meat.  My story of everything was told (at least for recent eras) in terms of realized capability, i.e., population and resource use, and was largely agnostic about the specific innovations underlying the key changes.  Eliezer’s story is that key changes are largely driven by structural changes in optimization processes and their protected meta-levels:

Continue reading "Eliezer’s Meta-Level Determinism" »


Optimization and the Singularity

Lest anyone get the wrong impression, I’m juggling multiple balls right now and can’t give the latest Singularity debate as much attention as it deserves.  But lest I annoy my esteemed co-blogger, here is a down payment on my views of the Singularity – needless to say, all this is coming way out of order in the posting sequence, but here goes…

Among the topics I haven’t dealt with yet, and will have to introduce here very quickly, is the notion of an optimization process.  Roughly, this is the idea that your power as a mind is your ability to hit small targets in a large search space – this can be either the space of possible futures (planning) or the space of possible designs (invention).  Suppose you have a car, and suppose we already know that your preferences involve travel.  Now suppose that you take all the parts in the car, or all the atoms, and jumble them up at random.  It’s very unlikely that you’ll end up with a travel-artifact at all, even so much as a wheeled cart; let alone a travel-artifact that ranks as high in your preferences as the original car.  So, relative to your preference ordering, the car is an extremely improbable artifact; the power of an optimization process is that it can produce this kind of improbability.

You can view both intelligence and natural selection as special cases of optimization:  Processes that hit, in a large search space, very small targets defined by implicit preferences.  Natural selection prefers more efficient replicators.  Human intelligences have more complex preferences.  Neither evolution nor humans have consistent utility functions, so viewing them as "optimization processes" is understood to be an approximation.  You’re trying to get at the sort of work being done, not claim that humans or evolution do this work perfectly.

This is how I see the story of life and intelligence – as a story of improbably good designs being produced by optimization processes.  The "improbability" here is improbability relative to a random selection from the design space, not improbability in an absolute sense – if you have an optimization process around, then "improbably" good designs become probable.
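Here is a minimal sketch of that claim (a made-up toy design space, not anything from the post: the target is one particular 40-bit string out of 2**40, the preference ordering is how many bits match, and a crude hill-climber stands in for an optimization process):

```python
import random

random.seed(0)
N = 40
target = [random.randint(0, 1) for _ in range(N)]   # the "improbably good design"

def score(design):
    # Preference ordering: how many bits of the design match the target.
    return sum(d == t for d, t in zip(design, target))

# Jumbling at random: the best of 10,000 random designs is still far from the
# target (hitting it exactly has probability 2**-40 per try).
best_random = max(
    score([random.randint(0, 1) for _ in range(N)]) for _ in range(10_000)
)

# A crude optimizer: flip one bit at a time, keep the flip if the score improves.
design = [random.randint(0, 1) for _ in range(N)]
steps = 0
while score(design) < N:
    i = random.randrange(N)
    candidate = design[:i] + [1 - design[i]] + design[i + 1:]
    if score(candidate) > score(design):
        design = candidate
    steps += 1

print(best_random, "of", N)  # random jumbling: typically low 30s out of 40 bits
print(steps)                 # the hill-climber matches all 40 bits in a few hundred flips
```

Random jumbling almost never produces the target, but even this blind one-bit-at-a-time optimizer makes it probable.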

Obviously I’m skipping over a lot of background material here; but you can already see the genesis of a clash of intuitions between myself and Robin.  Robin’s looking at populations and resource utilization.  I’m looking at production of improbable patterns.

Continue reading "Optimization and the Singularity" »


Tyler Vid on Disagreement

We often like to ask lunch visitors what their most absurd view is (in the eyes of others).  Alas, I have so many choices.  On BloggingHeads, Tyler Cowen answers this for Will Wilkinson:

Tyler: My most absurd belief, perhaps, is the extent to which I think people should be truly uncertain about almost all of their beliefs.  And it doesn’t sound absurd when you say it but I don’t on the other hand know anyone who agrees with it. … Take whatever your political beliefs happen to be.  Obviously the view you hold you think is most likely to be true, but I think you should give that something like 60-40, whereas in reality most people will give it 95 to 5 or 99 to 1 in terms of probability that it is correct.  Or if you ask people what is the chance this view of yours is wrong, very few people are willing to assign it any number at all.  Or if you ask people who believe in God or are atheists, what’s the chance you’re wrong – I’ve asked atheists what’s the chance you’re wrong and they’ll say something like a trillion to one, and that to me is absurd, that even if you think all of the strongest arguments for atheism are correct, your estimate that atheism is in fact the correct point of view shouldn’t be that high, maybe you know 90-10 or 95 to 5, at most.  So that maybe is my most absurd view.  Most things are much more up for grabs than we like to say they are.
Will: Yeah, I agree with you.
Tyler: No, you can’t agree with me, because it’s absurd.  I can agree with your absurd view, but you can’t agree with mine.

Continue reading "Tyler Vid on Disagreement" »


Are Meta Views Outside Views?

An inside view focuses on the internals of the case at hand, while an outside view compares this case to other similar cases.  The less you understand about something, the harder it is to apply either an inside or an outside view.  So the simplest approach would be to just do the best you could with each view and then combine their results in some simple way. 
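For instance, a sketch of the simplest such combination (the numbers are made up for illustration):

```python
def combine(estimates, weights=None):
    """Simple weighted average of the views' point estimates."""
    if weights is None:
        weights = [1.0] * len(estimates)   # simplest case: trust each view equally
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

inside_estimate = 10.0    # e.g. "this project will take 10 months"
outside_estimate = 18.0   # "similar past projects took about 18 months"

print(combine([inside_estimate, outside_estimate]))              # 14.0 with equal trust
print(combine([inside_estimate, outside_estimate], [1.0, 3.0]))  # 16.0 if the outside view seems three times more reliable
```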

Can we do better?  Perhaps, if we know something about when inside views tend to do better or worse, compared to outside views.  For example, we should probably emphasize views that give more confident estimates, and de-emphasize views from those biased by self-interest.  But do we know anything about the topics on which to prefer an inside or an outside view?

It is not clear to me that we really do know much about this.  But whatever framework we use to make this judgment, it seems to me to count as a meta-view, a view about views.  Furthermore, while it is easy to imagine useful outside meta-views, which compare this view-choice situation to other related view-choice situations, it is much harder to imagine useful inside meta-views, where you go through some detailed calculation to decide which view to prefer. 

This suggests to me that most useful meta-views are outside meta-views.  If you are going to reject an outside view in favor of an inside view on the basis of some insight about when inside views work better, you will be relying on an outside meta-view.  So it seems you can’t escape embracing some outside view, though you might embrace an outside meta-view instead of a basic outside view.


Surface Analogies and Deep Causes

Followup to: Artificial Addition, The Outside View’s Domain

Where did I acquire, in my childhood, the deep conviction that reasoning from surface similarity couldn’t be trusted?

I don’t know; I really don’t.  Maybe it was from S. I. Hayakawa’s Language in Thought and Action, or even van Vogt’s similarly inspired Null-A novels.  From there, perhaps, I began to mistrust reasoning that revolves around using the same word to label different things and concluding they must be similar.  Could that be the beginning of my great distrust of surface similarities?  Maybe.  Or maybe I tried to reverse stupidity of the sort found in Plato; that is where the young Eliezer got many of his principles.

And where did I get the other half of the principle, the drive to dig beneath the surface and find deep causal models?  The notion of asking, not "What other thing does it resemble?", but rather "How does it work inside?"  I don’t know; I don’t remember reading that anywhere.

But this principle was surely one of the deepest foundations of the 15-year-old Eliezer, long before the modern me.  "Simulation over similarity" I called the principle, in just those words.  Years before I first heard the phrase "heuristics and biases", let alone the notion of inside views and outside views.

Continue reading "Surface Analogies and Deep Causes" »


Parsing The Parable

The timing of Eliezer’s post on outside views, directly following mine on an outside view of the singularity, suggests his is a reply to mine.  But instead of speaking plainly, Eliezer offers a long Jesus-like parable, wherein Plato insists that outside views always trump inside views, that it is obviously true that death is just like sleep, and therefore that "our souls exist in the house of Hades." 

I did not suggest mine was the only or best outside view, or that it trumps any inside view of singularity. Reasonable people should agree inside and outside views are both valuable, and typically of roughly comparable value.  So if Eliezer thought my outside analysis was new and ably done, with a value typical of outside analyses, he might say "good work old boy, you’ve made a substantial contribution to my field of Singularity studies." 

Instead we must interpret his parable.  Some possibilities:

  • His use of Plato’s analogy suggests he thinks my comparison of a future AI revolution to the four previous sudden growth rate jumps is no better motivated than Plato’s analogy (which Eliezer sees as poorly motivated).
  • His offering no other outside view to prefer suggests he thinks nothing that has ever happened is similar enough to a future AI revolution to make an outside view at all useful.
  • His contrasting aerospace engineers’ success with schedulers’ failures in inside views suggests he thinks he has access to inside views of future AIs whose power is more like aerospace engineering than project scheduling. 

Continue reading "Parsing The Parable" »
