This is our monthly place to discuss relevant topics that have not appeared in recent posts.
There has been some discussion about when the singularity (intelligence explosion) might occur, but I haven’t seen any discussion about whether the type of self-improving AI we’re talking about here is even theoretically possible. For some reason this whole area has a Gödel’s incompleteness theorem feel to it. But that’s just my instinct.
For example, I could build a machine that learns using, say, SVM. But it’s really only going to get substantially smarter than humans if it searches through the space of learning algorithms to find better ways to learn. But the space of learning algorithms that it could consider is restricted by what humans could come up with. Possibly more important than finding better learning algorithms, though, is finding more data to learn from. This, again, would probably require greater-than-human creativity.
I guess I’m just wondering if there is a thought experiment or theory that demonstrates it’s possible. I would guess there is, I’m just not familiar with it. Thanks.
Evolution produced human intelligence with less-than-human creativity. It substituted lots of resources (time and bodies) for creative problem solving ability.
Is there any reason why evolution could not produce greater-than-human intelligence? Humans are smarter than evolution, so we should be able to create something smarter than humans faster than evolution could. I think it took evolution about 100,000–200,000 years to produce indisputably smarter variations on humans, so that’s a pretty low bar.
Also, finding better ways to learn than humans isn’t hard. How would you redesign your brain to make learning easier? We find it easy to learn those things that were important in the ancestral environment (lion avoidance, hunting, location of food), not those things that are useful today (calculus, programming, how to control appetite).
Considering that we can’t even define intelligence physically right now, I’d say the question is impossible to answer. We have a working definition and measure of human intelligence that works fairly well, but attempting to apply it beyond our species is fallacious.
I personally think most of the AI singularity discussion is a waste of time. We can’t even define its most general parameters. We really don’t know how to get from here to there or if it’s even possible.
You’ve got the demand for proof backwards.
Everything we know about the topic (AI, computers, psychology, biology, physics) suggests that self-improving AI is possible.
You’re misapplying Gödel. It’s not relevant. Similarly, you seem to have an intuition that computers cannot possibly achieve more than their creators, which is demonstrably false. (E.g. chess-playing programs.)
You ask if there is “a thought experiment or theory that demonstrates it’s possible”. You need to reverse your burden of proof. There is no thought experiment or theory which demonstrates that self-improving AI is impossible.
Since it is not known to be impossible, and since the benefits would be astronomical, people work on the idea.
Thanks, but it wasn’t a demand for proof. I was just looking for a reference, if there is one, to some intuition behind why it might be possible. Hook’s reasoning makes some sense, for example.
My intuition is not that computers cannot achieve more than their creators. My intuition was more along the lines of “if we had access to our own code, the improvements we could make are limited by the intelligence generated by that very code.” So I was having trouble picturing the next major intelligence jump. I suppose, though, that the change would be in small increments, theoretically at an increasing pace (although the latter is unclear, because the more intelligent the AI, the harder it is to find ways to get smarter).
I think AI development is a useful way for intelligent people to spend their time.
Jason asked: I was just looking for some intuition behind why it might be possible.
Maybe it would help to separate your quest into two parts:
1. Does the universe allow any physical entity to have intelligence significantly greater than current humans?
2. Is it possible for humans to build a device that self-improves to become the kind of entity in #1?
I’m not sure which of these concerns you most.
Re: I haven’t seen any discussion about whether the type of self-improving AI we’re talking about here is even theoretically possible.
Most of us just take that for granted. There are discussions out there, though – e.g. see the paper at:
What is your opinion on drug use?
Much of human decision making is biased by people being on drugs.
Should we accept it because of the positive results it can have (Erdős and Mullis claim drug use helped their discoveries)?
This question comes from a podcast that discusses how many of history’s decisions were influenced by leaders who would fail a drug test.
“Show 20 – (BLITZ) History Under The Influence
This first “Blitz Edition” of the show looks at the hidden side of history, the impact of drugs and alcohol on past events. Dan has a whole list of historical figures he wants to drug test,” which is available here.
I had an interesting conversation with John Nye about immigration after the GMU Public Choice seminar yesterday and it got me thinking quite a bit. His central argument seems to be that since the freedom of association implies the freedom of exclusion, it is logically inconsistent to argue for open immigration on purely libertarian grounds. I don’t think I’m perfectly willing to grant this point, since I don’t ultimately agree that anyone has any rights of association on a national level, merely accidents of birth. Still, even if I were willing to grant his point, it forced me to think about what would be the best immigration policy generator. My first thought is that local labor boards could develop their own quotas, which could be submitted to state, then national levels, then distributed to embassies worldwide. These local organizations could set immigration criteria as they see fit (perhaps capped somehow?) and points of entry would be tied to the issuing authority. In-country migration can be as free as it is today.
For others reading, I have considered some possible adverse selection problems, but I tend to buy into the story that there is a natural upward bias on immigration, in that immigrants tend to be above the mean on intelligence, productivity, and lawfulness. I haven’t done much more than a perfunctory lit search on this, so it’s entirely possible that I could be wrong. I would welcome evidence to refute my priors, even if I would find it quite surprising.
Anyway, it got me to thinking about how you might approach immigration reform, Robin. I tried to imagine how to leverage prediction markets, but came up empty. What are your thoughts?
Libertarians say you can exclude folks from your community, but communities should be voluntary, so nations are not communities.
I had some thoughts on my ideal system of immigration here.
> It would be extremely difficult to try and stop immigration from our southern border
Seriously? It would be politically difficult, yes. But when the will to do it was there, Eisenhower was able to accomplish it effectively. It could probably be done even more efficiently using a national citizenship/visa card. If I show up at your restaurant or broccoli farm and some worker doesn’t have one, you’re going to be fined and the worker repatriated. Intrusive, yes. But we already have authorities poking around restaurants to look for unsafe food handling and lax enforcement of alcohol laws. I would guess that the IRS can also already look at the payrolls of any business, but I’m not sure.
Just a technical issue, but it’s really been annoying for a while. To access my browser’s address bar, I usually hit alt-D. It’s a very convenient way of getting to other sites. However, on this site (and this site only AFAIK), hitting alt-D won’t do that; instead it makes a del-tag appear in the comment box.
If you could do something about this, that would be mucho appreciado.
 Not an actual Spanish phrase.
I’m interested in the fail points for major techs.
For example, what’s the most likely contender for a problem we can’t solve to make wet nanotech, dry nanotech, EMs, computing beyond flat integrated circuits, etc.?
That is, what should we keep our eye on so that we will know to lower our estimates that these techs will be coming any time soon?
> I tend to buy into the story that there is a natural upward bias on immigration, in that immigrants tend to be above the mean on intelligence, productivity, and lawfulness
Plausible that they are above average for their own population. Not plausible that all nations have the same average values for these traits. Not plausible that any known force can shift these average values by all that much (they can certainly be shifted a little bit). It’s been tried many times by researchers, and laudably so. It would be great if US whites could have the 112 mean IQ of Ashkenazim. IQ scores correlate with job performance and are not biased against whites: the questions that whites find challenging or easy are the same ones Ashkenazim find, respectively, challenging or easy. No wonder then that Ashkenazim are about 15x “overrepresented” in the professoriate, in the richest 500 Americans, among American novelists, etc (they make up ~33% of all elite groups, but are ~2% of the population). Overrepresented is of course a semi-misnomer, unless one mistakenly believes that they have the same average traits as others.
Actually, eugenics definitely could bring American whites to a mean 112 IQ, and change other trait averages. I’m describing, not necessarily prescribing. But no other known force can do the same.
Plausible that you will get in trouble if you draw attention to the above facts.
This is a selfish question. I’m considering moving into a 7 bedroom house with 10 people. The house is priced at $6200 a month. What is an economically efficient way to price the rooms? Some sort of auction I assume, but it gets tricky because the entire house is priced so the more one room costs the less every other room costs.
Auction the rooms, then use this:
auction price of room / total value of all room auctions = [ room rent ] / actual house rent
to recalculate based on actual rent.
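A minimal sketch of that rescaling (room names and bid amounts are invented for illustration):

```python
def allocate_rents(winning_bids, house_rent):
    """Scale each room's winning bid so the rents sum to the actual house rent."""
    total = sum(winning_bids.values())
    return {room: bid / total * house_rent for room, bid in winning_bids.items()}

# Hypothetical winning bids for the $6200/month house
bids = {"master": 1200, "mid_1": 900, "mid_2": 900, "small": 500}
rents = allocate_rents(bids, 6200)  # rents sum to exactly 6200
```

Whatever the bids happen to total, this preserves the ratios between room prices while forcing the sum to match the real rent.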
I find your auction really tricky and weird insofar as the bid may not be what I actually pay, and I can’t figure out what bid I would make. If it weren’t a big deal to anyone, it might work out OK. But if I really wanted Nice Room 1 but had an inelastic maximum of dollars I could pay, I wouldn’t know how to bid. Likewise if I was rather determined to avoid Small Room 3, but didn’t want to pay too much. Also, is your auction open and non-simultaneous? If so there might be runaway ‘inflation’ or ‘deflation’ as the thing goes on, or am I wrong about that? If it is a secret auction and simultaneous, what do you do about people winning multiple rooms or zero rooms?
Efficient, but not fair, using Coase’s Theorem:
Assign rooms randomly (or by letting people pick in a random order), then allow people to bribe each other to switch rooms.
Alternately, forget about the total rent figure; each person shares that equally. Then auction the rooms, with the winning sum paid to everyone else: each person agrees to pay x dollars to each other person (per month or whatever) if they get a specific room, and the high bid for each room wins.
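A sketch of the net monthly payments under that scheme (names, rent, and bid amounts are invented; note the nets always sum back to the house rent):

```python
def net_rents(winning_bids, house_rent):
    """winning_bids maps each winner to the amount they pay each other housemate.

    Each person starts with an equal share of the rent, pays their own bid to
    every other housemate, and collects every other winner's bid.
    """
    n = len(winning_bids)
    base = house_rent / n
    total = sum(winning_bids.values())
    return {person: base + (n - 1) * bid - (total - bid)
            for person, bid in winning_bids.items()}

# Hypothetical three-person house with $3000 rent
nets = net_rents({"alice": 100, "bob": 50, "carol": 0}, 3000)
```

Here alice pays $1150 for the best room, bob $1000, and carol $850, so bidding high for a room directly compensates the housemates who end up in worse ones.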
Or auction pick order, to prevent the “two rooms, one person” problem.
Use Sperner’s Lemma. Essentially, one can prove that there is some way of assigning rents to rooms so that everyone prefers a different room. (There is no guarantee of economic efficiency in the usual sense, but that’s not necessarily what you’re aiming for.) I’ve never actually used the applet, but it might prove useful.
I’ve tried repeatedly to do this using google and OB’s search function, but I’ve been unable to find the answer, so if anyone can help me out, it’s much appreciated.
I remember reading on this blog something to the effect of “visiting a doctor had a negative expected value until about the year 1900.” I don’t remember a source for it, and because of the many posts that use similar terminology I’ve been unable to track down the particular post. Does anyone remember what I’m talking about, or, alternately, have a source for something like this?
Robin Hanson and John Delaney talking prediction markets: http://english.aljazeera.net/programmes/rizkhan/2010/04/20104871331248330.html
Playing around with horse racing (sports predictions markets) a lot at Betfair, I finally fully understood the flaw in Bayes, and the difference between near and far modes.
I noticed that far from a big race, the prices on offer are much poorer: the ‘book’ is well above 100% for backing (betting for), and well below 100% for laying (betting against). Near a big race, the prices are much better, and the book begins to converge on 100%.
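The ‘book’ here is just the sum of implied probabilities from the decimal odds on offer; a quick sketch (the odds values are invented for illustration):

```python
def book_percentage(decimal_odds):
    """Sum of implied probabilities (100/odds); a fair book totals exactly 100%."""
    return sum(100.0 / odds for odds in decimal_odds)

far_back = [1.8, 3.2, 4.0, 9.0]    # poorer prices offered far from the race
near_back = [2.0, 3.6, 4.6, 11.0]  # better prices shortly before the race

# The back book shrinks toward 100% as the race approaches
assert book_percentage(far_back) > book_percentage(near_back) > 100
```

The gap between the book and 100% is the margin punters concede to whoever set the prices, which is what motivates the market-maker insight below.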
I’ve observed the same thing for horses which are unknown quantities: foreign horses, for instance, or horses running for the first time (maidens). Punters think of them as far abstractions, and the odds are generally poor, whereas horses that are known quantities (lots of info available) are thought of as near, and the prices are generally better.
It seems that when thinking in far mode, punters badly miscalibrate probabilities in a very particular way. When thinking of how to take advantage of this, I suddenly had a cascade of insights.
Here’s the horse racing gambling insight: in far mode (a long way out from a race), you don’t want to take the odds on offer. The only way to win is to be the market maker and set the odds yourself. You should back at the lay price, and lay at the back price.
And here’s the generalized insight:
Far mode is the market maker and near mode is only deciding on the accuracy of the prices
And applied to Bayes, decision theory and logic:
The flaw in Bayes is that it can only decide on the accuracy of the probabilities; it is not a market maker. Even decision theory can be redefined in purely passive terms, as a predefined set of actions which may or may not be enacted. Categorization, on the other hand, is creative: it changes the very parameters on which decision making is based. It does not passively calculate odds; instead it makes the odds.
Bayes (near mode thought) is the sucker punter that can only accept or reject the odds, but categorization (far mode thought) is the bookie and ultimate market maker.
And yes, my insights do work. I win.
… be a charity angel.