Author Archives: Robin Hanson

How Different AGI Software?

My ex-co-blogger Eliezer Yudkowsky recently made a Facebook post saying that recent AI Go progress confirmed his predictions from our foom debate. He and I then discussed this there, and I thought I’d summarize my resulting point of view here.

Today an individual firm can often innovate well in one of its products via a small team that keeps its work secret and shares little with other competing teams. Such innovations can be lumpy in the sense that gain relative to effort varies over a wide range, and a single innovation can sometimes make a big difference to product value.

However, big lumps are rare; typically most value gained comes via many small lumps rather than a few big ones. Most innovation comes from detailed practice, rather than targeted research, and abstract theory contributes only a small fraction. Innovations vary in their generality, and this contributes to the variation in innovation lumpiness. For example, a better washing machine is a relatively general innovation, since it can better wash many kinds of clothes.

If instead of looking at individual firms we look at nations as a whole, the picture changes because a nation is an aggregation of activities across a great many firm teams. While one firm can do well with a secret innovation team that doesn’t share, a big nation would hurt itself a lot by closing its borders to stop sharing with other nations. Single innovations make a much smaller difference to nations as a whole than they do to individual products. So nations grow much more steadily than do firms.

All of these patterns apply not just to products in general, but also to the subcategory of software. While some of our most general innovations may be in software, most software innovation is still made of many small lumps. Software that is broadly capable, such as a tool-filled operating system, is created by much larger teams, and particular innovations make less of a difference to its overall performance. Most software is created via tools that are shared with many other teams of software developers.

From an economic point of view, a near-human-level “artificial general intelligence” (AGI) would be a software system with a near-human level competence across almost the entire range of mental tasks that matter to an economy. This is a wide range, much more like the scope of abilities found in a nation than that found in a firm. In contrast, an AI Go program has a far more limited range of abilities, more like those found in typical software products. So even if the recent Go program was made by a small team and embodies lumpy performance gains, it is not obviously a significant outlier relative to the usual pattern in software.

It seems to me that the key claim made by Eliezer Yudkowsky, and others who predict a local foom scenario, is that our experience in both ordinary products in general and software in particular is misleading regarding the type of software that will eventually contribute most to the first human-level AGI. In products and software, we have observed a certain joint distribution over innovation scope, cost, value, team size, and team sharing. And if that were also the distribution behind the first human-level AGI software, then we should predict that it will be made via a great many people in a great many teams, probably across a great many firms, with lots of sharing across this wide scope. No one team or firm would be very far in advance of the others.

However, the key local foom claim is that there is some way for small teams that share little to produce innovations with far more generality and lumpiness than these previous distributions suggest, perhaps due to being based more on math and basic theory. This would increase the chances that a small team could create a program that grabs a big fraction of world income, and keeps that advantage for an important length of time.

Presumably the basis for this claim is that some people think they see a different distribution among some subset of AI software, perhaps including machine learning software. I don’t see it yet, but the obvious way for them to convince skeptics like me is to create and analyze a formal dataset of software projects and innovations. Show us a significantly-deviating subset of AI programs with more economic scope, generality, and lumpiness in gains. Statistics from such an analysis could let us numerically estimate the chances of a single small team encompassing a big fraction of AGI software power and value.
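
To make that concrete, here is a minimal sketch in Python, using entirely fabricated numbers, of the kind of comparison such a dataset could support: summarize how dispersed (“lumpy”) per-innovation gains are in a general software sample versus a hypothesized AI subset, and how much of the total value the biggest lumps capture. Only the shape of the analysis is meant to carry over to real data.

```python
# Illustrative only: fabricated numbers standing in for a real dataset of
# software projects coded for per-innovation value gains.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-innovation gains (arbitrary value units) for two subsets.
ordinary_gains = rng.lognormal(mean=0.0, sigma=1.0, size=1000)
ai_gains = rng.lognormal(mean=0.0, sigma=1.6, size=200)  # assumed "lumpier"

def lumpiness_summary(gains):
    """How dispersed are the gains, and how much value sits in the top 1% of lumps?"""
    top_n = max(1, len(gains) // 100)
    top_share = np.sort(gains)[::-1][:top_n].sum() / gains.sum()
    return {"log-sd of gains": float(np.log(gains).std()),
            "share of value in top 1%": float(top_share)}

for label, gains in [("ordinary software", ordinary_gains),
                     ("AI software (hypothesized)", ai_gains)]:
    print(label, lumpiness_summary(gains))

# With real data, significantly higher dispersion and top-share for the AI
# subset would support estimating the chance that a single innovation (and so
# perhaps a single small team) captures a big fraction of total value.
print("largest single AI gain as a share of all AI gains:",
      round(float(ai_gains.max() / ai_gains.sum()), 3))
```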

That is, we might estimate the chances of local foom. Which I’ve said isn’t zero; I’ve instead just suggested that foom has gained too much attention relative to its importance.

Can’t Stop Lecturing

Imagine a not-beloved grade school teacher who seemed emotionally weak to his students, and was fastidious about where exactly everything was on his desk and in his classroom. If the students moved things around when the teacher wasn’t looking, this teacher would get visibly upset and give long boring lectures against such behavior. This sort of reaction might well encourage students to move things, just to get a rise out of the teacher.

Imagine a daughter who felt overly controlled and under-considered by clueless parents, and who was attracted to and tempted to get involved with a particular “bad boy.” Imagine that these parents seemed visibly disturbed by this, and went out of their way to lecture her often about why bad boys are a bad idea, though never actually telling her anything she didn’t think she already knew. In such a case, this daughter might well be more tempted to date this bad boy, just to bother her parents.

Today a big chunk of the U.S. electorate feels neglected by a political establishment that they don’t especially respect, and is tempted to favor political bad boy Donald Trump. The main response of our many establishments, especially over the last few weeks, has been to constantly lecture everyone about how bad an idea this would be. Most of this lecturing, however, doesn’t seem to tell Trump supporters anything they don’t think they already know, and little of it acknowledges reasonable complaints regarding establishment neglect and incompetence.

By analogy with these other cases, the obvious conclusion is that all this tone-deaf sanctimonious lecturing will not actually help reduce interest in Trump, and may instead increase it. But surely an awful lot of our establishments must be smart enough to have figured this out. Yet the tsunami of lectures continues. Why?

A simple interpretation in all of these cases is that people typically care more about making sure they are seen to take a particular moral stance than they care about the net effect of their lectures on behavior. The teacher with misbehaving students cares more about showing everyone he has a valid complaint than he does about reducing misbehavior. The parents of a daughter dating a bad boy care more about showing they took the correct moral stance than they do about whether she actually dates him. And members of the political establishment today care more about making it clear that they oppose Trump than they do about actually preventing him from becoming president.

Against DWIM Meta-Law

Smart capable personal assistants can be very useful. You give them vague and inconsistent instructions, and they “do what I mean” (DWIM), fixing your mistakes. If you empower them to control your interactions, you have less need to fear that mistakes will mess up those interactions.

But one thing a DWIM personal assistant can’t help you so much with is your choice of assistants. If assistants were empowered to use DWIM on your choice to fire them, they might tend to decide you didn’t really mean to fire them. So if you are to have an effective choice of assistants, and thus effective competition among potential assistants, then those same assistants can’t protect you much from possible mistakes in your meta-choices regarding assistants. They can protect you from other choices, but not that choice.

The same applies to letting people choose what city or nation to live in. When people live in a nation, that national government can use regulation to protect them from making many mistakes. For example, it can limit their legally available options of products, services, and contracts. But if people are to have an effective choice to change governments by changing regions, then such governments can’t use regulation much to protect people from mistakes regarding region choice. After all, a government authorized to declare your plan to move away from it a mistake could use that power to stop you from ever rejecting it.

Similarly we can elect politicians who pass laws to protect us from many mistakes. But if we are to have an effective choice of politicians to represent us, then they can’t protect us much from bad choices of politicians to represent us. We can’t let our current elected leaders much regulate who we can elect to replace them, if we are to be able to actually replace them.

I’ve long been intrigued by the idea of private law, wherein people can stay in the same place but contract with different legal systems, which then set the rules regarding their legal interactions with others. Such rules might in effect change the laws of tort, crime, marriage, etc. that people live under. And so such competition between private laws might push the law to evolve toward more efficient laws.

One of the things that legal systems tend to do is to protect people from mistakes. For example, contract law won’t enforce contracts it sees as mistakes, and it fills in contract holes it sees resulting from laziness. Law is often DWIM law. Which can be great when you trust your law to choose well. But if one is to have an effective choice of private law, and real competition for that role, then one’s current law shouldn’t be able to overrule one’s choice of a new law. Instead, one’s choice of a private legal system, like one’s choice of nation, needs to be a simple clear choice where one is not much protected from mistakes.

Today we don’t in fact have such private law, because our standard legal system won’t enforce contracts we sign that declare our intent to use different legal systems. To achieve private law, we’d need to change this key feature of our standard legal system.

Your choice to change nations, either for temporary travel or for permanent moves, can be a big mistake. It might result from temporary mood fluctuations, or from misunderstandings about the old nation or the new. Nevertheless we have little regulation of such choices. Instead, individuals are mostly fully exposed to their possible mistakes. For example, while Europe is heavily regulated in general, European teens today can decide to go join ISIS, even when many others greatly regret such choices. We disapprove of nations that prevent people from leaving, because that cuts competition between nations to serve people.

Similarly, if we want competition between legal systems without forcing people to move, we’ll have to change our law so that it does not protect people from bad choices of legal systems. There will have to be a simple clear act by which one chooses a law, a choice not much subject to legal review and reversal. We’d want to encourage people to take such choices seriously, but then to accept the choices they make. Freedom of choice requires a freedom to make mistakes. For big choices, those can be big mistakes.

Scared, Sad, Angry, Bitter

These four emotions: scared, sad, angry, and bitter, all suggest that one has suffered or will suffer a loss. So all of them might inspire empathy and help from others. But they don’t do so equally. Consider the selfish costs of expressing empathy for these four emotions.

While a scared person hasn’t actually suffered a loss yet, the other kinds of feelings indicate that an actual loss has been suffered. So the scared person is not yet a loser, while the others are losers. When there are costs to associating with losers, those costs are lowest for the scared. For example, if it takes real resources to help someone who has suffered a loss, the scared person is less likely to need such resources.

People who are angry or bitter blame particular other people for their loss. So by expressing empathy with or helping such people, you risk getting involved in conflicts with those other people. In contrast, helping people who are merely sad is less likely to draw you into conflicts.

People who are angry tend to think they have a substantial chance of winning a conflict with those they blame for their loss. Anger is a more visible emotion that drives one more toward overt conflict. Angry people are visibly trying to recruit others to their fight.

In contrast, bitter people tend to think they have little chance of winning an overt conflict, at least for now. So bitter people tend to fume in private, waiting for their chance to hit back unseen. If you help a bitter person, you may get blamed when their hidden attacks are uncovered, and your support may tempt them to become angry and start an overt fight. So by helping a bitter person, you are more likely to be on the losing end of a conflict.

These considerations suggest that our cost of empathizing with and helping people with these emotions increases in this order: scared, sad, angry, and bitter. And this also seems to describe the order in which we actually feel less empathy; we feel less empathy when its costs are higher.

Note that this same order also describes who has suffered a larger loss, on average. Scared people expect to suffer the smallest loss, while bitter people suffer the largest loss. (Ask yourself which emotion you’d rather feel.) So our willingness to express empathy with those who suffer a loss is inverse to the loss they suffer. We empathize the most with those who suffer the least. Because that is cheapest.

Thanks to Carl Shulman for pointing out to me the social risks of helping bitter folk, relative to sad folk.

Added 18Feb: Interestingly, many lists of emotions don’t include bitterness or an equivalent. It is as if we’d like to pretend it just doesn’t exist.

Does Money Ruin Everything?

Imagine someone said:

The problem with paying people to make shoes is that then they get all focused on the money instead of the shoes. People who make shoes just because they honestly love making shoes, and who aren’t paid anything at all, make better shoes. Once money gets involved people lie about how good their shoes are, and about which shoes they like how much. But without money involved, everyone is nice and honest and efficient. That’s the problem with capitalism; money ruins everything.

Pretty sad argument, right? Now read Tyler Cowen on betting:

This episode is a good example of what is wrong with betting on ideas. Betting tends to lock people into positions, gets them rooting for one outcome over another, it makes the denouement of the bet about the relative status of the people in question, and it produces a celebratory mindset in the victor. That lowers the quality of dialogue and also introspection, just as political campaigns lower the quality of various ideas — too much emphasis on the candidates and the competition. Bryan, in his post, reaffirms his core intuition that labor markets usually return to normal pretty quickly, at least in the United States. But if you scrutinize the above diagram, as well as the lackluster wage data, that is exactly the premise he should be questioning. (more)

Sure, relative to ideal people who only discuss and think about topics with a full focus on and respect for the truth and their disputants, what could be the advantage of bets? Money will only distract them from studying truth, right?

But just because people don’t bet doesn’t mean they don’t have plenty of other non-truth-oriented incentives and interests. They are often rooting for positions, and celebrating some truths over others, due to these other interests. Bet incentives are at least roughly oriented toward speaking truth; the other incentives, not so much. Don’t let the fictional best be the enemy of the feasible-now good. For real people with all their warts, bets promote truth. But for saints, yeah, maybe not so much.

A Bet I’d Have Lost

Three and a half years ago I made my largest personal donation ever to the Brain Preservation Foundation, to help fund their Brain Preservation Prizes. Just now they’ve announced that 21st Century Medicine has won their $26,735 Small Mammal Brain Preservation Prize, using a more cryonics-based approach. The other main competitor, Mikula, used the “plastination” approach I favored back then:

I offer to bet up to $5K that plastination is more likely to win this full prize than cryonics. (more)

Good thing for me no one accepted my offer; now it looks more like I’d have lost it. Next we’ll see who wins the Large Mammal Brain Preservation Prize, and when.

Why I Lean Libertarian

Imagine that one person, or a small group, wants to do something, like watch pornography, do uncertified medical procedures, have gay sex, worship Satan, shoot guns, drink raw milk, etc. Imagine further that many other people outside that small group don’t want them to do this. They instead want the government to make a law prohibiting similar groups from doing similar things.

In this prototypical situation, libertarians tend to say “let them do it” while others say “have the government make them stop.” If we take a cost-benefit perspective, then the key question is whether this small group gains more from their activity (or an added increment of it) than others lose (including losing via their “altruistic” concern for the small group). Since this small group would choose to do it if allowed, we can presume they expect to gain something. And if others complain and try to make them stop (or cut back), we can presume they expect to lose. So we are trying to estimate the relative magnitude of these two effects.

I see three considerations that, all else equal, lean this choice in the libertarian direction.

  • Law & Government Are Costly – It will take real resources to create and enforce a law to ban this activity. We’ll have to negotiate the wording of this law, and then tell people about it. People will complain about violations, and then we’ll have to adjudicate those complaints, and punish violators. We’ll make mistakes in which laws to create, who to punish, and how to manage the whole process. More rules will discourage innovation, and invite more lobbying. All of which is costly.
  • Local Coordination Might Work – If people do something that hurts those around them more, often those nearby others can coordinate to discourage them via contract and freedom of association. If playing your music loud bothers folks in the apartment next door, your common landlord can set rules to limit your music volume. And kick you out if you don’t follow his rules. The more ways that smaller organizations could plausibly solve a problem, the less likely we need central government to get involved.
  • Lawsuits Might Work – Legal systems have well-established processes whereby some people can sue others, claiming that the actions of those others have hurt them. Suit losers must pay, discouraging the activity. Yes, people who are harmed may need to coordinate to sue together, and yes, legal systems tend to demand relatively concrete evidence of real harm, and that the accused caused that harm. It might be hard to figure out who to accuse, the accused might not have enough money to pay, and the legal process might be too expensive to make it worth bothering. But again, the more situations where the law could plausibly solve the problem, the less likely that we need extra government involvement.

Again, each of these considerations leans the conclusion in a libertarian direction, all else equal. Yes, they can collectively be overcome by strong enough other considerations that lean the other way. For example, I’ll grant that for the case of air pollution, we plausibly have strong enough evidence of large harms on outsiders, harms insufficiently discouraged by local coordination and lawsuits. So yes in this case central government might be an attractive solution, if it can act cheaply and efficiently enough.

But the main point here is that the three considerations above justify a libertarian default that must be overcome by specific arguments to the contrary. If outsiders complain about an activity, but aren’t willing to buy less of it via contract, or to sue for less of it in court, maybe they aren’t really being hurt that much. There is an asymmetry here: if we don’t ban an activity and might get too much, contract & law could reduce it a lot, but if we ban an activity and might get too little, contract & law can’t increase it much.

Yes, other persuasive contrary considerations might be found, including considerations not based on the net harm of the disputed actions. But the less you think you know about these other considerations, the more your choice will be influenced by these three basic considerations, all of which seem to me pretty solid.

While I have said before that I am not a libertarian according to common strict definitions, I still usually tend to lean libertarian, because in fact arguments based on further considerations often seem to me pretty weak. While one can often make clever arguments, it is often hard to have much confidence in them; the world seems just too complex. And so I often have to fall back on simple defaults. Which, as I’ve argued above, are libertarian.

My Circus Sideshow

If a fossil of an alien, or any alien artifact, were put on display, it would attract millions. Sure, some would see it because of its objective importance. But most would come just because it is weird.

People used to see a traditional circus sideshow for similar reasons. But consider: once you know that there exist dwarves, sword swallowers, and women with beards, what more do you learn by seeing them in person? Yes, in part you just want to brag about how much you’ve seen, but you are also actually curious about what such things look like up close.

Circus sideshows are weird, but they are also far from maximally strange. Many ocean creatures are far stranger. The attraction is in part a mixture of the strange and familiar. Once a familiar thing has changed in one very big way, one naturally wonders what other aspects of it are changed and how. One doesn’t wonder that about something where all its features are strange.

Tyler Cowen suggests this as the appeal of my upcoming book The Age of Em: Work, Love, and Life When Robots Rule the Earth:

The ostensible premise of the book is that people have become computer uploads, and we have an entirely new society to think about: how it works, what problems it has, and how it evolves. .. But this is more than just a nerdy tech book, it is also:

  • Straussian commentary on the world we actually live in. ..
  • A reminder of how strange everything is .. It’s a mock of all those who believe in individual free will.
  • An attempt to construct a fully rational theology ..
  • An extended essay on the impossibility of avoiding theology ..
  • A satire on the rest of social science, and how we try to explain and predict the future.
  • A meta-level growth model in which energy alone matters and the “fixed factor” assumptions of other models are relativized. ..
  • A challenge to our notions of wherein the true value of a life resides. (more)

I describe an entire world in great detail, a world that is a mix between a strange alien civilization and our familiar world. Any world described in enough detail must raise issues that look like theology, including free will and where true value resides. And any detailed strange yet familiar world can be seen as satire on social science and Straussian commentary on our world.

So the key is that, like a circus side show, my book lets readers see something strange yet familiar in great detail, so they can gawk at what else changes and how when familiar things change. My book is a dwarf, sword swallower, and bearded lady, writ large.

Okay, yeah, I can accept that will be the main appeal of my book. Just as the main appeal of seeing an alien fossil to most would be its strangeness. Even if understanding aliens were actually vitally important.

Me in Budapest

Next Friday January 29 (8pm), I’ll speak on When Robots Rule the Earth in Budapest, at Palack Borbár, at the “8th Thalesians Séance.” Following my talk, a panel will “discuss and challenge his ideas.”

Added 2Feb: A video of the talk is now up here.

Here Be Dragons

In his new book Here Be Dragons: Science, technology, and the future of humanity, Olle Haggstrom mostly discusses abstract and philosophical issues. But at one point in the book he engages the more specific forecasts I discuss in my upcoming book. So let me quote him and offer a few responses:

Once successful whole-brain emulation has been accomplished, it might not be long before it becomes widely available and widely used. This brings us to question (4) – what will society be like when uploading is widely available? Most advocates of an uploaded posthuman existence, such as Kurzweil and Goertzel, point at the unlimited possibilities for an unimaginably (to us) rich and wonderful life in ditto virtual realities. One researcher who stands out from the rest in actually applying economic theory and social science to attempt to sketch how a society of uploads will turn out is the American economist Robin Hanson, beginning in a 1994 paper, continuing with a series of posts on his extraordinary blog Overcoming Bias, and summarizing his findings (so far) in a chapter in Intelligence Unbound and in an upcoming book.

Two basic assumptions for Hanson’s social theory of uploads are
(i) that whole-brain emulation is achieved mostly by brute force, with relatively little scientific understanding of how thoughts and other high-level phenomena supervene on the lower-level processes that are simulated, and
(ii) that current trends of hardware costs decreasing at a fast exponential rate will continue (if not indefinitely then at least far into the era he describes).

Actually, I just need to assume that at some point the hardware cost is low enough to make uploads substantially cheaper than human workers. I don’t need to make assumptions about rates at which hardware costs fall.

Assumption (i) prevents us from boosting the emulated minds to superhuman intelligence levels, other than in terms of speed, by transferring them to faster hardware. Assumption (ii) opens up the possibility of quickly populating the world with billions and then trillions of uploaded minds, which is in fact what Hanson predicts will happen. ..

Actually, population increases quickly mainly because factories can crank out an amount of hardware equal to their own economic value in a short time – months or less.
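
As a back-of-the-envelope illustration of the growth rate this implies (the one-month payback time below is an assumed round number, not a figure from the book): if the capital that makes em hardware repays its cost in about a month, the hardware stock, and with it the em population, can roughly double each month.

```python
# Rough sketch with assumed numbers: a one-month payback time on hardware
# production implies the em hardware stock can roughly double every month.
payback_months = 1.0        # assumed time for factories to repay their cost
horizon_months = 12
doublings = horizon_months / payback_months
multiplier = 2 ** doublings
print(f"hardware (and em population) multiplier after a year: ~{multiplier:,.0f}x")
# Roughly 4,096x in a year, which is how a modest initial em population could
# plausibly grow into billions and then trillions within a few years.
```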

Decreases in hardware costs will push down wages. .. This will send society to the classical Malthusian trap in which population will grow until it is hit by starvation (uploaded minds will not need food, of course, but things like energy, CPU time and disk space). ..

There are many exotica in Hanson’s future. One is that uploads can run on different hardware and thus at different speeds, depending on circumstances. .. Even more exotic is the idea that most work will be done by short-lived so-called spurs, copied from a template upload to work for, say, a few hours and then be terminated (i.e., die). .. Will they not revolt? The question has been asked, but Hanson maintains that “when life is cheap, death is cheap.”

First, spurs could retire to a much slower speed instead of ending. Second, just before an em considers whether to split off a spur copy for a task, that em can ask itself if it would be willing to do that assigned task if it found itself a few seconds later to be the spur. Ems should quickly learn to reliably estimate their own willingness, so they just won’t split off spurs if they estimate a high chance that the spur would become troublesome. Maybe today we find it hard to estimate such things, but they’d know their world well, so it would be an easy question for them. So I just can’t see spur rebellion as a big practical problem, any more than we have a big problem with planning to go to work for the day and then suddenly going to the movies instead.

The future outlined in Hanson’s theory of uploaded minds may seem dystopian .. but Hanson does not accept this label, and his main retorts seem to be twofold. First, population numbers will be huge, which is good if we accept that the value of a future should be measured .. by the total amount of well-being, which in a huge population can be very large even if each individual has only a modest positive level of well-being. Second, the trillions of short-lived uploaded minds working hard for their subsistence right near starvation level can be made to enjoy themselves, e.g., by cheap artificial stimulation of their pleasure center.

I don’t think I’ve ever talked about “cheap artificial stimulation of their pleasure center.” I instead say that most ems work and leisure in virtual worlds of spectacular quality, and that ems need never experience hunger, disease, or intense pain, nor ever see, hear, feel, or taste grime or anything ugly or disgusting. Yes they’d work most of the time but their jobs would be mentally challenging, they’d be selected for being very good at their jobs, and people can find deep fulfillment in such modes. We are very culturally plastic, and em culture would promote finding value and fulfillment in typical em lives. In addition, I estimate that most humans who have ever lived have had lives worth living, in part because of this cultural plasticity.

Then there’s the issue of whether and to what extent we should view Hanson’s analysis as a trustworthy prediction of what will actually happen. A healthy load of skepticism seems appropriate. .. It also seems that he works so far outside of the comfort zones of where economic theory has been tested empirically, and uses so many explicit and implicit assumptions that are open to questioning, that his scenarios need to be taken with a grain of salt (or a full bushel).

You could say this about any theoretical analysis of anything not yet seen. All theory requires you to make assumptions, and all assumptions are open to questioning. Perhaps my case is worse than others, but the above certainly doesn’t show that to be the case.

One obvious issue to consider is whether society following a breakthrough in the technology will be better or worse than society without such a breakthrough. The utopias hinted at by, e.g., Kurzweil and Goertzel seem pretty good, whereas Hanson’s Malthusian scenario looks rather less appealing.

But Kurzweil and Goertzel offer inspiring visions, not hard-headed social science analysis. Of course that will sound better.
