No Third AI Way

A few days ago in the Post:

Bryan Johnson .. wants to .. find a way to supercharge the human brain so that we can keep up with the machines. .. His science-fiction-meets-science start-up, Kernel, is building a tiny chip that can be implanted in the brain. .. Top neuroscientists who are building the chip .. hope that in the longer term, it will be able to boost intelligence, memory and other cognitive tasks. .. In an age of AI, he insists that boosting the capacity of our brains is itself an urgent public concern.

In a video discussion between James Hughes and me, just posted today, Hughes said:

One of the reasons why I’m skeptical about the [em] scenario that you’ve outlined, is that I see a scenario where brains extending themselves through AI and computing tools basically slaved to the core personal identity of meat brains is a more likely scenario than one where we happily acknowledge the rights and autonomy of virtual persons. .. We need to have the kind of AI in our brain which is not just humans 1.0 that get shuffled off to the farm while the actual virtual workers do all the work, as you have imagined.

Many hope for a “third way” alternative to both ems and more standard AI software taking all the jobs. They hope that instead “we” can keep our jobs via new chips “in” or closely integrated with our brain. This seems to me mostly a false hope.

Yes of course if we have a strong enough global political coordination we could stake out a set of officially human jobs and forbid machines from doing them, no matter how much better machines might be at them. But if we don’t have such strong coordination, then the key question is whether there is an important set of jobs or tasks where ordinary human brains are more productive than artificial hardware. Having that hardware be located in server racks in distant data centers, versus in chips implanted in human brains, seems mostly irrelevant to this.

If artificial hardware can be similarly effective at such tasks, then it can have enormous economic advantages relative to human brains. Even today, the quantity of artificial hardware can be increased very rapidly in factories. And eventually, artificial hardware can be run at much faster speeds, using much less energy. Humans, in contrast, grow very slowly, have limited brain speeds, and are fragile and expensive. It is very hard to see humans outcompeting artificial hardware at such tasks unless the artificial hardware is just very bad at such tasks. That is in fact the case today, but it would not at all be the case with ems, nor with other AI with similar general mental abilities.


Power Corrupts, Slavery Edition

I’ve just finished reading a 1980 book Advice Among Masters: The Ideal in Slave Management in the Old South, which mostly quotes US slave owners from the mid 1800s writing on how to manage slaves. I really like reading ordinary people describe their to-me-strange worlds in their own words, and hope to do more of it. (Suggestions?)

This book has made me rethink where the main harms from slavery may lie. I said before that slaves were most harmed during and soon after capture, and that high interest rates could induce owners to work slaves to an early death. But neither of these apply in the US South, where the main harm had seemed to me to be from using threats of pain to induce more work on simple jobs.

However, this book gives the impression that most threats of pain were not actually directed at making slaves work harder. Slaves did work long hours, but then so did most poor European workers around that time. Slave owners didn’t actually demand that much more work from those capable of more work, instead tending to demand similar hours and effort from all slaves of a similar age, gender, and health.

What seems instead to have caused more pain to US South slaves was the vast number of rules that owners imposed, most of which had little direct connection to key problems like shirking at work, stealing, or running away. Rules varied quite a bit from owner to owner, but there were rules on where and when one could travel, times to rise and sleep, who could marry and live with whom, who could talk to whom and when, when and how to wash bodies and houses, what clothes to wear when, who could cook, who could eat what foods, who went to what sorts of churches when, and so on. Typical rules for slaves had much in common with typical “upstanding behavior” rules widely imposed by parents on their children, and by schools and armies on students and soldiers: eat well, rise early, keep clean, say your prayers, don’t drink, stay nearby, talk respectfully, don’t fraternize with the wrong people, etc.

With so many rules that varied so much, a standard argument against letting slaves visit neighboring plantations was that they’d less accept local rules if they learned of more lenient rules nearby. And while some owners emphasized enforcing rules via scoldings, fines, or reduction of privileges, most often violations were punished with beatings.

Another big cause of pain seems to have been agency failures with overseers, i.e., those who directly managed the slaves on behalf of the slave owners. Owners of just a few slaves oversaw them directly, and many other owners insisted on personally approving any punishments. However, still others gave full discretion to overseers and refused to listen to slave complaints.

Few overseers had a direct financial stake in farm profitability, and many owners understood that such stakes would tempt overseers, who changed jobs often, to overwork slaves in the short run at the expense of long run profitability. Even so, short run harvest gains were usually easier for owners to see than long run harm to slaves, tempting overseers to sacrifice the latter for the former. And even if most overseers were kept well in line, a small fraction who used their discretion to beat and rape could impose high levels of net harm.

US South slave plantations were quite literally small totalitarian governments, and the main harms to such slaves seem to parallel the main libertarian complaints about all governments. A libertarian perspective sees the following pattern: once one group is empowered to run the lives of others, they tend to over-confidently over-manage them, adding too many rules that vary too much, rules enforced with expensive punishments. And such governments tend to give their agents too much discretion, which such agents use too often to indulge personal whims and biases. Think abusive police and an excess prison population today. Such patterns might be explained by an unconscious human habit of dominance via paternalism; while dominant groups tend to justify their rules in terms of helping, they are actually more trying to display their dominance.

Now one might instead argue that the usual “good behavior” rules imposed by parents, schools, militaries, and slave owners are actually helpful on average, turning lazy good-for-nothings into upright citizens. And in practice formal rule systems are so limited that agent discretion is needed to actually get good results. And strong punishments are needed to make it work. Spare the rod, and spoil the child, conscript, or slave. From this perspective, US South slaves must have led decent lives overall, and we should be glad that improving tech is making it easier for modern governments to get involved in more details of our lives.

Looking to the future, if totalitarian management of individual lives is actually efficient, a more competitive future world would see more of it, leading widely to effective if not official slavery. Mostly for our own good. (This fear was common early in the industrial revolution.) But if the libertarians are right, and most dominant groups tend to make too many overly-harsh rules at the expense of efficiency, then a more competitive future world would see less such paternalism, including fewer slave-like lives.


No Short Em Age

The basic premise of my book is that the next big revolution on the scale of the farming and industrial revolutions will come from human level artificial intelligence in the form of brain emulations (ems). Yes, because people have asked, I’ve estimated that this will happen within roughly a century, but that estimate isn’t central. The key is that even if ems take many centuries, they will still come before achieving human level artificial intelligence via the usual methods (UAI – via hand-coded algorithms including statistics), and before other social disruptions of this magnitude.

I’ve argued that this premise is plausible because it is hard to imagine social disruptions as big as AI, and because at past rates of progress UAI should take centuries, while ems look like they’ll be ready sooner. Yes, some people are so impressed by recent UAI demos that they think this time is different, so that we will now see an unprecedented burst of progress all the way to full UAI within a decade or two. But I remember past demos that were similarly impressive relative to then-current abilities.

Some people think the basic premise of my book is too weird, while others see it as not weird enough. This post addresses the most common objection I’ve heard from this second group: that even if ems come first, the usual AI will appear a few hours later, making the age of em too short to be worth much consideration.

Now there is certainly one big way that ems could make full UAI come faster: by speeding up overall economic growth. I’ve suggested the em economy might double every month or faster, and while some doubt this, few who think my book not weird enough are among them.

Since the economy mainly grows today via innovation, our ladder of growth is basically a ladder of overall innovation. We only double the economy when we have on average doubled our abilities across all economic sectors. So if the relative rates of economic growth and innovation in different sectors stay the same, then speeding up economic growth means speeding up the rate of progress toward full UAI. (While some expect a larger economy to innovate faster because it has more resources, the steady economic growth rates we’ve seen suggest there are contrary forces, such as picking the low hanging fruit of research first.)

For example, at past rates of UAI progress it should take two to four centuries to reach human level abilities in the typical UAI subfield, and thus even longer in most subfields. Since the world economy now doubles roughly every fifteen years, that comes to twenty doublings in three centuries. If ems show up halfway from now to full human level usual AI, there’d still be ten economic doublings to go, which would then take ten months if the economy doubled monthly. Which is definitely faster UAI progress.
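This arithmetic is easy to check with a short sketch; the figures here (three centuries to full UAI, fifteen-year doublings today, monthly em doublings, ems arriving halfway) are the post’s illustrative assumptions, not data:

```python
# Back-of-envelope from the post; all figures are illustrative assumptions.
years_per_doubling_now = 15          # world economy doubles ~every 15 years
centuries_to_full_uai = 3            # assumed centuries of UAI progress left
doublings_total = centuries_to_full_uai * 100 / years_per_doubling_now  # 20

# Suppose ems arrive halfway (in doubling terms) to full human level UAI:
doublings_left_at_em = doublings_total / 2                              # 10

months_per_em_doubling = 1           # assumed em-economy doubling time
months_to_uai_after_ems = doublings_left_at_em * months_per_em_doubling
print(doublings_total, doublings_left_at_em, months_to_uai_after_ems)
```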

However, ten doublings of the economy can encompass a whole era worthy of study. I’ve argued that ems would typically run fast enough to fit a subjective career of a century or more within an economic doubling time, so that their early career training can remain relevant over a whole career. So ten doublings is at least ten subjective centuries, which is plenty of time for lots of cultural and social change. A whole age of change, in fact.

Some argue that the existence of ems would speed up innovation in general, because ems are smarter and innovation benefits more from smarts than does typical production. But even if true, this doesn’t change the relative rate of innovation in UAI relative to other areas.

Some argue that ems speed up UAI progress in particular, via being able to inspect brain circuits in detail and experiment with variations. But as it can be very hard to learn how to code just from inspecting spaghetti object code from other coders, I’m skeptical that this effect could speed up progress anything like a factor of two, which would be where two (logarithmic) steps on the UAI ladder of progress are now jumped when single steps are on average jumped elsewhere. And even then there’d still be at least five economic doublings in the em era, giving at least five subjective centuries of cultural change.

And we know of substantial contrary effects. First, UAI progress seems driven in part by computer hardware progress, which looks like it will be slower in the coming decades than it has in past decades, relative to other areas of innovation. More important, a big part of em era growth can be due to raw physical growth in production, via making many more ems. If half of em economic growth is due to this process then the em economy makes two (logarithmic) steps of economic growth for every step on the ladder of innovation progress, turning ten ladder steps into twenty doublings. A long age of em.
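A hedged sketch of that last step: if half of em-era growth comes from raw replication rather than innovation, each rung on the innovation ladder corresponds to two economic doublings (the half-and-half split is an assumption for illustration):

```python
# If half of em-economy growth is raw physical replication (making more ems),
# then each (logarithmic) innovation step yields two economic doublings.
innovation_steps_left = 10        # assumed steps remaining to full UAI
frac_growth_from_innovation = 0.5  # assumed split between innovation and replication
doublings_per_innovation_step = 1 / frac_growth_from_innovation  # = 2.0
em_era_doublings = innovation_steps_left * doublings_per_innovation_step
print(em_era_doublings)  # 20.0
```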

Some argue that the availability of ems will greatly speed the rate of UAI innovation relative to other rates of innovation. They say things like:

When ems are cheap, you could have a million top (e.g., 100 times average) quality UAI research ems each running at a million times human speed. Since until now we’ve only had a thousand average quality UAI researchers at any one time, UAI progress could be a hundred billion times faster, making what would have taken three centuries now take a tenth of a second. The prize of getting to full UAI first would induce this investment.

There are just so many things wrong with this statement.

First, even if human speed ems are cheap, mega-ems cost at least a million times as much. A million mega-ems are as productive as a trillion humans, times whatever factor by which the typical human-speed em is more productive than a typical human. The em economy would have to have grown a whole lot before it is even possible to devote that level of resources to UAI research. So there can be a whole em era before that point.

Second, this same approach seems equally able to speed up progress in any innovation area that isn’t strongly limited by physical process rates. Areas that only moderately depend on physical rates can spend more to compensate, so that their innovation rates slow only modestly. If only a modest fraction of innovation areas were substantially limited by physical rates, that would only speed up UAI progress by a modest factor relative to overall economic growth.

Third, just because some researchers publish many more academic papers than others doesn’t at all mean that young copies of those researchers assigned to other research areas would have published similarly. Ex ante expected researcher quality varies a lot less than ex post observed research publications. Yes, people often vary by larger factors in their ability to do pure math, relative to other abilities, but pure math contributes only a small fraction to overall innovation.

Fourth, it is well known that most innovation doesn’t come from formal research, and that innovations in different areas help each other. Economists have strong general reasons to expect diminishing returns to useful innovation from adding more researchers. Yes, if you double the number of researchers in one area you’ll probably get twice as many research papers in that area, but that is very different from getting twice as much useful progress.

As I mention in my book, in some cases we’ve actually measured how research progress varies with the number of researchers, and it looks more like a square root dependence. In addition, if innovation rates were linear in the number of formal researchers, then given the tiny fraction of such researchers today we’d have to be vastly underinvesting in them, and so nations who invest more in formal research should expect to see much higher rates of economic growth. Yet we don’t actually see much of a relation between economic growth and spending on formal research. (Yes studies vary, so there could be a modest, but not a huge, effect.)

So, in sum, we should expect that useful UAI innovation doesn’t mostly come from formal research, and so doubling the number of UAI researchers, or doubling their speed, doesn’t remotely double useful innovation. We aren’t vastly underinvesting in formal research, and so future parties can’t expect to achieve huge gains by making a huge new investment there. We can expect to see modest gain in UAI innovation, relative to today and to other innovation areas, from an ability to inspect and experiment with ems, and from not being very limited by physical process rates. But these give less than a factor of two, and we should see a factor of two in the other direction from slowing hardware gains and from innovation mattering less for economic growth.

Thus we should expect many doublings of the em era after ems and before human level UAI, resulting in many centuries of subjective cultural change for typical ems. Giving an em era that is long enough to be worth considering. If you want to study whatever comes after the em era, understanding the em era should help.


Chace on Age of Em

Soon after I reviewed Calum Chace’s book, he reviewed mine:

I can’t remember ever reading a book before which I liked so much, while disagreeing with so much in it. This partly because the author is such an amiable fellow. .. The writing style is direct, informal and engaging ..  And the book addresses an important subject: the future.

As we disagree on much, I’ll just jump in and start replying.

Robin’s insistence that AI is making only modest advances, and will generate nothing much of interest before uploading arrives, seems dogmatic.

Given two events, my estimating that one is more likely to happen first seems to me no more dogmatic than Chace estimating the opposite.

Because of this claim, he is highly critical of the view that technological unemployment will be widespread in the next few decades. Fair enough, he might be right, but obviously I doubt it. He is also rather dismissive of major changes in society being caused by virtual reality, augmented reality, the internet of things, 3D printing, self-driving cars, and all the other astonishing technologies being developed and introduced as we speak.

I don’t dismiss such changes; they are welcome, and some will happen and matter. I just don’t see them as sufficient reason to think “this time is different” regarding massive job loss; the past saw changes of similar magnitudes.

He seems to think that when the first ems are created, they will very quickly be perfect replications of the target human minds. It seems to me more likely that we will create a series of approximations of the target person.

The em era starts when ems are cheaper than humans for most jobs. Yes of course imperfect emulations come first, but they are far less useful on most jobs. Consider that humans under the influence of recreational drugs are really quite good emulations of normal humans, yet they are much less valuable on most jobs. So emulations need to be even better than that to be very useful.

The humans in this world are all happy to be retired, and have the ems create everything they need. I think the scenario of radical abundance is definitely achievable, but I don’t think it’s a slam dunk, and I would imagine much more interaction – good and bad – between ems and humans than Robin seems to expect.

I don’t understand what kinds of interaction Chace thinks I expect less than he does here.

A couple of smaller but important comments. Robin thinks ems will be intellectually superior to most humans, not least because they will be modelled on the best of us. He therefore thinks they will be religious. Apart from the US, always an exceptional country, the direction of travel in that regard is firmly in the other direction.

In the book I gave citations on religious behavior correlating with work productivity. If someone has contrary citations, I’m all ears.

And space travel. Robin argues that we will keep putting off trying to colonise the stars because whenever you send a ship out there, it would always be overtaken by a later, cheaper one which benefits from better technology. This ignores one of the main reasons for doing it: to improve our chances of survival by making sure all our eggs aren’t in the one basket that is this pale blue dot.

I didn’t say no one would go into space; I pointed out that high interest rates discourage all long term projects, all else equal, including space projects.


Regulating Self-Driving Cars

Warning: I’m sure there’s a literature on this, which I haven’t read. This post is instead based on a conversation with some folks who have read more of it. So I’m “shooting from the hip” here, as they say.

Like planes, boats, submarines, and other vehicles, self-driving cars can be used in several modes. The automation can be turned off. It can be turned on and advisory only. It can be driving, but with the human watching carefully and ready to take over at any time. Or it can be driving with the human not watching very carefully, so that the human would take a substantial delay before being able to take over. Or the human might not be capable of taking over at all; perhaps a remote driver would stand ready to take over via teleoperation.

While we might mostly trust vehicle owners or passengers to decide when to use which modes, existing practice suggests we won’t entirely trust them. Today, after a traffic accident, we let some parties sue others for damages. This can improve driver incentives to drive well. But we don’t trust this to fully correct incentives. So in addition, we regulate traffic. We don’t just suggest that you stop at a red light, keep in one lane, or stay below a speed limit. We require these things, and penalize detected violations. Similarly, we’ll probably want to regulate the choice of self-driving mode.

Consider a standard three-color traffic light. When the light is red, you are not allowed to go. When it is green you are allowed, but not required, to go; sometimes it is not safe to go even when a light is green. When the light is yellow, you are supposed to pay extra attention to a red light coming soon. We could similarly use a three color system as the basis of a three-mode system of regulating self-driving cars.

Imagine that inside each car is a very visible light, which regulators can set to be green, yellow or red. When your light is red you must drive your car yourself, even if you get advice from automation. When the light is yellow you can let the automation take over if you want, but you must watch carefully, ready to take over. When the light is green, you can usually ignore driving, such as by reading or sleeping, though you may watch or drive if you want.

(We might want a standard way to alert drivers when their color changed away from green. Of course we could imagine adding more colors, to distinguish more levels of attention and control. But a three level system seems a reasonable place to start.)

Under this system, the key regulatory choice is the choice of color. This choice could in principle be set differently for each car at each moment. But early on the color would probably be set the same for all cars and drivers of a type, in a particular geographic area at a particular time. The color might come in part from a broadcast signal, with the light perhaps defaulting to red if it can’t get a signal.

One can imagine a very bureaucratic system to set the color, with regulators sitting in a big room filled with monitors, like NASA mission control. That would probably be too conservative and fail to take local circumstances enough into account. Or one might imagine empowering fancy statistical or machine learning algorithms to make the choice. But most any algorithm would make a lot of mistakes, and the choice of algorithm might be politicized, leading to a poor choice.

Let me suggest using prediction markets for this choice. Regulators would have to choose a large set of situation buckets, such that the color must be the same for all situations in the same bucket. Then for each bucket we’d have three markets, estimating the accident rate conditional on a particular color. Assuming that drivers gain some direct benefit from paying less attention to driving, we’d set the color to green unless the expected difference between the green and yellow accident rate became high enough. Similarly for the choice between red and yellow.

Work on combinatorial prediction markets suggests that it is feasible to have billions or more such buckets at a time. We might use audit lotteries and only actually estimate accident rates for some small fraction of these buckets, using bets conditional on such auditing. But even with a much smaller number of buckets, our experience with prediction markets suggests that such a system would work better than either a bureaucratic or statistical system with a similar number of buckets.
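As a minimal sketch of the proposed decision rule (names and thresholds here are hypothetical; the post doesn’t specify them): for each bucket, three markets estimate the accident rate conditional on each color, and the regulator picks the most permissive color whose extra expected accident rate stays below some bound reflecting the value of freed-up driver attention:

```python
# Hypothetical sketch of the color-choice rule described above.
# 'rates' holds market-estimated accident rates conditional on each color;
# 'max_extra_risk' is an assumed bound on acceptable extra accident risk.
def choose_color(rates, max_extra_risk):
    if rates['green'] - rates['yellow'] <= max_extra_risk:
        return 'green'   # letting drivers ignore the road costs little here
    if rates['yellow'] - rates['red'] <= max_extra_risk:
        return 'yellow'  # hands-off driving too risky, but supervision is fine
    return 'red'         # only full human driving is acceptable

print(choose_color({'green': 0.004, 'yellow': 0.003, 'red': 0.002}, 0.002))
```

One bucket, three conditional markets, one threshold comparison; with billions of buckets the same rule just runs per bucket.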

Added 1p: My assumptions were influenced by the book Our Robots, Ourselves on the history of automation.


Economic Singularity Review

The Economic Singularity: Artificial intelligence and the death of capitalism .. This new book from best-selling AI writer Calum Chace argues that within a few decades, most humans will not be able to work for money.

A strong claim! This book mentions me by name 15 times, especially on my review of Martin Ford’s Rise of the Robots, wherein I complain that Ford’s main evidence for saying “this time is different” is all the impressive demos he’s seen lately, even though this was the main reason given in each previous automation boom for saying “this time is different.” This seems to be Chace’s main evidence as well:

Faster computers, the availability of large data sets, and the persistence of pioneering researchers have finally rendered [deep learning] effective this decade, leading to “all the impressive computing demos” referred to by Robin Hanson in chapter 3.3, along with some early applications. But the major applications are still waiting in the wings, poised to take the stage. ..

It’s time to answer the question: is it really different this time? Will machine intelligence automate most human jobs within the next few decades, and leave a large minority of people – perhaps a majority – unable to gain paid employment? It seems to me that you have to accept that this proposition is at least possible if you admit the following three premises: 1. It is possible to automate the cognitive and manual tasks that we carry out to do our jobs. 2. Machine intelligence is approaching or overtaking our ability to ingest, process and pass on data presented in visual form and in natural language. 3. Machine intelligence is improving at an exponential rate. This rate may or may not slow a little in the coming years, but it will continue to be very fast. No doubt it is still possible to reject one or more of these premises, but for me, the evidence assembled in this chapter makes that hard.

Well of course it is possible for this time to be different. But, um, why can’t these three statements have been true for centuries? It will eventually be possible to automate tasks, and we have been slowly but exponentially “approaching” that future point for centuries. And so we may still have centuries to go. As I recently explained, exponential tech growth is consistent with a relatively constant rate at which jobs are displaced by automation.

Chace makes a specific claim that seems to me quite wrong.

Geoff Hinton – the man whose team won the landmark 2012 ImageNet competition – went further. In May 2015 he said that he expects machines to demonstrate common sense within a decade. .. Facebook has declared its ambition to make Hinton’s prediction come true. To this end, it established a basic research unit in 2013 called Facebook Artificial Intelligence Research (FAIR) with 50 employees, separate from the 100 people in its Applied Machine Learning team. So within a decade, machines are likely to be better than humans at recognising faces and other images, better at understanding and responding to human speech, and may even be possessed of common sense. And they will be getting faster and cheaper all the time. It is hard to believe that this will not have a profound impact on the job market.

I’ll give 50-1 odds against full human level common sense AI within a decade! Chace, I offer my $5,000 against your $100. Also happy to bet on “profound” job market impact, as I mentioned in my review of Ford. Chace, to his credit, sees value in such bets:

The economist Robin Hanson thinks that machines will eventually render most humans unemployed, but that it will not happen for many decades, probably centuries. Despite this scepticism, he proposes an interesting way to watch out for the eventuality: prediction markets. People make their best estimates when they have some skin in the forecasting game. Offering people the opportunity to bet real money on when they see their own jobs or other peoples’ jobs being automated may be an effective way to improve our forecasting.

Finally, Chace repeats Ford’s error in claiming economic collapse if median wages fall:

But as more and more people become unemployed, the consequent fall in demand will overtake the price reductions enabled by greater efficiency. Economic contraction is pretty much inevitable, and it will get so serious that something will have to be done. .. A modern developed society is not sustainable if a majority of its citizens are on the bread line.

Really, an economy can do fine if average demand is high and growing, even if median demand falls. It might be ethically lamentable, and the political system may have problems, but markets can do just fine.


My Caplan Turing Test

At lunch today Bryan Caplan and I dug a bit into our disagreement, and now I’ll try to summarize his point of view. He can of course correct me.

Bryan sees sympathy feelings as huge influences on social outcomes. Not just feelings between people who know each other well, but also distant feelings between people who have never met. For example, if not for feelings of sympathy:

  1. Law and courts would often favor different disputants.
  2. Free workers would more often face harsh evaluations, punishments, and firing.
  3. Firm owners and managers would know much better which workers were doing good jobs.
  4. The US would invade and enslave Canada tomorrow.
  5. At the end of most wars, the victors would enslave the losers.
  6. Modern slaves would earn their owners much more than they would have as free workers.
  7. In the past, domestic, artisan, and city slaves, who were treated better than field slaves, would have been treated much more harshly.
  8. The slave population would have fallen less via gifts or purchase of freedom.
  9. Thus most of the world population today would be slaves.

These views are, to me, surprisingly different from the impression I get from reading related economics literatures. Bryan says I may be reading the wrong ones, but he hasn’t yet pointed me to the correct ones. As I read them, these usual economics literatures give different impressions:

  • Law and economics literature suggests efficiency usually decides who wins, with sympathy distortions having a real but minor influence.
  • Organization theory literature suggests far more difficulties in motivating workers and measuring their performance.
  • Slavery literature suggests slaves doing complex jobs were treated less harshly for incentive reasons, and would not have earned much more if treated more harshly. Thus modern slaves would also not earn much more as slaves.

Of course even if Bryan were right about all these claims, he needn’t be right in his confident opinion that the vast majority of biological humans will have about as much sympathy for ems as they do for mammals, and thus treat ems as harshly as we treat most mammals.

This sympathy-driven view doesn’t by itself predict Caplan’s strong (and not much explained) view that ems would also be very robot-like. But perhaps we might add to it a passion for domination – people driven by feelings to treat nicely creatures they respect might also be driven by feelings to dominate creatures they do not respect. Such a passion for dominance might induce biological humans to force ems into ultra docility, even if that came at a productivity cost.

Added 28July2016: Caplan grades my summary of his position. I’m mostly in the ballpark, but he elaborates a bit on why he thinks em slaves would be docile:

Docile slaves are more profitable than slaves with attitude, because owners don’t have to use resources to torture and scare them into compliance. That’s why owners sent rebellious slaves to “breakers”: to transform rebellious slaves into docile slaves. Sci-fi is full of stories about humans genetically engineered to be model slaves. Whole brain emulation is a quicker route to the same destination. What’s the puzzle?

For docility to be such a huge priority, relative to other worker features, em rebellion must happen often and impose big frequent costs. Docility doesn’t seem to describe our most productive workers today well, nor does it seem well suited when you want workers to be creative, think carefully, take the initiative, or persuade and inspire others. Either way, whether via frequent costly rebellions or via extreme docility, slaves suffer big disadvantages relative to free workers, which argues against most ems being slaves.


Oarsman Pay Parable

Imagine an ancient oarsman, rowing in a galley boat. Rowing takes effort, and risks personal injury, so all else equal an oarsman would rather not row, or row only weakly. How can his boss induce effort?

One simple approach is to offer a very direct and immediate incentive. Use slaves as rowers, and have a boss watch them, whipping any who aren’t rowing as hard as sustainably possible. This actually didn’t happen much in the ancient world; galley slaves weren’t common until the 1500s. But the idea is simple. And of course the same system could also work with cash; usually make positive payments for work, but sometimes fine those you discover aren’t working hard enough. Of course the boss can’t watch everyone all the time. But with a big enough penalty when caught, it might work.

Now imagine that the boss can’t watch each individual oarsman, but can only see the overall speed of the ship. Now the entire crew must be punished together, all or none of them. The boss might try to improve the situation by empowering oarsmen to punish each other for not rowing hard enough, and that might help, but rowers would also use that power for other ends, creating costs.

An even worse case is where the boss can only see how long it takes for the boat to reach its destination. Here the boss might reward the crew for a short trip, and punish them for a long one, but a great many other random factors will influence the length of the trip. Why bother to work hard, if it makes little difference to your chance of reward or punishment?

There is a general principle here. The more noise there is in the measurement of relevant outcomes visible to the ultimate boss, the harder it is to use incentives tied to those outcomes to motivate rowers. This is true regardless of the type of incentives used. Yes, the lower the worst outcome, and the higher the best outcome, that the boss can impose, the stronger incentives can be. But even the strongest possible incentives can fail when noise is high.
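This noise principle can be illustrated with a minimal simulation (a hypothetical sketch; the reward threshold, effort levels, and Gaussian noise model are illustrative assumptions, not from the post). It estimates how much rowing hard, versus shirking, raises an oarsman’s chance of being rewarded, when the boss rewards only measured output above a threshold:

```python
import random

def incentive_signal(noise_sd, trials=20_000, seed=0):
    """Estimate how much an oarsman's chance of reward rises when he
    rows hard (effort 1.0) versus shirks (effort 0.0), given Gaussian
    noise in the outcome the boss can measure. The boss pays a reward
    only when measured output clears a fixed threshold."""
    rng = random.Random(seed)
    threshold = 0.5

    def reward_rate(effort):
        hits = 0
        for _ in range(trials):
            measured = effort + rng.gauss(0, noise_sd)
            if measured > threshold:
                hits += 1
        return hits / trials

    # The incentive signal: how much effort changes the odds of reward.
    return reward_rate(1.0) - reward_rate(0.0)

for sd in (0.1, 1.0, 5.0):
    print(f"noise sd {sd}: effort raises reward odds by {incentive_signal(sd):.2f}")
```

With little noise, effort almost fully determines reward; as noise grows, the same effort moves the odds of reward less and less, so any incentive tied to the measured outcome weakens, whatever its size.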

Yes, one can create layers of bosses, with the lowest bosses able to see specifics best. But it can be hard to give lower bosses good incentives, if higher bosses can’t see well.

Another problem arises if the boss doesn’t know just how hard each oarsman is capable of rowing. In this case most oarsmen get some slack, so that they aren’t punished for failing to do more than they can. This is just one example of an “information rent”. In general, such rents come from any work-relevant info that the worker has but the boss can’t see: if rowers need to synchronize their actions with each other or with waves, wind, or time of day; if a ship captain needs to choose the ship’s route based on info about weather and pirates; if a captain needs to treat different cargo differently in different conditions; or if a captain needs to make judgements about whether to wait longer in port for more cargo.

In general, when you want a worker to see some local condition, and then take an action that depends on that condition, you must pay some extra rent. So the more relevant info that workers get, the more choices they make, and the more that rides on those choices, the more workers gain in info rents.

A related issue is the scope for sabotage. Angry resentful workers can seek hidden ways to hurt their bosses and ventures. So the more hard-to-detect ways workers have to hurt things, the more bosses want to treat them well enough to avoid anger and resentment. Pained, sullen, or depressed workers can also hurt the mood of co-workers, suppliers, customers, and investors whom they contact. And the threat of pain can stress workers, making it harder for them to think clearly and well. These issues tend to argue against often using beatings and pain for motivation, even if such things allow stronger incentives by expanding the range of possible outcomes.

Overall, these issues are bigger for more “complex” work, i.e., for more cognitive work, work that adapts more to diverse and new local conditions, and work in larger organizations. In the modern world, jobs have been getting more complex in these ways, and the organization and work literature I’ve read suggests that finding good work incentives is a central problem in modern organizations, and that more complex work is a big reason why modern workplaces substitute broad incentives and good treatment for the detailed and harsh rules and monitoring more common in past eras.

The literature I’ve read on the economics of slavery also uses job complexity to explain the severity of treatment of slaves. Slaves in artisan jobs, in cities, and in households were treated better than field slaves, arguably because of job complexity. They were beaten less, and paid more, and might eventually buy their own freedom.

Bryan Caplan has argued that ems would be treated harshly as slaves.


Caplan Debate Status

In this post I summarize my recent disagreement with Bryan Caplan. In the next post, I’ll dive into details of what I see as the key issue.

I recently said:

If you imagine religions, governments, and criminals not getting too far out of control, and a basically capitalist world, then your main future fears are probably going to be about for-profit firms, especially regarding how they treat workers. You’ll fear firms enslaving workers, or drugging them into submission, or just tricking them with ideology.

Because of this, I’m not so surprised by the deep terror many non-economists hold of future competition. For example, Scott Alexander (see also his review):

I agree with Robin Hanson. This is the dream time .. where we are unusually safe from multipolar traps, and as such weird things like art and science and philosophy and love can flourish. As technological advance increases, .. new opportunities to throw values under the bus for increased competitiveness will arise. .. Capitalism and democracy, previously our protectors, will figure out ways to route around their inconvenient dependence on human values. And our coordination power will not be nearly up to the task, assuming something much more powerful than all of us combined doesn’t show up and crush our combined efforts with a wave of its paw.

But I was honestly surprised to see my libertarian economist colleague Bryan Caplan also holding a similarly dark view of competition. As you may recall, Caplan had many complaints about my language and emphasis in my book, but in terms of the key evaluation criteria that I care about, namely how well I applied standard academic consensus to my scenario assumptions, he had three main points.

First, he called my estimate of an em economic growth doubling time of one month my “single craziest claim.” He seems to agree that standard economic growth models can predict far faster growth when substitutes for human labor can be made in factories, and that we have twice before seen economic growth rates jump by more than a factor of fifty, in less than a previous doubling time. Even so, he can’t see economic growth rates even doubling, because of “bottlenecks”:

Politically, something as simple as zoning could do the trick. .. the most favorable political environments on earth still have plenty of regulatory hurdles .. we should expect bottlenecks for key natural resources, location, and so on. .. Personally, I’d be amazed if an em economy doubled the global economy’s annual growth rate.

His other two points are that competition would lead to ems being very docile slaves. I responded that slavery has been rare in history, and that docility and slavery aren’t especially productive today. But he called the example of Soviet nuclear scientists “powerful” even though “Soviet and Nazi slaves’ productivity was normally low.” He rejected the relevance of our large literatures on productivity correlates and how to motivate workers, as little of that explicitly includes slaves. He concluded:

If, as I’ve argued, we would copy the most robot-like people and treat them as slaves, at least 90% of Robin’s details are wrong.

As I didn’t think the docility of ems mattered that much for most of my book, I challenged him to audit five random pages. He reported “Robin’s only 80% wrong”, though I count only 63% from his particulars, and half of those come from his seeing ems as very literally “robot-like”. For example, he says ems are not disturbed by “life events”, only by disappointing their masters. They only group, identify, and organize as commanded, not as they prefer or choose. They have no personality “in a human sense.” They never disagree with each other, and never need to make excuses for anything.

Caplan offered no citations with specific support for these claims, instead pointing me to the literature on the economics of slavery. So I took the time to read up on that literature and posted a 1600-word summary, concluding:

I still can’t find a rationale for Bryan Caplan’s claim that all ems would be fully slaves. .. even less .. that they would be so docile and “robot-like” as to not even have human-like personalities.

Yesterday, he briefly “clarified” his reasoning. He says ems would start out as slaves since few humans see them as having moral value:

1. Most human beings wouldn’t see ems as “human,” so neither would their legal systems. .. 2. At the dawn of the Age of Em, humans will initially control (a) which brains they copy, and (b) the circumstances into which these copies emerge. In the absence of moral or legal barriers, pure self-interest will guide creators’ choices – and slavery will be an available option.

Now I’ve repeatedly pointed out that the first scans would be destructive, so either the first scanned humans see ems as “human” and expect to not be treated badly, or they are killed against their will. But I want to focus instead on the core issue: like Scott Alexander and many others, Caplan sees a robust tendency of future competition to devolve into hell, held at bay only by contingent circumstances such as strong moral feelings. Today the very limited supply of substitutes for human workers keeps wages high, but if that supply were to greatly increase then Caplan expects that, without strong moral resistance, capitalist competition eventually turns everyone into docile inhuman slaves, because that arrangement robustly wins productivity competitions.

In my next post I’ll address that productivity issue.


World Basic Income

Joseph said .. Let Pharaoh .. appoint officers over the land, and take up the fifth part of the land of Egypt in the seven plenteous years. .. And that food shall be for store to the land against the seven years of famine, which shall be in the land of Egypt; that the land perish not through the famine. And the thing was good in the eyes of Pharaoh. (Genesis 41)

[Medieval Europe] public authorities were doubly interested in the problem of food supplies; first, for humanitarian reasons and for good administration; second, for reasons of political stability because hunger was the most frequent cause of popular revolts and insurrections. In 1549 the Venetian officer Bernardo Navagero wrote to the Venetian senate: “I do not esteem that there is anything more important to the government of cities than this, namely the stocking of grains, because fortresses cannot be held if there are not victuals and because most revolts and seditions originate from hunger. (p42, Cipolla, Before the Industrial Revolution)

63% of Americans don’t have enough saved to cover even a $500 financial setback. (more)

Even in traditional societies with small governments, protecting citizens from starvation was considered a proper role of the state, both to improve welfare and to prevent revolt. Today it could be more efficient if people used modern insurance institutions to protect themselves. But I can see many failing to do that, and so can see governments trying to insure their citizens against big disasters.

Of course rich nations today face little risk of famine. But as I discuss in my book, eventually when human level artificial intelligence (HLAI) can do almost all tasks cheaper, biological humans will lose pretty much all their jobs, and be forced to retire. While collectively humans will start out owning almost all the robot economy, and thus get rich fast, many individuals may own so little as to be at risk of starving, if not for individual or collective charity.

Yes, this sort of transition is a long way off; “this time isn’t different” yet. There may be centuries still to go. And if we first achieve HLAI via the relatively steady accumulation of better software, as we have been doing for seventy years, we may get plenty of warning about such a transition. However, if we instead first achieve HLAI via ems, as elaborated in my book, we may get much less warning; only five years might elapse between the first visible effects and the loss of all jobs. Given how slowly our political systems typically change state redistribution and insurance arrangements, it might be wiser to just set up a system far in advance that could deal with such problems if and when they appear. (A system also flexible enough to last over this long time scale.)

The ideal solution is global insurance. Buy insurance for citizens that pays off only when most biological humans lose their jobs, and have this insurance pay enough so these people don’t starve. Pay premiums well in advance, and use a stable insurance supplier with sufficient reinsurance. Don’t trust local assets to be sufficient to support local self-insurance; the economic gains from an HLAI economy may be very concentrated in a few dense cities of unknown locations.

Alas, political systems are even worse at preparing for problems that seem unlikely anytime soon. Which raises the question: should those who want to push for state HLAI insurance ally with folks focused on other issues? And that brings us to “universal basic income” (UBI), a topic in the news lately, and about which many have asked me in relation to my book.

Yes, there are many difficult issues with UBI, such as how strongly the public would favor it relative to traditional poverty programs, whether it would replace or add onto those other programs, and if replacing how much that could cut administrative costs and reduce poverty targeting. But in this post, I want to focus on how UBI might help to insure against job loss from relatively sudden unexpected HLAI.

Imagine a small “demonstration level” UBI, just big enough for one side to say “okay, we started a UBI; now it is your turn to lower other poverty programs, before we raise the UBI more.” Even such a small UBI might be enough to deal with HLAI, if its basic income level were tied to the average income level. After all, an HLAI economy could grow very fast, allowing very fast growth in the incomes that biological humans gain from owning most of the capital in this new economy. Soon only a small fraction of that income could cover a low but starvation-averting UBI.

For example, a UBI set to x% of average income can be funded via a roughly x% tax on all income over this UBI level. Since average US income per person is now $50K, a 10% version gives a UBI of $5K. While this might not let one live in an expensive city, a year ago I visited a 90-adult rural Virginia commune where this was actually their average income. If freed from some regulations, we might see more innovations like this in how to live on a UBI.
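This funding arithmetic can be made concrete (a hypothetical sketch; the two tax-base assumptions below are mine, chosen for illustration, and real rates would depend on the income distribution and program details):

```python
def ubi_arithmetic(avg_income: float, x: float):
    """Per-capita funding arithmetic for a UBI set to fraction x of
    average income, under two stylized tax-base assumptions. Assumes
    (hypothetically) everyone's income is at least the UBI level."""
    ubi = x * avg_income  # e.g. 10% of $50K is $5K
    # Base 1: tax all income above the UBI level, counting the UBI
    # itself as income; per-capita base = (avg + ubi) - ubi = avg.
    rate_incl = ubi / avg_income
    # Base 2: tax only pre-UBI income above the UBI level;
    # per-capita base = avg - ubi, so the rate is a bit higher.
    rate_excl = ubi / (avg_income - ubi)
    return ubi, rate_incl, rate_excl

ubi, r1, r2 = ubi_arithmetic(50_000, 0.10)
print(f"UBI = ${ubi:,.0f}; rate = {r1:.1%} or {r2:.1%} depending on base")
# → UBI = $5,000; rate = 10.0% or 11.1% depending on base
```

Either way the required rate stays near x% for small x, which is why a modest demonstration-level UBI looks affordable even before any fast HLAI-driven income growth.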

However, I do see one big problem. Most UBI proposals are funded out of local general tax revenue, while the income of a HLAI economy might be quite unevenly distributed around the globe. The smaller the political unit considering a UBI, the worse this problem gets. Better insurance would come from a UBI that is funded out of a diversified global investment portfolio. But that isn’t usually how governments fund things. What to do?

A solution that occurs to me is to push for a World Basic Income (WBI). That is, try to create and grow a coalition of nations that implement a common basic income level, supported by a shared set of assets and contributions. I’m not sure how to set up the details, but citizens in any of these nations should get the same untaxed basic income, even if they face differing taxes on incomes above this level. And this alliance of nations would commit somehow to sharing some pool of assets and revenue to pay for this common basic income, so that everyone could expect to continue to receive their WBI even after an uneven disruptive HLAI revolution.

Yes, richer member nations of this alliance could achieve less local poverty reduction, as the shared WBI level couldn’t be above what the poor member nations could afford. But a common basic income should make it easier to let citizens move within this set of nations. You’d have less reason to worry about poor folks moving to your nation to take advantage of your poverty programs. And the more that poverty reduction were implemented via WBI, the bigger this advantage would be.

Yes, this seems a tall order, probably too tall. Probably nations won’t prepare, and will then respond to an HLAI transition slowly, and only with whatever resources they have at their disposal, which in some places will be too little. Which is why I recommend that individuals and smaller groups try to arrange their own assets, insurance, and sharing. Yes, it won’t be needed for a while, but if you wait until the signs of something big soon are clear, it might then be too late.
