Talks Not About Info

You can often learn about your own world by first understanding some other world, and then asking if your world is more like that other world than you had realized. For example, I just attended WorldCon, the top annual science fiction convention, and patterns that I saw there more clearly also seem echoed in wider worlds.

At WorldCon, most of the speakers are science fiction authors, and the modal emotional tone of the audience is one of reverence. Attendees love science fiction, revere its authors, and seek excuses to rub elbows with them. But instead of just having social mixers, authors give speeches and sit on panels where they opine on many topics. When they opine on how to write science fiction, they are of course experts, but in fact they mostly prefer to opine on other topics. By presenting themselves as experts on a great many future, technical, cultural, and social topics, they help preserve the illusion that readers aren’t just reading science fiction for fun; they are also part of important larger conversations.

When science fiction books overlap with topics in space, physics, medicine, biology, or computer science, their authors often read up on those topics, and so can be substantially more informed than typical audience members. And on such topics actual experts will often be included on the agenda. Audiences may even be asked if any of them happen to have expertise on such a topic.

But the more that a topic leans social, and has moral or political associations, the less inclined authors are to read expert literatures on that topic, and the more they tend to just wing it and think for themselves, often on their feet. They less often add experts to the panel or seek experts in the audience. And relatively neutral analysis tends to be displaced by position taking – they find excuses to signal their social and political affiliations.

The general pattern here is: an audience has big reasons to affiliate with speakers, but prefers to pretend those speakers are experts on something, and they are just listening to learn about that thing. This is especially true on social topics. The illusion is exposed by facts like speakers not being chosen for knowing the most about a subject discussed, and those speakers not doing much homework. But enough audience members are ignorant of these facts to provide a sufficient fig leaf of cover to the others.

This same general pattern repeats all through the world of conferences and speeches. We tend to listen to talks and panels full of not just authors, but also generals, judges, politicians, CEOs, rich folks, athletes, and actors. Even when those are not the best informed, or even the most entertaining, speakers on a topic. And academic outlets tend to publish articles and books more for being impressive than for being informative. However, enough people are ignorant of these facts to let audiences pretend that they mainly listen to learn and get information, rather than to affiliate with the statusful.

Added 22Aug: We feel more strongly connected to people when we together visibly affirm our shared norms/values/morals. Which explains why speakers look for excuses to take positions.


Alas, Unequal Love

We each feel a deep strong need to love others, and to be loved by others. (Self-love doesn’t satisfy these needs.) You might think we could pair up and all be very satisfied. But this doesn’t happen for two main reasons:

  1. We each prefer to love the popular, whom more others also love. So a few get lots of love, while the rest get less.
  2. We can more easily love imaginary fictional people than real people. Especially ones that more others love.

So even if you are my best source for getting love, the love I get from you may be far less than the love you are giving out, or than I’m giving out. And a few exceptional people (many of them imaginary) get far more love than most people need or can enjoy.

This seems an essential tragedy of the human condition. You might claim that love isn’t a limited resource, that the more people each of us love, the more love we each have to give out. So there is no conflict between loving popular and imaginary people and loving the rest of us. But while this might be true at some low scales of how many people we love, at the actual scales of love this just doesn’t seem right to me. Love instead seems scarce at the margin.

Can we do anything about this problem? Well one obvious fact is that we don’t love people we’ve never heard of. And we can control many things about who we hear of. So we could in principle arrange who we hear about, in order to get love spread out more evenly. But we don’t do this, nor do we seem much inclined to do anything like this. We instead all devote a great deal of time and effort to hearing about as many popular and fictional people as possible. And to trying to be as popular as we can.

I don’t have great ideas for how to solve this. But I am convinced it is one of our essential problems, and it is far from obvious that we’ve given it all the careful thought we might. Please, someone thoughtful and clever, figure out how we might all be much loved.


Change Favors The Robust, Not The Radical

There are futurists who like to think about the non-immediate future, and there are radicals who advocate for unusual policies, such as on work, diet, romance, governance, etc. And the intersection between these groups is larger than you might have expected by chance; futurists tend to be radicals and radicals tend to be futurists. This applies to me, in that I’ve both proposed a radical futarchy, and have a book on future ems.

The usual policies that we adopt in our usual world have a usual set of arguments in their favor, arguments usually tied to the details of our usual world. So those who want to argue instead for radical policies must both argue against the usual pro-arguments, and then also offer a new set of arguments in favor of their radical alternatives, arguments also tied to the details of our world. This can seem like a heavy burden.

So many who favor radical policies prefer to switch contexts and reject the relevance of the usual details of our world. By invoking a future where many things change, they feel they can just dismiss the usual arguments for the usual policies based on the usual details of our world. And at this point they usually rest, feeling their work is done. They like being in a situation where, even if they can’t argue very strongly for their radical policies, others also can’t argue very strongly against such policies. Intellectual stalemate can seem a big step up from the usual radical’s situation of being at a big argumentative disadvantage.

But while this may help to win (or at least not lose) argument games, it should not actually make us favor radical policies more. It should instead shift our attention to robust arguments, ones that can apply over a wide range of possibilities. We need to hear positive arguments for why we should expect radical policies to work well robustly across a wide range of possible futures, relative to our status quo policies.

In my recent video discussion with James Hughes, he criticized me for assuming that many familiar elements of our world, such as property, markets, inequality, sexuality, and individual identities, continue into an em age. He instead foresaw an enormous hard-to-delimit range of possibilities. But then he seemed to think this favored his radical solution of a high-regulation high-redistribution strong global socialist government which greatly limits and keeps firm control over autonomous artificial intelligences. Yet he didn’t offer arguments for why this is a robust solution that we should expect to work well in a very wide variety of situations.

It seems to me that if we are going to focus on the axis of decentralized markets vs. more centralized and structured organizations, it is markets that have proven themselves to be the more robust mechanism, working reasonably well in a very wide range of situations. It is structured organizations that are more fragile, and fail more quickly as situations change. Firms often go out of business when their organizations fail to keep up with changing environments; it is far rarer for a decentralized market to disappear because it failed to serve its participants.


No Third AI Way

A few days ago in the Post:

Bryan Johnson .. wants to .. find a way to supercharge the human brain so that we can keep up with the machines. .. His science-fiction-meets-science start-up, Kernel, is building a tiny chip that can be implanted in the brain. .. Top neuroscientists who are building the chip .. hope that in the longer term, it will be able to boost intelligence, memory and other cognitive tasks. .. In an age of AI, he insists that boosting the capacity of our brains is itself an urgent public concern.

In a MeaningOfLife.tv video discussion between James Hughes and me just posted today, Hughes said:

One of the reasons why I’m skeptical about the [em] scenario that you’ve outlined, is that I see a scenario where brains extending themselves though AI and computing tools basically slaved to the core personal identity of meat brains is a more likely scenario than one where we happily acknowledge the rights and autonomy of virtual persons. .. We need to have the kind of AI in our brain which is not just humans 1.0 that get shuffled off to the farm while the actual virtual workers do all the work, as you have imagined.

Many hope for a “third way” alternative to both ems and more standard AI software taking all the jobs. They hope that instead “we” can keep our jobs via new chips “in” or closely integrated with our brain. This seems to me mostly a false hope.

Yes of course if we have a strong enough global political coordination we could stake out a set of officially human jobs and forbid machines from doing them, no matter how much better machines might be at them. But if we don’t have such strong coordination, then the key question is whether there is an important set of jobs or tasks where ordinary human brains are more productive than artificial hardware. Having that hardware be located in server racks in distant data centers, versus in chips implanted in human brains, seems mostly irrelevant to this.

If artificial hardware can be similarly effective at such tasks, then it can have enormous economic advantages relative to human brains. Even today, the quantity of artificial hardware can be increased very rapidly in factories. And eventually, artificial hardware can be run at much faster speeds, using much less energy. Humans, in contrast, grow very slowly, have limited brain speeds, and are fragile and expensive. It is very hard to see humans outcompeting artificial hardware at such tasks unless the artificial hardware is just very bad at such tasks. That is in fact the case today, but it would not at all be the case with ems, nor with other AI with similar general mental abilities.


Power Corrupts, Slavery Edition

I’ve just finished reading a 1980 book Advice Among Masters: The Ideal in Slave Management in the Old South, which mostly quotes US slave owners from the mid 1800s writing on how to manage slaves. I really like reading ordinary people describe their to-me-strange worlds in their own words, and hope to do more of it. (Suggestions?)

This book has made me rethink where the main harms from slavery may lie. I said before that slaves were most harmed during and soon after capture, and that high interest rates could induce owners to work slaves to an early death. But neither of these apply in the US South, where the main harm had seemed to me to be from using threats of pain to induce more work on simple jobs.

However, this book gives the impression that most threats of pain were not actually directed at making slaves work harder. Slaves did work long hours, but then so did most poor European workers around that time. Slave owners didn’t actually demand that much more work from those capable of more work, instead tending to demand similar hours and effort from all slaves of a similar age, gender, and health.

What seems instead to have caused more pain to US south slaves was the vast number of rules that owners imposed, most of which had little direct connection to key problems like shirking at work, stealing, or running away. Rules varied quite a bit from owner to owner, but there were rules on where and when one could travel, times to rise and sleep, who could marry and live with who, who could talk to who when, when and how to wash bodies and houses, what clothes to wear when, who can cook, who can eat what foods, who goes to what sorts of churches when, and so on. Typical rules for slaves had much in common with typical “upstanding behavior” rules widely imposed by parents on their children, and by schools and armies on students and soldiers: eat well, rise early, keep clean, say your prayers, don’t drink, stay nearby, talk respectfully, don’t fraternize with the wrong people, etc.

With so many rules that varied so much, a standard argument against letting slaves visit neighboring plantations was that they’d less accept local rules if they learned of more lenient rules nearby. And while some owners emphasized enforcing rules via scoldings, fines, or reduction of privileges, most often violations were punished with beatings.

Another big cause of pain seems to have been agency failures with overseers, i.e., those who directly managed the slaves on behalf of the slave owners. Owners of just a few slaves oversaw them directly, and many other owners insisted on personally approving any punishments. However, still others gave full discretion to overseers and refused to listen to slave complaints.

Few overseers had a direct financial stake in farm profitability, and many owners understood that such stakes would tempt overseers, who changed jobs often, to overwork slaves in the short run at the expense of long run profitability. Even so, short run harvest gains were usually easier for owners to see than long run harm to slaves, tempting overseers to sacrifice long run slave health for visible short run output. And even if most overseers were kept well in line, a small fraction who used their discretion to beat and rape could impose high levels of net harm.

US south slave plantations were quite literally small totalitarian governments, and the main harms to such slaves seems to parallel the main libertarian complaints about all governments. A libertarian perspective sees the following pattern: once one group is empowered to run the lives of others, they tend to over-confidently over-manage them, adding too many rules that vary too much, rules enforced with expensive punishments. And such governments tend to give their agents too much discretion, which such agents use too often to indulge personal whims and biases. Think abusive police and an excess prison population today. Such patterns might be explained by an unconscious human habit of dominance via paternalism; while dominant groups tend to justify their rules in terms of helping, they are actually more trying to display their dominance.

Now one might instead argue that the usual “good behavior” rules imposed by parents, schools, militaries, and slave owners are actually helpful on average, turning lazy good-for-nothings into upright citizens. And in practice formal rule systems are so limited that agent discretion is needed to actually get good results. And strong punishments are needed to make it work. Spare the rod, and spoil the child, conscript, or slave. From this perspective, US south slaves must have led decent lives overall, and we should be glad that improving tech is making it easier for modern governments to get involved in more details of our lives.

Looking to the future, if totalitarian management of individual lives is actually efficient, a more competitive future world would see more of it, leading widely to effective if not official slavery. Mostly for our own good. (This fear was common early in the industrial revolution.) But if the libertarians are right, and most dominant groups tend to make too many overly-harsh rules at the expense of efficiency, then a more competitive future world would see less such paternalism, including fewer slave-like lives.


No Short Em Age

The basic premise of my book is that the next big revolution on the scale of the farming and industrial revolutions will come from human level artificial intelligence in the form of brain emulations (ems). Yes, because people have asked, I’ve estimated that this will happen within roughly a century, but that estimate isn’t central. The key is that even if ems take many centuries, they will still come before achieving human level artificial intelligence via the usual methods (UAI – via hand-coded algorithms including statistics), and before other social disruptions of this magnitude.

I’ve argued that this premise is plausible because it is hard to imagine social disruptions as big as AI, and because at past rates of progress UAI should take centuries, while ems look like they’ll be ready sooner. Yes, some people are so impressed by recent UAI demos that they think this time is different, so that we will now see an unprecedented burst of progress all the way to full UAI within a decade or two. But I remember past demos that were similarly impressive relative to then-current abilities.

Some people think the basic premise of my book is too weird, while others see it as not weird enough. This post addresses the most common objection I’ve heard from this second group: that even if ems come first, the usual AI will appear a few hours later, making the age of em too short to be worth much consideration.

Now there is certainly one way big ems make full UAI come faster: by speeding up overall economic growth. I’ve suggested the em economy might double every month or faster, and while some doubt this, few who think my book not weird enough are among them.

Since the economy mainly grows today via innovation, our ladder of growth is basically a ladder of overall innovation. We only double the economy when we have on average doubled our abilities across all economic sectors. So if the relative rates of economic growth and innovation in different sectors stay the same, then speeding up economic growth means speeding up the rate of progress toward full UAI. (While some expect a larger economy to innovate faster because it has more resources, the steady economic growth rates we’ve seen suggest there are contrary forces, such as picking the low hanging fruit of research first.)

For example, at past rates of UAI progress it should take two to four centuries to reach human level abilities in the typical UAI subfield, and thus even longer in most subfields. Since the world economy now doubles roughly every fifteen years, that comes to twenty doublings in three centuries. If ems show up halfway from now to full human level usual AI, there’d still be ten economic doublings to go, which would then take ten months if the economy doubled monthly. Which is definitely faster UAI progress.
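The arithmetic above can be checked with a short sketch; the century and doubling-time figures are the post’s own rough estimates, not measured data:

```python
# Rough estimates from the text, not measured data.
years_to_full_uai = 300        # midpoint of "two to four centuries"
economy_doubling_years = 15    # current world-economy doubling time
em_doubling_months = 1         # conjectured em-economy doubling time

# One rung on the innovation ladder per economic doubling.
total_doublings = years_to_full_uai / economy_doubling_years  # 20 rungs
remaining_at_halfway = total_doublings / 2                    # 10 rungs

# If ems arrive at the halfway point and the economy then doubles monthly:
months_to_full_uai = remaining_at_halfway * em_doubling_months

print(total_doublings, months_to_full_uai)  # 20.0 10.0
```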

However, ten doublings of the economy can encompass a whole era worthy of study. I’ve argued that ems would typically run fast enough to fit a subjective career of a century or more within an economic doubling time, so that their early career training can remain relevant over a whole career. So ten doublings is at least ten subjective centuries, which is plenty of time for lots of cultural and social change. A whole age of change, in fact.

Some argue that the existence of ems would speed up innovation in general, because ems are smarter and innovation benefits more from smarts than does typical production. But even if true, this doesn’t change the relative rate of innovation in UAI relative to other areas.

Some argue that ems speed up UAI progress in particular, via being able to inspect brain circuits in detail and experiment with variations. But as it can be very hard to learn how to code just from inspecting another coder’s spaghetti object code, I’m skeptical that this effect could speed up progress anything like a factor of two, which would be where two (logarithmic) steps on the UAI ladder of progress are now jumped when single steps are on average jumped elsewhere. And even then there’d still be at least five economic doublings in the em era, giving at least five subjective centuries of cultural change.

And we know of substantial contrary effects. First, UAI progress seems driven in part by computer hardware progress, which looks like it will be slower in the coming decades than it has in past decades, relative to other areas of innovation. More important, a big part of em era growth can be due to raw physical growth in production, via making many more ems. If half of em economic growth is due to this process then the em economy makes two (logarithmic) steps of economic growth for every step on the ladder of innovation progress, turning ten ladder steps into twenty doublings. A long age of em.

Some argue that the availability of ems will greatly speed the rate of UAI innovation relative to other rates of innovation. They say things like:

When ems are cheap, you could have a million top (e.g., 100 times average) quality UAI research ems each running at a million times human speed. Since until now we’ve only had a thousand average quality UAI researchers at any one time, UAI progress could be a hundred billion times faster, making what would have taken three centuries now take a tenth of a second. The prize of getting to full UAI first would induce this investment.

There are just so many things wrong with this statement.

First, even if human speed ems are cheap, mega-ems cost at least a million times as much. A million mega-ems are as productive as a trillion humans, times whatever factor by which the typical human-speed em is more productive than a typical human. The em economy would have to have grown a whole lot before it is even possible to devote that level of resources to UAI research. So there can be a whole em era before that point.

Second, this same approach seems equally able to speed up progress in any innovation area that isn’t strongly limited by physical process rates. Areas that only moderately depend on physical rates can spend more to compensate, so that their innovation rates slow only modestly. If only a modest fraction of innovation areas were substantially limited by physical rates, that would only speed up UAI progress by a modest factor relative to overall economic growth.

Third, just because some researchers publish many more academic papers than others doesn’t at all mean that young copies of those researchers assigned to other research areas would have published similarly. Ex ante expected researcher quality varies a lot less than ex post observed research publications. Yes, people often vary by larger factors in their ability to do pure math, relative to other abilities, but pure math contributes only a small fraction to overall innovation.

Fourth, it is well known that most innovation doesn’t come from formal research, and that innovations in different areas help each other. Economists have strong general reasons to expect diminishing returns to useful innovation from adding more researchers. Yes, if you double the number of researchers in one area you’ll probably get twice as many research papers in that area, but that is very different from getting twice as much useful progress.

As I mention in my book, in some cases we’ve actually measured how research progress varies with the number of researchers, and it looks more like a square root dependence. In addition, if innovation rates were linear in the number of formal researchers, then given the tiny fraction of such researchers today we’d have to be vastly underinvesting in them, and so nations who invest more in formal research should expect to see much higher rates of economic growth. Yet we don’t actually see much of a relation between economic growth and spending on formal research. (Yes studies vary, so there could be a modest, but not a huge, effect.)
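As a toy illustration of that square-root dependence (the functional form is the claim under discussion; the researcher counts are made up):

```python
def research_progress(researchers: float) -> float:
    """Toy model: useful progress scales with the square root of the
    number of researchers, i.e., strongly diminishing returns."""
    return researchers ** 0.5

base = research_progress(1000)     # hypothetical current field size
doubled = research_progress(2000)  # field size after doubling

# Doubling researchers yields only ~1.41x the progress, not 2x.
print(round(doubled / base, 2))  # 1.41
```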

So, in sum, we should expect that useful UAI innovation doesn’t mostly come from formal research, and so doubling the number of UAI researchers, or doubling their speed, doesn’t remotely double useful innovation. We aren’t vastly underinvesting in formal research, and so future parties can’t expect to achieve huge gains by making a huge new investment there. We can expect to see modest gain in UAI innovation, relative to today and to other innovation areas, from an ability to inspect and experiment with ems, and from not being very limited by physical process rates. But these give less than a factor of two, and we should see a factor of two in the other direction from slowing hardware gains and from innovation mattering less for economic growth.

Thus we should expect many doublings of the em era after ems and before human level UAI, resulting in many centuries of subjective cultural change for typical ems. Giving an em era that is long enough to be worth considering. If you want to study whatever comes after the em era, understanding the em era should help.


Chace on Age of Em

Soon after I reviewed Calum Chace’s book, he reviewed mine:

I can’t remember ever reading a book before which I liked so much, while disagreeing with so much in it. This partly because the author is such an amiable fellow. .. The writing style is direct, informal and engaging ..  And the book addresses an important subject: the future.

As we disagree on much, I’ll just jump in and start replying.

Robin’s insistence that AI is making only modest advances, and will generate nothing much of interest before uploading arrives, seems dogmatic.

Given two events, my estimating that one is more likely to happen first seems to me no more dogmatic than Chace estimating the opposite.

Because of this claim, he is highly critical of the view that technological unemployment will be widespread in the next few decades. Fair enough, he might be right, but obviously I doubt it. He is also rather dismissive of major changes in society being caused by virtual reality, augmented reality, the internet of things, 3D printing, self-driving cars, and all the other astonishing technologies being developed and introduced as we speak.

I don’t dismiss such changes; they are welcome, and some will happen and matter. I just don’t see them as sufficient reason to think “this time is different” regarding massive job loss; the past saw changes of similar magnitudes.

He seems to think that when the first ems are created, they will very quickly be perfect replications of the target human minds. It seems to me more likely that we will create a series of approximations of the target person.

The em era starts when ems are cheaper than humans for most jobs. Yes of course imperfect emulations come first, but they are far less useful on most jobs. Consider that humans under the influence of recreational drugs are really quite good emulations of normal humans, yet they are much less valuable on most jobs. So emulations need to be even better than that to be very useful.

The humans in this world are all happy to be retired, and have the ems create everything they need. I think the scenario of radical abundance is definitely achievable, but I don’t think it’s a slam dunk, and I would imagine much more interaction – good and bad – between ems and humans than Robin seems to expect.

I don’t understand what kinds of interaction Chace thinks I expect less than he does here.

A couple of smaller but important comments. Robin thinks ems will be intellectually superior to most humans, not least because they will be modelled on the best of us. He therefore thinks they will be religious. Apart from the US, always an exceptional country, the direction of travel in that regard is firmly in the other direction.

In the book I gave citations on religious behavior correlating with work productivity. If someone has contrary citations, I’m all ears.

And space travel. Robin argues that we will keep putting off trying to colonise the stars because whenever you send a ship out there, it would always be overtaken by a later, cheaper one which benefits from better technology. This ignores one of the main reasons for doing it: to improve our chances of survival by making sure all our eggs aren’t in the one basket that is this pale blue dot.

I didn’t say no one would go into space; I pointed out that high interest rates discourage all long term projects, all else equal, including space projects.


Regulating Self-Driving Cars

Warning: I’m sure there’s a literature on this, which I haven’t read. This post is instead based on a conversation with some folks who have read more of it. So I’m “shooting from the hip” here, as they say.

Like planes, boats, submarines, and other vehicles, self-driving cars can be used in several modes. The automation can be turned off. It can be turned on and advisory only. It can be driving, but with the human watching carefully and ready to take over at any time. Or it can be driving with the human not watching very carefully, so that the human would take a substantial delay before being able to take over. Or the human might not be capable of taking over at all; perhaps a remote driver would stand ready to take over via teleoperation.

While we might mostly trust vehicle owners or passengers to decide when to use which modes, existing practice suggests we won’t entirely trust them. Today, after a traffic accident, we let some parties sue others for damages. This can improve driver incentives to drive well. But we don’t trust this to fully correct incentives. So in addition, we regulate traffic. We don’t just suggest that you stop at a red light, keep in one lane, or stay below a speed limit. We require these things, and penalize detected violations. Similarly, we’ll probably want to regulate the choice of self-driving mode.

Consider a standard three-color traffic light. When the light is red, you are not allowed to go. When it is green you are allowed, but not required, to go; sometimes it is not safe to go even when a light is green. When the light is yellow, you are supposed to pay extra attention to a red light coming soon. We could similarly use a three color system as the basis of a three-mode system of regulating self-driving cars.

Imagine that inside each car is a very visible light, which regulators can set to be green, yellow or red. When your light is red you must drive your car yourself, even if you get advice from automation. When the light is yellow you can let the automation take over if you want, but you must watch carefully, ready to take over. When the light is green, you can usually ignore driving, such as by reading or sleeping, though you may watch or drive if you want.

(We might want a standard way to alert drivers when their color changed away from green. Of course we could imagine adding more colors, to distinguish more levels of attention and control. But a three level system seems a reasonable place to start.)

Under this system, the key regulatory choice is the choice of color. This choice could in principle be set differently for each car at each moment. But early on the color would probably be set the same for all cars and drivers of a type, in a particular geographic area at a particular time. The color might come in part from a broadcast signal, with the light perhaps defaulting to red if it can’t get a signal.

One can imagine a very bureaucratic system to set the color, with regulators sitting in a big room filled with monitors, like NASA mission control. That would probably be too conservative and fail to take local circumstances enough into account. Or one might imagine empowering fancy statistical or machine learning algorithms to make the choice. But most any algorithm would make a lot of mistakes, and the choice of algorithm might be politicized, leading to a poor choice.

Let me suggest using prediction markets for this choice. Regulators would have to choose a large set of situation buckets, such that the color must be the same for all situations in the same bucket. Then for each bucket we’d have three markets, estimating the accident rate conditional on a particular color. Assuming that drivers gain some direct benefit from paying less attention to driving, we’d set the color to green unless the expected difference between the green and yellow accident rate became high enough. Similarly for the choice between red and yellow.
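The decision rule above can be written down directly: for each bucket, take the market estimates of the accident rate conditional on each color, and pick the most permissive color whose extra expected accident rate stays under some threshold. This is a minimal sketch under my own assumptions; the thresholds are policy choices, and how the conditional market estimates are produced is left abstract:

```python
def choose_color(rate: dict, green_margin: float, yellow_margin: float) -> str:
    """Pick the most permissive light color for one bucket.

    `rate` maps each color to the prediction-market estimate of the
    accident rate conditional on that color being set for this bucket.
    Green is chosen unless automation costs too many extra accidents
    over supervised automation; similarly for yellow versus red.
    """
    if rate["green"] - rate["yellow"] <= green_margin:
        return "green"
    if rate["yellow"] - rate["red"] <= yellow_margin:
        return "yellow"
    return "red"

# Example bucket: markets say unsupervised automation is nearly as safe
# as supervised automation, so the bucket goes green.
rates = {"green": 0.0012, "yellow": 0.0010, "red": 0.0009}
print(choose_color(rates, green_margin=0.0005, yellow_margin=0.0005))  # green
```

Note that nothing forces the market estimates to be ordered; if automation were estimated to be safer than human driving, green would win immediately.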

Work on combinatorial prediction markets suggests that it is feasible to have billions or more such buckets at a time. We might use audit lotteries and only actually estimate accident rates for some small fraction of these buckets, using bets conditional on such auditing. But even with a much smaller number of buckets, our experience with prediction markets suggests that such a system would work better than either a bureaucratic or statistical system with a similar number of buckets.
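An audit lottery of the kind described can be sketched in a few lines: randomly select a small fraction of buckets whose accident rates will actually be measured, with market bets made conditional on that selection (bets in unaudited buckets are called off). The selection mechanism here is my own illustrative choice; a real system would need a publicly verifiable source of randomness:

```python
import random

def select_audited_buckets(bucket_ids, audit_fraction: float, seed: int = 0):
    """Audit lottery: pick roughly `audit_fraction` of buckets to measure.

    Bets are conditional on a bucket being audited, so traders are paid
    (or lose) only in the small sample of buckets actually measured.
    """
    rng = random.Random(seed)  # stand-in for a verifiable randomness source
    return {b for b in bucket_ids if rng.random() < audit_fraction}

# With a million buckets and a 0.1% audit rate, only about a thousand
# buckets need their accident rates actually estimated.
audited = select_audited_buckets(range(1_000_000), audit_fraction=0.001)
```

Conditional bets keep incentives intact in every bucket while limiting measurement costs to the audited sample.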

Added 1p: My assumptions were influenced by the book Our Robots, Ourselves on the history of automation.


Economic Singularity Review

The Economic Singularity: Artificial intelligence and the death of capitalism .. This new book from best-selling AI writer Calum Chace argues that within a few decades, most humans will not be able to work for money.

A strong claim! This book mentions me by name 15 times, especially on my review of Martin Ford’s Rise of the Robots, wherein I complain that Ford’s main evidence for saying “this time is different” is all the impressive demos he’s seen lately. Even though this was the main reason given in each previous automation boom for saying “this time is different.” This seems to be Chace’s main evidence as well:

Faster computers, the availability of large data sets, and the persistence of pioneering researchers have finally rendered [deep learning] effective this decade, leading to “all the impressive computing demos” referred to by Robin Hanson in chapter 3.3, along with some early applications. But the major applications are still waiting in the wings, poised to take the stage. ..

It’s time to answer the question: is it really different this time? Will machine intelligence automate most human jobs within the next few decades, and leave a large minority of people – perhaps a majority – unable to gain paid employment? It seems to me that you have to accept that this proposition is at least possible if you admit the following three premises: 1. It is possible to automate the cognitive and manual tasks that we carry out to do our jobs. 2. Machine intelligence is approaching or overtaking our ability to ingest, process and pass on data presented in visual form and in natural language. 3. Machine intelligence is improving at an exponential rate. This rate may or may not slow a little in the coming years, but it will continue to be very fast. No doubt it is still possible to reject one or more of these premises, but for me, the evidence assembled in this chapter makes that hard.

Well of course it is possible for this time to be different. But, um, why can’t these three statements have been true for centuries? It will eventually be possible to automate tasks, and we have been slowly but exponentially “approaching” that future point for centuries. And so we may still have centuries to go. As I recently explained, exponential tech growth is consistent with a relatively constant rate at which jobs are displaced by automation.

Chace makes a specific claim that seems to me quite wrong.

Geoff Hinton – the man whose team won the landmark 2012 ImageNet competition – went further. In May 2015 he said that he expects machines to demonstrate common sense within a decade. .. Facebook has declared its ambition to make Hinton’s prediction come true. To this end, it established a basic research unit in 2013 called Facebook Artificial Intelligence Research (FAIR) with 50 employees, separate from the 100 people in its Applied Machine Learning team. So within a decade, machines are likely to be better than humans at recognising faces and other images, better at understanding and responding to human speech, and may even be possessed of common sense. And they will be getting faster and cheaper all the time. It is hard to believe that this will not have a profound impact on the job market.

I’ll give 50-1 odds against full human-level common sense AI within a decade! Chace, I offer my $5,000 against your $100. Also happy to bet on “profound” job market impact, as I mentioned in my review of Ford. Chace, to his credit, sees value in such bets:

The economist Robin Hanson thinks that machines will eventually render most humans unemployed, but that it will not happen for many decades, probably centuries. Despite this scepticism, he proposes an interesting way to watch out for the eventuality: prediction markets. People make their best estimates when they have some skin in the forecasting game. Offering people the opportunity to bet real money on when they see their own jobs or other peoples’ jobs being automated may be an effective way to improve our forecasting.

Finally, Chace repeats Ford’s error in claiming economic collapse if median wages fall:

But as more and more people become unemployed, the consequent fall in demand will overtake the price reductions enabled by greater efficiency. Economic contraction is pretty much inevitable, and it will get so serious that something will have to be done. .. A modern developed society is not sustainable if a majority of its citizens are on the bread line.

Really, an economy can do fine if average demand is high and growing, even if median demand falls. It might be ethically lamentable, and the political system may have problems, but markets can do just fine.


My Caplan Turing Test

At lunch today Bryan Caplan and I dug a bit into our disagreement, and now I’ll try to summarize his point of view. He can of course correct me.

Bryan sees sympathy feelings as huge influences on social outcomes. Not just feelings between people who know each other well, but also distant feelings between people who have never met. For example, if not for feelings of sympathy:

  1. Law and courts would often favor different disputants.
  2. Free workers would more often face harsh evaluations, punishments, and firing.
  3. Firm owners and managers would know much better which workers were doing good jobs.
  4. The US would invade and enslave Canada tomorrow.
  5. At the end of most wars, the victors would enslave the losers.
  6. Modern slaves would earn their owners much more than they would have as free workers.
  7. In the past, domestic, artisan, and city slaves, who were treated better than field slaves, would have been treated much more harshly.
  8. The slave population would have fallen less via gifts or purchase of freedom.
  9. Thus most of the world population today would be slaves.

These views are, to me, surprisingly different from the impression I get from reading related economics literatures. Bryan says I may be reading the wrong ones, but he hasn’t yet pointed me to the correct ones. As I read them, these usual economics literatures give different impressions:

  • Law and economics literature suggests efficiency usually decides who wins, with sympathy distortions having a real but minor influence.
  • Organization theory literature suggests far more difficulties in motivating workers and measuring their performance.
  • Slavery literature suggests slaves doing complex jobs were treated less harshly for incentive reasons, and would not have earned much more if treated more harshly. Thus modern workers would not earn their owners much more as slaves.

Of course even if Bryan were right about all these claims, he needn’t be right in his confident opinion that the vast majority of biological humans will have about as much sympathy for ems as they do for mammals, and thus treat ems as harshly as we treat most mammals.

This sympathy-driven view doesn’t by itself predict Caplan’s strong (and not much explained) view that ems would also be very robot-like. But perhaps we might add to it a passion for domination – people driven by feelings to treat nicely creatures they respect might also be driven by feelings to dominate creatures they do not respect. Such a passion for dominance might induce biological humans to force ems into ultra docility, even if that came at a productivity cost.

Added 28July2016: Caplan grades my summary of his position. I’m mostly in the ballpark, but he elaborates a bit on why he thinks em slaves would be docile:

Docile slaves are more profitable than slaves with attitude, because owners don’t have to use resources to torture and scare them into compliance.  That’s why owners sent rebellious slaves to “breakers”: to transform rebellious slaves into docile slaves.  Sci-fi is full of stories about humans genetically engineered to be model slaves.  Whole brain emulation is a quicker route to the same destination.  What’s the puzzle?

For docility to be such a huge priority, relative to other worker features, em rebellion must happen often and impose big frequent costs. Docility doesn’t describe our most productive workers today well, nor does it seem well suited when you want workers to be creative, think carefully, take the initiative, or persuade and inspire others. Either outcome, frequent costly rebellions or extreme docility, creates big disadvantages for slaves relative to free workers, and so argues against most ems being slaves.
