Tag Archives: AI

Reply to Christiano on AI Risk

Paul Christiano was one of those who encouraged me to respond to non-foom AI risk concerns. Here I respond to two of his posts that he directed me to. The first one says we should worry about the following scenario:

Imagine using [reinforcement learning] to implement a decentralized autonomous organization (DAO) which maximizes its profit. .. to outcompete human organizations at a wide range of tasks — producing and selling cheaper widgets, but also influencing government policy, extorting/manipulating other actors, and so on.

The shareholders of such a DAO may be able to capture the value it creates as long as they are able to retain effective control over its computing hardware / reward signal. Similarly, as long as such DAOs are weak enough to be effectively governed by existing laws and institutions, they are likely to benefit humanity even if they reinvest all of their profits.

But as AI improves, these DAOs would become much more powerful than their human owners or law enforcement. And we have no ready way to use a prosaic AGI to actually represent the shareholder’s interests, or to govern a world dominated by superhuman DAOs. In general, we have no way to use RL to actually interpret and implement human wishes, rather than to optimize some concrete and easily-calculated reward signal. I feel pessimistic about human prospects in such a world. (more)

In a typical non-foom world, if one DAO has advanced abilities, then most other organizations, including government and the law, have similar abilities. So such DAOs shouldn’t find it much easier to evade contracts or regulation than do organizations today. Thus humans can be okay if law and government still respect human property rights or political representation. Sure, it might be hard to trust such a DAO to manage your charity, if you don’t trust it to judge who is most in need. But you might well trust it to give you financial returns on your investments in it.

Paul Christiano’s second post suggests that the arrival of AI will forever lock in the distribution of patient values at that time:

The distribution of wealth in the world 1000 years ago appears to have had a relatively small effect—or more precisely an unpredictable effect, whose expected value was small ex ante—on the world of today. I think there is a good chance that AI will fundamentally change this dynamic, and that the distribution of resources shortly after the arrival of human-level AI may have very long-lasting consequences. ..

Whichever values were most influential at one time would remain most influential (in expectation) across all future times. .. The great majority of resources are held by extremely patient values. .. The development of machine intelligence may move the world much closer to this naïve model. .. [Because] the values of machine intelligences can (probably, eventually) be directly determined by their owners or predecessors. .. it may simply be possible to design a machine intelligence who exactly shares their predecessor’s values and who can serve as a manager. .. the arrival of machine intelligence may lead to a substantial crystallization of influence .. an event with long-lasting consequences. (more)

That is, Christiano says future AI won’t have problems preserving its values over time, nor need it pay agency costs to manage subsystems. Relatedly, Christiano elsewhere claims that future AI systems won’t have problems with design entrenchment:

[Total output] over the next 100 years greatly exceeds total output over all of history. I agree that coordination is hard, but even spending a small fraction of current effort on exploring novel redesigns would be enough to quickly catch up with stuff designed in the past.

A related claim, which Christiano supports to some degree, is that future AIs will be smart enough to avoid suffering from coordination failures. They may even use “acausal trade” to coordinate when physical interaction of any sort is impossible!

In our world, more competent social and technical systems tend to be larger and more complex, and such systems tend to suffer more (in % cost terms) from issues of design entrenchment, coordination failures, agency costs, and preserving values over time. In larger complex systems, it becomes harder to isolate small parts that encode “values”; a great many diverse parts end up influencing what such systems do in any given situation.

Yet Christiano expects the opposite for future AI; why? I fear his expectations result more from far view idealizations than from observed trends in real systems. In general, we see things far away in less detail, and draw inferences about them more from top level features and analogies than from internal detail. Yet even though we know less about such things, we are more confident in our inferences! The claims above seem to follow from the simple abstract description that future AI is “very smart”, and thus better in every imaginable way. This is reminiscent of medieval analysis that drew so many conclusions about God (including his existence) from the “fact” that he is “perfect.”

But even if values will lock in when AI arrives, and then stay locked, that still doesn’t justify great efforts to study AI control today, at least relative to the other options of improving our control mechanisms in general, or saving resources now to spend later, either on studying AI control problems when we know more about AI, or just to buy influence over the future when that comes up for sale.


Tegmark’s Book of Foom

Max Tegmark says his new book, Life 3.0, is about what happens when life can design not just its software, as humans have done in Life 2.0, but also its hardware:

Life 1.0 (biological stage) evolves its hardware and software
Life 2.0 (cultural stage) evolves its hardware, designs much of its software
Life 3.0 (technological stage): designs its hardware and software ..
Many AI researchers think that Life 3.0 may arrive during the coming century, perhaps even during our lifetime, spawned by progress in AI. What will happen, and what will this mean for us? That’s the topic of this book. (29-30)

Actually, it’s not. The book says little about redesigning hardware. While it says interesting things on many topics, its core is on a future “singularity” where AI systems quickly redesign their own software. (A scenario sometimes called “foom”.)

The book starts out with a 19 page fictional “scenario where humans use superintelligence to take over the world.” A small team, apparently seen as unthreatening by the world, somehow knows how to “launch” a “recursive self-improvement” in a system focused on “one particular task: programming AI Systems.” While initially “subhuman”, within five hours it redesigns its software four times and becomes superhuman at its core task, and so “could also teach itself all other human skills.”

After five more hours and redesigns it can make money by doing half of the tasks at Amazon Mechanical Turk acceptably well. And it does this without having access to vast amounts of hardware or to large datasets of previous performance on such tasks. Within three days it can read and write like humans, and create world class animated movies to make more money. Over the next few months it goes on to take over the news media, education, world opinion, and then the world. It could have taken over much faster, except that its human controllers were careful to maintain control. During this time, no other team on Earth is remotely close to being able to do this.

Tegmark later explains: Continue reading "Tegmark’s Book of Foom" »


Can Human-Like Software Win?

Many, perhaps most, think it obvious that computer-like systems will eventually be more productive than human-like systems in most all jobs. So they focus on how humans might maintain control, even after this transition. But this eventuality is less obvious than it seems, depending on what exactly one means by “human-like” or “computer-like” systems. Let me explain.

Today the software that sits in human brains is stuck in human brain hardware, while the other kinds of software that we write (or train) sit in the artificial hardware that we make. And this artificial hardware has been improving far more rapidly than has human brain hardware. Partly as a result of this, systems of artificial software and hardware have been improving rapidly compared to human brain systems.

But eventually we will find a way to transfer the software from human brains into artificial hardware. Ems are one way to do this, as a relatively direct port. But other transfer mechanisms may be developed.

Once human brain software is in the same sort of artificial computing hardware as all the other software, then the relative productivity of different software categories comes down to a question of quality: which categories of software tend to be more productive on which tasks?

Of course there will be many different variations available within each category, to match to different problems. And the overall productivity of each category will depend both on previous efforts to develop and improve software in that category, and also on previous investments in other systems to match and complement that software. For example, familiar artificial software will gain because we have spent longer working to match it to familiar artificial hardware, while human software will gain from being well matched to complex existing social systems, such as language, firms, law, and government.

People give many arguments for why they expect human-like software to mostly lose this future competition, even when it has access to the same hardware. For example, they say that other software could lack human biases and also scale better, have more reliable memory, communicate better over wider scopes, be easier to understand, have easier meta-control and self-modification, and be based more directly on formal abstract theories of learning, decision, computation, and organization.

Now consider two informal polls I recently gave my twitter followers:

Surprisingly, at least to me, the main reason that people expect human-like software to lose is that they mostly expect whole new categories of software to appear, categories quite different from both the software in the human brain and also all the many kinds of software with which we are now familiar. If it comes down to a contest between human-like and familiar software categories, only a quarter of them expect human-like to lose big.

The reason I find this surprising is that all of the reasons that I’ve seen given for why human-like software could be at a disadvantage seem to apply just as well to familiar categories of software. In addition, a new category must start with the disadvantages of having less previous investment in that category and in matching other systems to it. That is, none of these are reasons to expect imagined new categories of software to beat familiar artificial software, and yet people offer them as reasons to think whole new much more powerful categories will appear and win.

I conclude that people don’t mostly use specific reasons to conclude that human-like software will lose, once it can be moved to artificial hardware. Instead they just have a general belief that the space of possible software is huge and contains many new categories to discover. This just seems to be the generic belief that competition and innovation will eventually produce a lot of change. It’s not that human-like software has any overall competitive disadvantage compared to concrete known competitors; it is at least as likely to have winning descendants as any such competitors. It’s just that our descendants are likely to change a lot as they evolve over time. Which seems to me a very different story than the humans-are-sure-to-lose story we usually hear.


Foom Justifies AI Risk Efforts Now

Years ago I was honored to share this blog with Eliezer Yudkowsky. One of his main topics then was AI Risk; he was one of the few people talking about it back then. We debated this topic here, and while we disagreed I felt we made progress in understanding each other and exploring the issues. I assigned a much lower probability than he to his key “foom” scenario.

Recently AI risk has become something of an industry, with far more going on than I can keep track of. Many call working on it one of the most effectively altruistic things one can possibly do. But I’ve searched a bit and as far as I can tell that foom scenario is still the main reason for society to be concerned about AI risk now. Yet there is almost no recent discussion evaluating its likelihood, and certainly nothing that goes into as much depth as did Eliezer and I. Even Bostrom’s book length treatment basically just assumes the scenario. Many seem to think it obvious that if one group lets one AI get out of control, the whole world is at risk. It’s not (obvious).

As I just revisited the topic while revising Age of Em for paperback, let me try to summarize part of my position again here. Continue reading "Foom Justifies AI Risk Efforts Now" »


Better Babblers

You can think of knowing how to write as knowing how to correlate words. Given no words, what first word should you write? Then given one word, what second word best correlates with that? Then given two words, what third word best fits with those two? And so on. Thus your knowledge of how to write can be broken into what you know at these different correlation orders: one word, two words, three words, and so on. Each time you pick a new word you can combine knowledge at these different orders, by weighing all their different recommendations for your next word.

This correlation order approach can also be applied at different scales. For example, given some classification of your first sentence, what kind of second sentence should follow? Given a classification of your first chapter, what kind of second chapter should follow? Many other kinds of knowledge can be similarly broken down into correlation orders, at different scales. We can do this for music, paintings, interior decoration, computer programs, math theorems, and so on.

Given a huge database, such as of writings, it is easy to get good at very low orders; you can just use the correlation frequencies found in your dataset. After that, simple statistical models applied to this database can give you good estimates for correlations to use at somewhat higher orders. And if you have enough data (roughly ten million examples per category I’m told) then recently popular machine learning techniques can improve your estimates at a next set of higher orders.
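
These low order correlations are essentially what a simple n-gram model captures. Here is a minimal sketch in plain Python, with a toy corpus of my own (the corpus, names, and sample output are my illustration, not anything from the post):

```python
# A minimal "babbler": a bigram (order-two) Markov text generator built
# from raw word-pair frequencies in a toy corpus.
import random
from collections import defaultdict

corpus = "the weather is nice today and the weather is mild and nice".split()

# Count which words follow which word (second order correlations).
following = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word].append(next_word)

def babble(start, length=8):
    """Generate text by sampling a next word given only the previous word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(babble("the"))  # e.g. "the weather is nice today and the weather is"
```

Higher order versions just condition on longer word histories, which is why they need far more data.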

There are some cases where this is enough; either you can get enormous amounts of data, or learning low order correlations well is enough to solve your problem. These cases include many games with well defined rules, many physical tasks where exact simulations are feasible, and some kinds of language translation. But there are still many other cases where this is far from enough to achieve human level proficiency. In these cases an important part of what we know can be described as very high order correlations produced by “deep” knowledge structures that aren’t well reduced to low order correlations.

After eighteen years of being a professor, I’ve graded many student essays. And while I usually try to teach a deep structure of concepts, what the median student actually learns seems to mostly be a set of low order correlations. They know what words to use, which words tend to go together, which combinations tend to have positive associations, and so on. But if you ask an exam question where the deep structure answer differs from the answer you’d guess by looking at low order correlations, most students give the wrong answer.

Simple correlations also seem sufficient to capture most polite conversation talk, such as the weather is nice, how is your mother’s illness, and damn that other political party. Simple correlations are also most of what I see in inspirational TED talks, and when public intellectuals and talk show guests pontificate on topics they really don’t understand, such as quantum mechanics, consciousness, postmodernism, or the need always for more regulation everywhere. After all, media entertainers don’t need to understand deep structures any better than do their audiences.

Let me call styles of talking (or music, etc.) that rely mostly on low order correlations “babbling”. Babbling isn’t meaningless, but to ignorant audiences it often appears to be based on a deeper understanding than is actually the case. When done well, babbling can be entertaining, comforting, titillating, or exciting. It just isn’t usually a good place to learn deep insight.

As we slowly get better at statistics and machine learning, our machines will slowly get better at babbling. The famous Eliza chatbot went surprisingly far using very low order correlations, and today chatbots best fool us into thinking they are human when they stick to babbling style conversations. So what does a world of better babblers look like?

First, machines will better mimic low quality student essays, so schools will have to try harder to keep such students from using artificial babblers.

Second, the better machines get at babbling, the more humans will try to distinguish themselves from machines via non-babbling conversational styles. So expect less use of simple easy-to-understand-and-predict speech in casual polite conversation, inspirational speeches, and public intellectual talk.

One option is to put a higher premium on talk that actually makes deep sense, in terms of deep concepts that experts understand. That would be nice for those of us who have always emphasized such things. But alas there are other options.

A second option is to put a higher premium on developing very distinctive styles of talking. This would be like how typical popular songs from two centuries ago could be sung and enjoyed by most anyone, compared to how popular music today is matched in great detail to the particular features of particular artists. Imagine most all future speakers having as distinct a personal talking style.

A third option is more indirect, ironic, and insider style talk, such as we tend to see on Twitter today. People using words and phrases and cultural references in ways that only folks very near in cultural space can clearly accept as within recent local fashion. Artificial babblers might not have enough data to track changing fashions in such narrow groups.

Bottom line: the more kinds of conversation styles that simple machines can manage, the more humans will try to avoid talking in those styles, at least when not talking to machines.


Brains Simpler Than Brain Cells?

Consider two possible routes to generating human level artificial intelligence (AI): brain emulation (ems) versus ordinary AI (wherein I lump together all the other usual approaches to making smart code). Both approaches require that we understand something well enough to create a functional replacement for it. Ordinary AI requires this for entire brains, while ems require this only for brain cells.

That is, to make ordinary AI we need to find algorithms that can substitute for most everything useful that a human brain does. But to make brain emulations, we need only find models that can substitute for what brain cells do for brains: take input signals, change internal states, and then send output signals. (Such brain cell models need not model most of the vast complexity of cells, complexity that lets cells reproduce, defend against predators, etc.)
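
For a sense of how simple such a functional stand-in can be, here is a minimal sketch of a leaky integrate-and-fire style cell model; the class, parameter values, and toy inputs are my own illustration, not any actual emulation model:

```python
# A toy brain cell model: accumulate input signals in an internal state,
# leak some of that state each step, and emit an output spike on crossing
# a threshold. Real emulation models would need far more chemical and
# timing detail, but the same input/state/output shape.
class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # fire when internal state crosses this
        self.leak = leak            # fraction of state retained each step
        self.potential = 0.0        # the internal state

    def step(self, input_signal):
        """Take one input signal, update internal state, return 1 on a spike else 0."""
        self.potential = self.leak * self.potential + input_signal
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1
        return 0

neuron = LIFNeuron()
print([neuron.step(x) for x in [0.3, 0.4, 0.5, 0.1, 0.9]])  # [0, 0, 1, 0, 0]
```

The point is not that this particular model suffices, but that the modeling target is “signals in, state change, signals out,” not the full biology of a cell.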

To make an em, we will also require brain scans at a sufficient spatial and chemical resolution, and enough cheap fast parallel computers. But the difficulty of achieving these other requirements scales with the difficulty of modeling brain cells. The simpler brain cells are, the less detail we’ll need to scan, and the smaller computers we’ll need to emulate them. So the relative difficulty of ems vs ordinary AI mainly comes down to the relative model complexity of brain cells versus brains.

Today we are seeing a burst of excitement about rapid progress in ordinary AI. While we’ve seen such bursts every decade or two for a long time, many people say “this time is different.” Just as they’ve done before; for a long time the median published forecast has said human level AI will appear in thirty years, and the median AI researcher surveyed has said forty years. (Even though such people estimate 5-10x slower progress in their subfield in the past twenty years.)

In contrast, we see far less excitement now about rapid progress in brain cell modeling. Few neuroscientists publicly estimate brain emulations soon, and no one has even bothered to survey them. Many take these different levels of hype and excitement as showing that in fact brains are simpler than brain cells – we will more quickly find models and algorithms that substitute for brains than we will those that can substitute for brain cells.

Now while it just isn’t possible for brains to be simpler than brain cells, it is possible for our best models that substitute for brains to be simpler than our best models that substitute for brain cells. This requires only that brains be far more complex than our best models that substitute for them, and that our best models that substitute for brain cells are not far less complex than such cells. That is, humans will soon discover a solution to the basic problem of how to construct a human-level intelligence that is far simpler than the solution evolution found, but evolution’s solution is strongly tied to its choice of very complex brain cells, cells whose complexity cannot be substantially reduced via clever modeling. While evolution searched hard for simpler cheaper variations on the first design it found that could do the job, all of its attempts to simplify brains and brain cells destroyed the overall intelligence that it sought to maintain.

So maybe what the median AI researcher and his or her fans have in mind is that the intelligence of the human brain is essentially simple, while brain cells are essentially complex. This essential simplicity of intelligence view is what I’ve attributed to my ex-co-blogger Eliezer Yudkowsky in our foom debates. And it seems consistent with a view common among fast AI fans that once AI displaces humans, AIs would drop most of the distinctive features of human minds and behavior, such as language, laughter, love, art, etc., and also most features of human societies, such as families, friendship, teams, law, markets, firms, nations, conversation, etc. Such people tend to see such human things as useless wastes.

In contrast, I see the term “intelligence” as mostly used to mean “mental betterness.” And I don’t see a good reason to think that intelligence is intrinsically much simpler than betterness. Human brains sure look complex, and even if big chunks of them by volume may be modeled simply, the other chunks can contain vast complexity. Humans really do a very wide range of tasks, and successful artificial systems have only done a small range of those tasks. So even if each task can be done by a relatively simple system, it may take a complex system to do them all. And most of the distinctive features of human minds and societies seem to me functional – something like them seems useful in most large advanced societies.

In contrast, for the parts of the brain that we’ve been able to emulate, such as parts that process the first inputs of sight and sound, what brain cells there do for the brain really does seem pretty simple. And in most brain organs what most cells do for the body is pretty simple. So the chances look pretty good that what most brain cells do for the brain is pretty simple.

So my bet is that brain cells can be modeled more simply than can entire brains. But some seem to disagree.


Idea Talkers Clump

I keep encountering people who are mad at me, indignant even, for studying the wrong scenario. While my book assumes that brain emulations are the first kind of broad human-level AI, they expect more familiar AI, based on explicitly-coded algorithms, to be first.

Now the prospect of human-level ordinary AI is definitely what more people are talking about today – the topic is in fashion. There are AI companies, demos, conferences, media articles, and more serious intellectual discussion. In fact, I’d estimate that there is now at least one hundred times as much attention given to the scenario of human level AI based on explicit coding (including machine learning code) as to brain emulations.

But I very much doubt that ordinary AI first is over one hundred times as probable as em-based AI first. In fact, I’ll happily take bets at a factor of ten. You pay me $1000 if em-AI comes first, and I pay you $100 if other AI comes first.
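
To spell out the arithmetic behind “a factor of ten” (my own illustration of the implied odds, not something stated in the post):

```python
# From the counterparty's side of the bet: pay $1000 if em-AI comes first,
# receive $100 if other AI comes first. Find the em-first probability at
# which the bet breaks even in expectation.
pay_if_em_first = 1000
gain_if_other_first = 100

# Break even when: -1000 * p + 100 * (1 - p) = 0
p_breakeven = gain_if_other_first / (pay_if_em_first + gain_if_other_first)
print(round(p_breakeven, 3))  # 0.091, i.e. roughly 1 in 11
```

So taking the other side is only profitable in expectation for someone who thinks ordinary AI first is more than ten times as likely as em-AI first.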

In addition, due to diminishing returns, intellectual attention to future scenarios should probably be spread out more evenly than are probabilities. The first efforts to study each scenario can pick the low hanging fruit to make faster progress. In contrast, after many have worked on a scenario for a while there is less value to be gained from the next marginal effort on that scenario.

Yes, sometimes there can be scale economies to work on a topic; enough people need to do enough work to pass a critical threshold of productivity. But I see little evidence of that here, and much evidence to the contrary. Even within the scope of working on my book I saw sharply diminishing returns to continued efforts. So even if em-based AI had only 1% the chance of the other scenario, we’d want much more than 1% of thinkers to study it. At least we would if our goal were better understanding.

But of course that is not usually the main goal of individual thinkers. We are more eager to jump on bandwagons than to follow roads less traveled. All those fellow travelers validate us and our judgement. We prefer to join and defend a big tribe against outsiders, especially smaller weaker outsiders.

So instead of praising my attention to a neglected if less-likely topic, those who think em-AI less likely mostly criticize me for studying the wrong scenario. And continue to define topics of articles, conferences, special journal issues, etc. to exclude em-AI scenarios.

And this is how it tends to work in general in the world of ideas. Idea talkers tend to clump onto the topics that others have discussed lately, leaving topics outside the fashionable clumps with less attention relative to their importance. So if you are a thinker with the slack and independence to choose your own topics, an easy way to make disproportionate intellectual progress is to focus on neglected topics.

Of course most intellectuals already know this, and choose otherwise.

Added: Never mind the suggestion above that effort should be spread more evenly than chances; Owen Cotton-Barratt reminded me that if value diminishes with the log of effort, optimal scenario effort is proportional to probability.
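
A quick numeric check of Cotton-Barratt’s point (a sketch assuming a two-scenario split of a fixed budget; the probabilities and budget are arbitrary illustrations):

```python
# If the value of studying scenario i is p_i * log(effort_i), the split of a
# fixed budget that maximizes total value puts effort in proportion to p_i.
import math

def total_value(split, p=(0.9, 0.1), budget=100.0):
    e1, e2 = split * budget, (1 - split) * budget
    return p[0] * math.log(e1) + p[1] * math.log(e2)

best_split = max((s / 1000 for s in range(1, 1000)), key=total_value)
print(best_split)  # 0.9 -- effort share matches the 0.9 probability
```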

Added 11Oct: Anders Sandberg weighs in.


No Third AI Way

A few days ago in the Post:

Bryan Johnson .. wants to .. find a way to supercharge the human brain so that we can keep up with the machines. .. His science-fiction-meets-science start-up, Kernel, is building a tiny chip that can be implanted in the brain. .. Top neuroscientists who are building the chip .. hope that in the longer term, it will be able to boost intelligence, memory and other cognitive tasks. .. In an age of AI, he insists that boosting the capacity of our brains is itself an urgent public concern.

In a MeaningOfLife.tv video discussion between James Hughes and me, just posted today, Hughes said:

One of the reasons why I’m skeptical about the [em] scenario that you’ve outlined, is that I see a scenario where brains extending themselves through AI and computing tools basically slaved to the core personal identity of meat brains is a more likely scenario than one where we happily acknowledge the rights and autonomy of virtual persons. .. We need to have the kind of AI in our brain which is not just humans 1.0 that get shuffled off to the farm while the actual virtual workers do all the work, as you have imagined.

Many hope for a “third way” alternative to both ems and more standard AI software taking all the jobs. They hope that instead “we” can keep our jobs via new chips “in” or closely integrated with our brain. This seems to me mostly a false hope.

Yes of course if we have a strong enough global political coordination we could stake out a set of officially human jobs and forbid machines from doing them, no matter how much better machines might be at them. But if we don’t have such strong coordination, then the key question is whether there is an important set of jobs or tasks where ordinary human brains are more productive than artificial hardware. Having that hardware be located in server racks in distant data centers, versus in chips implanted in human brains, seems mostly irrelevant to this.

If artificial hardware can be similarly effective at such tasks, then it can have enormous economic advantages relative to human brains. Even today, the quantity of artificial hardware can be increased very rapidly in factories. And eventually, artificial hardware can be run at much faster speeds, using much less energy. Humans, in contrast, grow very slowly, have limited brain speeds, and are fragile and expensive. It is very hard to see humans outcompeting artificial hardware at such tasks unless the artificial hardware is just very bad at such tasks. That is in fact the case today, but it would not at all be the case with ems, nor with other AI with similar general mental abilities.


No Short Em Age

The basic premise of my book is that the next big revolution on the scale of the farming and industrial revolutions will come from human level artificial intelligence in the form of brain emulations (ems). Yes, because people have asked I’ve estimated that this will happen within roughly a century, but that estimate isn’t central. The key is that even if ems take many centuries, they will still come before achieving human level artificial intelligence via the usual methods (UAI – via hand-coded algorithms including statistics), and before other social disruptions of this magnitude.

I’ve argued that this premise is plausible because it is hard to imagine social disruptions as big as AI, and because at past rates of progress UAI should take centuries, while ems look like they’ll be ready sooner. Yes, some people are so impressed by recent UAI demos that they think this time is different, so that we will now see an unprecedented burst of progress all the way to full UAI within a decade or two. But I remember past demos that were similarly impressive relative to then-current abilities.

Some people think the basic premise of my book is too weird, while others see it as not weird enough. This post addresses the most common objection I’ve heard from this second group: that even if ems come first, the usual AI will appear a few hours later, making the age of em too short to be worth much consideration.

Now there is certainly one way big ems make full UAI come faster: by speeding up overall economic growth. I’ve suggested the em economy might double every month or faster, and while some doubt this, few who think my book not weird enough are among them.

Since the economy mainly grows today via innovation, our ladder of growth is basically a ladder of overall innovation. We only double the economy when we have on average doubled our abilities across all economic sectors. So if the relative rates of economic growth and innovation in different sectors stay the same, then speeding up economic growth means speeding up the rate of progress toward full UAI. (While some expect a larger economy to innovate faster because it has more resources, the steady economic growth rates we’ve seen suggest there are contrary forces, such as picking the low hanging fruit of research first.)

For example, at past rates of UAI progress it should take two to four centuries to reach human level abilities in the typical UAI subfield, and thus even longer in most subfields. Since the world economy now doubles roughly every fifteen years, that comes to twenty doublings in three centuries. If ems show up halfway from now to full human level usual AI, there’d still be ten economic doublings to go, which would then take ten months if the economy doubled monthly. Which is definitely faster UAI progress.
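
Spelled out, using the post’s own round numbers (a sketch; all inputs here are the rough figures from the paragraph above, not new estimates):

```python
# Doublings needed to reach full human level UAI at past rates of progress.
years_to_full_uai = 300        # "two to four centuries"; take three
years_per_doubling_now = 15    # world economy doubles roughly every 15 years
doublings_to_full_uai = years_to_full_uai / years_per_doubling_now
print(doublings_to_full_uai)   # 20.0 doublings in three centuries

# If ems arrive halfway along that ladder, and the em economy doubles monthly:
doublings_left = doublings_to_full_uai / 2
months_per_em_doubling = 1
print(doublings_left * months_per_em_doubling)  # 10.0 months of em era
```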

However, ten doublings of the economy can encompass a whole era worthy of study. I’ve argued that ems would typically run fast enough to fit a subjective career of a century or more within an economic doubling time, so that their early career training can remain relevant over a whole career. So ten doublings is at least ten subjective centuries, which is plenty of time for lots of cultural and social change. A whole age of change, in fact.

Some argue that the existence of ems would speed up innovation in general, because ems are smarter and innovation benefits more from smarts than does typical production. But even if true, this doesn’t change the relative rate of innovation in UAI relative to other areas.

Some argue that ems speed up UAI progress in particular, via being able to inspect brain circuits in detail and experiment with variations. But as it can be very hard to learn how to code just from inspecting spaghetti object code from other coders, I’m skeptical that this effect could speed up progress anything like a factor of two, which would be where two (logarithmic) steps on the UAI ladder of progress are now jumped when single steps are on average jumped elsewhere. And even then there’d still be at least five economic doublings in the em era, giving at least five subjective centuries of cultural change.

And we know of substantial contrary effects. First, UAI progress seems driven in part by computer hardware progress, which looks like it will be slower in the coming decades than it has in past decades, relative to other areas of innovation. More important, a big part of em era growth can be due to raw physical growth in production, via making many more ems. If half of em economic growth is due to this process then the em economy makes two (logarithmic) steps of economic growth for every step on the ladder of innovation progress, turning ten ladder steps into twenty doublings. A long age of em.

Some argue that the availability of ems will greatly speed the rate of UAI innovation relative to other rates of innovation. They say things like:

When ems are cheap, you could have a million top (e.g., 100 times average) quality UAI research ems each running at a million times human speed. Since until now we’ve only had a thousand average quality UAI researchers at any one time, UAI progress could be a hundred billion times faster, making what would have taken three centuries now take a tenth of a second. The prize of getting to full UAI first would induce this investment.

There are just so many things wrong with this statement.

First, even if human speed ems are cheap, mega-ems cost at least a million times as much. A million mega-ems are as productive as a trillion humans, times whatever factor by which the typical human-speed em is more productive than a typical human. The em economy would have to have grown a whole lot before it is even possible to devote that level of resources to UAI research. So there can be a whole em era before that point.
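
To make the arithmetic explicit (a sketch using only the round numbers from the quote and the reply above):

```python
# Labor tied up in the imagined research effort, in human-speed equivalents.
ems = 1_000_000               # "a million top quality UAI research ems"
speedup = 1_000_000           # each "running at a million times human speed"
quality = 100                 # "100 times average" researcher quality
baseline_researchers = 1_000  # "a thousand average quality UAI researchers"

print(ems * speedup)  # 10**12: a trillion human-speed equivalents of labor

# The implied speedup of UAI research relative to today, as the quote counts it:
print(ems * speedup * quality / baseline_researchers)  # 10**11: "a hundred billion times"
```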

Second, this same approach seems equally able to speed up progress in any innovation area that isn’t strongly limited by physical process rates. Areas that only moderately depend on physical rates can spend more to compensate, so that their innovation rates slow only modestly. If only a modest fraction of innovation areas were substantially limited by physical rates, that would only speed up UAI progress by a modest factor relative to overall economic growth.

Third, just because some researchers publish many more academic papers than others doesn’t at all mean that young copies of those researchers assigned to other research areas would have published similarly. Ex ante expected researcher quality varies a lot less than ex post observed research publications. Yes, people often vary by larger factors in their ability to do pure math, relative to other abilities, but pure math contributes only a small fraction to overall innovation.

Fourth, it is well known that most innovation doesn’t come from formal research, and that innovations in different areas help each other. Economists have strong general reasons to expect diminishing returns to useful innovation from adding more researchers. Yes, if you double the number of researchers in one area you’ll probably get twice as many research papers in that area, but that is very different from getting twice as much useful progress.

As I mention in my book, in some cases we’ve actually measured how research progress varies with the number of researchers, and it looks more like a square root dependence. In addition, if innovation rates were linear in the number of formal researchers, then given the tiny fraction of such researchers today we’d have to be vastly underinvesting in them, and so nations who invest more in formal research should expect to see much higher rates of economic growth. Yet we don’t actually see much of a relation between economic growth and spending on formal research. (Yes studies vary, so there could be a modest, but not a huge, effect.)
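
As a toy illustration of that square-root dependence (my own sketch; the proportionality constant is arbitrary):

```python
# If research progress scales roughly with the square root of researcher count,
# doubling researchers buys only ~41% more progress, and doubling progress
# takes roughly four times as many researchers.
def progress(researchers, k=1.0):
    return k * researchers ** 0.5

print(progress(2000) / progress(1000))  # ~1.41
print(progress(4000) / progress(1000))  # 2.0
```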

So, in sum, we should expect that useful UAI innovation doesn’t mostly come from formal research, and so doubling the number of UAI researchers, or doubling their speed, doesn’t remotely double useful innovation. We aren’t vastly underinvesting in formal research, and so future parties can’t expect to achieve huge gains by making a huge new investment there. We can expect to see modest gain in UAI innovation, relative to today and to other innovation areas, from an ability to inspect and experiment with ems, and from not being very limited by physical process rates. But these give less than a factor of two, and we should see a factor of two in the other direction from slowing hardware gains and from innovation mattering less for economic growth.

Thus we should expect many doublings of the em era after ems and before human level UAI, resulting in many centuries of subjective cultural change for typical ems. Giving an em era that is long enough to be worth considering. If you want to study whatever comes after the em era, understanding the em era should help.


AI As Software Grant

While I’ve been part of grants before, and had research support, I’ve never had support for my futurist work, including the years I spent writing Age of Em. That now changes:

The Open Philanthropy Project awarded a grant of $264,525 over three years to Robin Hanson (Associate Professor of Economics, George Mason University) to analyze potential scenarios in the future development of artificial intelligence (AI). Professor Hanson plans to focus on scenarios in which AI is developed through the steady accumulation of individual pieces of software and leads to a “multipolar” outcome. .. This grant falls within our work on potential risks from advanced artificial intelligence, one of our focus areas within global catastrophic risks. (more)

Who is Open Philanthropy? From their summary:

Good Ventures is a philanthropic foundation whose mission is to help humanity thrive. Good Ventures was created by Dustin Moskovitz (co-founder of Facebook and Asana) and Cari Tuna, who have pledged to give the majority of their wealth to charity. .. GiveWell is a nonprofit that finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give. .. The Open Philanthropy Project is a collaboration between Good Ventures and GiveWell in which we identify outstanding giving opportunities, make grants, follow the results, and publish our findings.

A key paragraph from my proposal:

Robin Hanson proposes to take three years to conduct a broad positive analysis of the multipolar scenario wherein AI results from relatively steady accumulation of software tools. That is, he proposes to assume that human level AI will result mainly from the continued accumulation of software tools and packages, with distributions of cost and value correlations similar to those seen so far in software practice, in an environment where no one actor dominates the process of creating or fielding such software. He will attempt a mostly positive analysis of the social consequences of these assumptions, both during and after a transition to a world dominated by AI. While this is hardly the universe of all desired analyses, it does seem to cover a non-trivial fraction of interesting cases.

They and I both see value in such an analysis even if AI software ends up differing systematically from the software we’ve seen so far:

While we do not believe that the class of scenarios that Professor Hanson will be analyzing is necessarily the most likely way for future AI development to play out, we expect his research to contribute a significant amount of useful data collection and analysis that might be valuable to our thinking about AI more generally, as well as provide a model for other people to follow when performing similar analyses of other AI scenarios of interest.

My idea is to extract from our decades of experience with software a more detailed description of the basic economics of software production and use. To distinguish, as time allows, many different kinds of inputs to production, styles of production, parts of produced products, and types of uses. And then to sketch out different rough “production functions” appropriate to different cases. That is, to begin to translate basic software engineering insight into economics language.

The simple assumption that software doesn’t fundamentally change in the future is the baseline scenario, to be fed into standard economic models to see what happens when such a more richly described software sector slowly grows to take over the economy. But a richer more detailed description of software economics can also give people a vocabulary for describing their alternative hypotheses about how software will change. And then this analysis framework can be adjusted to explore such alternative hypotheses.
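
As a hypothetical illustration of what such a “production function” sketch might look like (the functional form, parameter values, and “rot” rate below are my own stand-ins, not the grant’s actual model):

```python
# A rough software-sector production function: output from programmer labor
# plus an accumulated stock of reusable tools, where the tool stock slowly
# "rots" as platforms and requirements shift and must be refreshed.
def software_output(labor, tool_stock, alpha=0.6):
    """Cobb-Douglas style: output = labor**alpha * tools**(1 - alpha)."""
    return labor ** alpha * tool_stock ** (1 - alpha)

def next_tool_stock(tool_stock, new_tools, rot_rate=0.1):
    """Tools accumulate, minus a fraction lost each period to software rot."""
    return (1 - rot_rate) * tool_stock + new_tools

stock = 100.0
for year in range(3):
    print(year, round(software_output(labor=50, tool_stock=stock), 1))
    stock = next_tool_stock(stock, new_tools=20)
```

Alternative hypotheses about future AI software could then be expressed as changes to such parameters, for example a much lower rot rate or much stronger returns to the tool stock.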

So right from the start I’d like to offer this challenge:

Do you believe that the software that will let machines eventually do pretty much all jobs better than humans (or ems) will differ in foreseeable systematic ways from the software we have seen in the last seventy years of software practice? If so, please express your difference hypothesis as clearly as possible in terminology that would be understandable and familiar to software engineers and/or economists.

I will try to stretch the economic descriptions of software that I develop in the direction of encompassing the most common such hypotheses I find.
