Tag Archives: AI

Foom Update

To extend our reach, we humans have built tools, machines, firms, and nations. And as these are powerful, we try to maintain control of them. But as efforts to control them usually depend on their details, we have usually waited to think about how to control them until we had concrete examples in front of us. In the year 1000, for example, there wasn’t much we could do to usefully think about how to control most things that have only appeared in the last two centuries, such as cars or international courts.

Someday we will have far more powerful computer tools, including “advanced artificial general intelligence” (AAGI), i.e., with capabilities even higher and broader than those of individual human brains today. And some people today spend substantial effort worrying about how we will control these future tools. Their most common argument for this unusual strategy is “foom”.

That is, they postulate a single future computer system, initially quite weak and fully controlled by its human sponsors, but capable of action in the world and with general values to drive such action. Then over a short time (days to weeks) this system dramatically improves (i.e., “fooms”) to become an AAGI far more capable even than the sum total of all then-current humans and computer systems. This happens via a process of self-reflection and self-modification, and this self-modification also produces large and unpredictable changes to its effective values. They seek to delay this event until they can find a way to prevent such dangerous “value drift”, and to persuade those who might initiate such an event to use that method.

I’ve argued at length (1 2 3 4 5 6 7) against the plausibility of this scenario. It’s not that it’s impossible, or that no one should work on it, but that far too many take it as a default future scenario. But I haven’t written on it for many years now, so perhaps it is time for an update. Recently we have seen noteworthy progress in AI system demos (if not yet commercial application), and some have urged me to update my views as a result.

The recent systems have used relatively simple architectures and basic algorithms to produce models with enormous numbers of parameters from very large datasets. Compared to prior systems, these systems have produced impressive performance on an impressively wide range of tasks, even though they are still quite far from displacing humans in any substantial fraction of their current tasks.

For the purpose of reconsidering foom, however, the key things to notice are: (1) these systems have kept their values quite simple and very separate from the rest of the system, and (2) they have done basically zero self-reflection or self-improvement. As I see AAGI as still a long way off, the features of these recent systems can only offer weak evidence regarding the features of AAGI.

Even so, recent developments offer little support for the hypothesis that AAGI will be created soon via the process of self-reflection and self-improvement, or for the hypothesis that such a process risks large “value drifts”. The ways that we are now moving toward AAGI just don’t look much like the foom scenario. And I don’t see them as saying much about whether ems or AAGI will appear first.

Again, I’m not saying foom is impossible, just that it looks unlikely, and that recent events haven’t made it seem more so.

These new systems do suggest a substantial influence of architecture on system performance, though not obviously at a level out of line with that in most prior AI systems. And note that the abilities of the very best systems here are not that much better than those of the 2nd and 3rd best systems, arguing weakly against AAGI scenarios where the best system is vastly better.

AI Language Progress

Brains first evolved to do concrete mental tasks, like chasing prey. Then language evolved, to let brains think together, such as on how to chase prey together. Words are how we share thoughts.

So we think a bit, say some words, they think a bit, they say some words, and so on. Each time we hear some words we update our mental model of their thoughts, which also updates us about the larger world. Then we think some more, drawing more conclusions about the world, and seek words that, when said, help them to draw similar conclusions. Along the way, mostly as a matter of habit, we judge each other’s ability to think and talk. Sometimes we explicitly ask questions, or assign small tasks, which we expect to be especially diagnostic of relevant abilities in some area.

The degree to which such small task performance is diagnostic of abilities re the more fundamental human task of thinking together varies a lot. It depends, in part, on how much people are rewarded merely for passing those tests, and how much time and effort they can focus on learning to pass tests. We teachers are quite familiar with such “teaching to the test”, and it is often a big problem. There are many topics that we don’t teach much because we see that we just don’t have good small test tasks. And arguably schools fail most of the time; they pretend to teach many things but mostly just rank students on general abilities to learn to pass tests, and on inclinations to do what they are told, abilities which can predict job performance.

Which brings us to the topic of recent progress in machine learning. Google just announced its PaLM system, which fit 540 billion parameters to a “high-quality corpus of 780 billion tokens that represent a wide range of natural language use cases”, in order to predict from past words the next words appropriate for a wide range of small language tasks. Its performance is impressive; it does well compared to humans on a wide range of such tasks. And yet it still basically “babbles”; it seems not remotely up to the task of thinking together with a human. If you talked with it for a long time, you might well find ways that it could help you. But still, it wouldn’t think with you.

Maybe this problem will be solved by just adding more parameters and data. But I doubt it. I expect that a bigger problem is that such systems have been training on these small language tasks, instead of on the more fundamental task of thinking together. Yes, most of the language data on which they are built is from conversations where humans were thinking together. So they can learn well to say the next small thing in such a conversation. But they seem to be failing to infer the deeper structures that support shared thinking among humans.

It might help to assign such a system the task of “useful inner monologue”. That is, it would start talking to itself, and keep talking indefinitely, continually updating its representations from the data of its internal monologue. The trick would be to generate these monologues and do this update so that the resulting system got better at doing other useful tasks. (I don’t know how to arrange this.) While versions of this approach have been tried before, the fact that this isn’t the usual approach suggests that it doesn’t now produce gains as fast, at least for doing these small language tasks. Even so, if those are misleading metrics, this approach might help more to get real progress at artificial thinking.
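
To make the shape of this idea concrete, here is a minimal sketch of such a self-talk loop in Python. Everything in it is a hypothetical placeholder, not a real training recipe, and the genuinely hard part, making the updates from the monologue actually improve performance on other useful tasks, is left as a stub.

  # Illustrative sketch only: a model talks to itself indefinitely and (somehow)
  # updates its representations from that monologue. All components here are
  # hypothetical stubs; the update step is exactly the part no one knows how to do.
  import random

  class MonologueModel:
      """Stand-in for a language model that can continue text and be updated."""

      def __init__(self):
          self.memory = []  # crude stand-in for learned representations

      def continue_monologue(self, prompt):
          # A real system would generate the next stretch of inner speech here.
          return prompt + " ... next thought #%d" % len(self.memory)

      def update_from(self, monologue):
          # The unsolved part: update representations so that performance on
          # *other* useful tasks improves, not just monologue prediction.
          self.memory.append(monologue)

      def score_on_external_tasks(self):
          # Stand-in for evaluating the model on useful tasks outside the monologue.
          return random.random()

  def run_inner_monologue(model, steps=5):
      thought = "What should I work on next?"
      for _ in range(steps):
          thought = model.continue_monologue(thought)
          model.update_from(thought)
      return model.score_on_external_tasks()

  if __name__ == "__main__":
      print("external-task score:", run_inner_monologue(MonologueModel()))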

I will sit up and take notice when the main improvements to systems with impressive broad language abilities come from such inner monologues, or from thinking together on other useful tasks. That will look more like systems that have learned how to think. And when such abilities work across a wide scope of topics, that will look to me more like the proverbial “artificial general intelligence”. But I still don’t expect to see that for a long time. We see progress, but the road ahead is still quite long.

Russell’s Human Compatible

My school turned its mail system back on as we start a new semester, and a few days ago out popped Stuart Russell’s book Human Compatible (published last Oct.), with a note inside dated March 31. Here’s my review, a bit late as a result.

Let me focus first on what I see as its core thesis, and then discuss less central claims.

Russell seems to say that we still have a lot of time, and that he’s only asking for a few people to look into the problem:

The arrival of superintelligent AI is inherently unpredictable. … My timeline of, say, eighty years is considerably more conservative than that of the typical AI researcher. … If just one conceptual breakthrough were needed, … superintelligent AI in some form could arrive quite suddenly. The chances are that we would be unprepared: if we built superintelligent machines with any degree of autonomy, we would soon find ourselves unable to control them. I’m, however, fairly confident that we have some breathing space because there are several major breakthroughs needed between here and superintelligence, not just one. (pp.77-78)

Scott Alexander … summed it up brilliantly: … The skeptic’s position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research. The “believers,” meanwhile [take exactly the same position.] (pp.169-170)

Yet his ask is actually much larger: unless we all want to die, AI and related disciplines must soon adopt a huge and expensive change to their standard approach: we must stop optimizing using simple fixed objectives, like the way a GPS tries to minimize travel time, or a trading program tries to maximize profits. Instead we must make systems that attempt to look at all the data on what all humans have ever done to infer a complex continually-updated integrated representation of all human preferences (and meta-preferences) over everything, and use that complex representation to make all automated decisions. Modularity be damned: Continue reading "Russell’s Human Compatible" »

No Recent Automation Revolution

Unless you’ve been living under a rock, you know that for many years the media has been almost screaming that we are entering a big automation revolution, with huge associated job losses, due to new AI tech, especially deep learning. The media has cited many “experts” making such claims, most every management consulting firm has felt compelled to issue a related report, and the subject came up in the Democratic US presidential debates.

Last December, Keller Scholl and I posted a working paper suggesting that this whole narrative is bullshit, at least so far. An automation revolution driven by a new kind of automation tech should induce changes in the total amount and rate of automation, and in which kinds of jobs get more automated. But looking at all U.S. jobs 1999-2019, we find no change whatsoever in the kinds of jobs more likely to be automated. We don’t even see a net change in overall level of automation, though language habits may be masking such changes. And having a job get more automated is not correlated at all with changes in its pay or employment. (There may be effects in narrow categories, like jobs that use robots, but nothing visible at the overall level of all automation.)

Two metrics created by groups trying to predict which jobs will get automated soon did predict past automation, but not after we included 25 mundane job features like Pace Determined By Speed Of Equipment and Importance of Repeating Same Tasks, which together predict over half of the variance in job automation. The main change over the last two decades may be that job tasks have gradually become more suitable for automation, because nearby tasks have become automated.
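
To show the shape of this analysis (not its actual data or results), here is a minimal sketch in Python using synthetic data: regress reported automation on a bundle of mundane job features, and check how much wages and employment add once those features are included. The feature values below are random stand-ins, not O*NET data.

  # Minimal sketch on synthetic data (not the paper's actual O*NET data):
  # regress reported automation levels on mundane job features, then ask
  # whether wages and employment add predictive power beyond those features.
  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(0)
  n_jobs = 832  # number of U.S. job types in the study

  # Random stand-ins for 25 features like Pace Determined By Speed Of Equipment.
  features = rng.normal(size=(n_jobs, 25))
  automation = features @ rng.normal(size=25) + rng.normal(scale=3.0, size=n_jobs)
  log_wage = rng.normal(size=n_jobs)
  log_employment = rng.normal(size=n_jobs)

  # Model 1: wages and employment only.
  m1 = sm.OLS(automation, sm.add_constant(np.column_stack([log_wage, log_employment]))).fit()
  # Model 2: add the 25 job features.
  m2 = sm.OLS(automation, sm.add_constant(np.column_stack([log_wage, log_employment, features]))).fit()

  print("R^2, wages + employment only:", round(m1.rsquared, 3))
  print("R^2, plus 25 job features:  ", round(m2.rsquared, 3))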

Our paper has so far received zero media attention, even though it contradicts a lot of quite high visibility media hype, which continues on at the same rate. It has now been officially published in a respected peer reviewed journal: Economics Letters. Will that induce more media coverage? Probably not, as most of those other papers got media attention before they were peer reviewed. The pattern seems to be that hype gets covered, while contradictory deflations of hype do not. Unless of course the deflation comes from someone prestigious enough.

For Economics Letters we had to greatly compress the paper. Here is the new 40 word abstract:

Wages and employment predict automation in 832 U.S. jobs, 1999 to 2019, but add little to top 25 O*NET job features, whose best predictive model did not change over this period. Automation changes predict changes in neither wages nor employment.

And Highlights:

  • 25 simple job features explain over half the variance in which jobs are how automated.
  • The strongest job automation predictor is: Pace Determined By Speed Of Equipment.
  • Which job features predict job automation how did not change from 1999 to 2019.
  • Jobs that get more automated do not on average change in pay or employment.
  • Labor markets change more often due to changes in demand, relative to supply.

Automation: So Far, Business As Usual

Since at least 2013, many have claimed that we are entering a big automation revolution, and so should soon expect to see large trend-deviating increases in job automation levels, in related job losses, and in patterns of which jobs are more automated.

For example, in the October 15 Democratic debate between 12 U.S. presidential candidates, 6 of them addressed automation concerns introduced via this moderator’s statement:

According to a recent study, about a quarter of American jobs could be lost to automation in just the next ten years.

Most revolutions do not appear suddenly or fully-formed, but instead grow from precursor trends. Thus we might hope to test this claim of an automation revolution via a broad study of recent automation.

My coauthor Keller Scholl and I have just released such a study. We use data on 1505 expert reports regarding the degree of automation of 832 U.S. job types over the period 1999-2019, and similar reports on 153 other job features, to try to address these questions:

  1. Is automation predicted by two features suggested by basic theory: pay and employment?
  2. Do expert judgements on which particular jobs are vulnerable to future automation predict which jobs were how automated in the recent past?
  3. How well can we predict each job’s recent degree of automation from all available features?
  4. Have the predictors of job automation changed noticeably over the last two decades?
  5. On average, how much have levels of job automation changed in the last two decades?
  6. Do changes in job automation over the last two decades predict changes in pay or employment for those jobs?
  7. Do other features, when interacted with automation, predict changes in pay or employment?
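
As a rough sketch of how question 6 translates into a regression, here is a short Python fragment on made-up data (the study’s actual data is not reproduced here): regress changes in pay and in employment on changes in automation over the period.

  # Illustrative sketch with made-up data: do changes in a job's automation
  # level predict changes in its pay or employment? (Question 6 above.)
  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(1)
  n_jobs = 832
  d_automation = rng.normal(size=n_jobs)    # change in reported automation, 1999-2019
  d_log_wage = rng.normal(size=n_jobs)      # change in log wage over the same period
  d_log_employment = rng.normal(size=n_jobs)

  X = sm.add_constant(d_automation)
  print(sm.OLS(d_log_wage, X).fit().params)        # coefficient on automation change
  print(sm.OLS(d_log_employment, X).fit().params)  # coefficient on automation change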

Bottom line: we see no signs of an automation revolution. From our paper‘s conclusion:

We find that both wages and employment predict automation in the direction predicted by simple theory. We also find that expert judgements on which jobs are more vulnerable to future automation predict which jobs have been how automated recently. Controlling for such factors, education does not seem to predict automation.

However, aside perhaps from education, these factors no longer help predict automation when we add (interpolated extensions of) the top 25 O*NET variables, which together predict over half the variance in reported automation. The strongest O*NET predictor is Pace Determined By Speed Of Equipment and most predictors seem understandable in terms of traditional mechanical styles of job automation.

We see no significant change over our time period in the average reported automation levels, or in which factors best predict those levels. However, we can’t exclude the possibility of drifting standards in expert reports; if so, automation may have increased greatly during this period. The main change that we can see is that job factors have become significantly more suitable for automation, by enough to raise automation by roughly one third of a standard deviation.

Changes in pay and employment tend to predict each other, suggesting that labor market changes tend more to be demand instead of supply changes. These changes seem weaker when automation increases. Changes in job automation do not predict changes in pay or employment; the only significant term out of six suggests that employment increases with more automation. Falling labor demand correlates with rising job education levels.

None of these results seem to offer much support for claims that we are in the midst of a trend-deviating revolution in levels of job automation, related job losses, or in the factors that predict job automation. If such a revolution has begun, it has not yet noticeably influenced this sort of data, though continued tracking of such data may later reveal such a revolution. Our results also offer little support for claims that a trend-deviating increase in automation would be accompanied by large net declines in pay or employment. Instead, we estimate that more automation mainly predicts weaker demand, relative to supply, fluctuations in labor markets.

Why Age of Em Will Happen

In some technology competitions, winners dominate strongly. For example, while gravel may cover a lot of roads if we count by surface area, if we weigh by vehicle miles traveled then asphalt strongly dominates as a road material. Also, while some buildings are cooled via fans and very thick walls, the vast majority of buildings in rich and hot places use air-conditioning. In addition, current versions of software systems also tend to dominate over older versions. (E.g., Windows 10 over Windows 8.)

However, in many other technology competitions, older technologies remain widely used over long periods. Cities were invented ten thousand years ago, yet today only about half of the population lives in them. Cars, trains, boats, and planes have taken over much transportation, yet we still do plenty of walking. Steel has replaced wood in many structures, yet wood is still widely used. Fur, wool, and cotton aren’t used as often as they once were, but they are still quite common as clothing materials. E-books are now quite popular, but paper book sales are still growing.

Whether or not an old tech still retains wide areas of substantial use depends on the average advantage of the new tech, relative to the variation of that advantage across the environments where these techs are used, and the variation within each tech category. All else equal, the wider the range of environments, and the more diverse is each tech category, the longer that old tech should remain in wide use.
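
To make this concrete, here is a minimal numerical sketch, assuming (purely for illustration) that the new tech’s advantage over the old is normally distributed across environments: the old tech survives wherever that advantage happens to be negative, and its surviving share shrinks as the average advantage grows relative to its spread.

  # Minimal sketch, assuming a normal distribution of the new tech's advantage
  # across environments (an illustrative assumption, not a claim from the post).
  # The old tech stays in use where the advantage is negative.
  from statistics import NormalDist

  spread = 1.0
  for mean_advantage in (0.25, 0.5, 1.0, 2.0, 4.0):
      old_share = NormalDist(mu=mean_advantage, sigma=spread).cdf(0.0)
      print(f"mean advantage = {mean_advantage:>4} x spread: "
            f"old tech keeps {old_share:.1%} of environments")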

For example, compare the set of techs that start with the letter A (like asphalt) to the set that start with the letter G (like gravel). As these are relatively arbitrary sets that do not “cut nature at its joints”, there is wide diversity within each category, and each set is all applied to a wide range of environments. This makes it quite unlikely that one of these sets will strongly dominate the other.

Note that techs that tend to dominate strongly, like asphalt, air-conditioning, and new software versions, more often appear as a lumpy change, e.g., all at once, rather than via a slow accumulation of many changes. That is, they more often result from one or a few key innovations, and have some simple essential commonality. In contrast, techs that have more internal variety and structure tend more to result from the accumulation of more smaller innovations.

Now consider the competition between humans and computers for mental work. Today human brains earn more than half of world income, far more than the costs of computer hardware and software. But over time, artificial hardware and software have been improving, and slowly commanding larger fractions. Eventually this could become a majority. And a key question is then: how quickly might computers come to dominate overwhelmingly, doing virtually all mental work?

On the one hand, the ranges here are truly enormous. We are talking about all mental work, which covers a very wide range of environments. And not only do humans vary widely in abilities and inclinations, but computer systems seem to encompass an even wider range of designs and approaches. And many of these are quite complex systems. These facts together suggest that the older tech of human brains could last quite a long time (relative of course to relevant timescales) after computers came to do the majority of tasks (weighted by income), and that the change over that period could be relatively gradual.

For an analogy, consider the space of all possible non-mental work. While machines have surely been displacing humans for a long time in this area, we still do many important tasks “by hand”, and overall change has been pretty steady for a long time period. This change looked nothing like a single “general” machine taking over all the non-mental tasks all at once.

On the other hand, human minds are today stuck in old bio hardware that isn’t improving much, while artificial computer hardware has long been improving rapidly. Both these states, of hardware being stuck and improving fast, have been relatively uniform within each category and across environments. As a result, this hardware advantage might plausibly overwhelm software variety to make humans quickly lose most everywhere.

However, eventually brain emulations (i.e. “ems”) should be possible, after which artificial software would no longer have a hardware advantage over brain software; they would both have access to the same hardware. (As ems are an all-or-nothing tech that quite closely substitutes for humans and yet can have a huge hardware advantage, ems should displace most all humans over a short period.) At that point, the broad variety of mental task environments, and of approaches to both artificial and em software, suggests that ems may well stay competitive on many job tasks, and that this status might last a long time, with change being gradual.

Note also that as ems should soon become much cheaper than humans, the introduction of ems should initially cause a big reversion, wherein ems take back many of the mental job tasks that humans had recently lost to computers.

In January I posted a theoretical account that adds to this expectation. It explains why we should expect brain software to be a marvel of integration and abstraction, relative to the stronger reliance on modularity that we see in artificial software, a reliance that allows those systems to be smaller and faster built, but also causes them to rot faster. This account suggests that for a long time it would take unrealistically large investments for artificial software to learn to be as good as brain software on the tasks where brains excel.

A contrary view often expressed is that at some point someone will “invent” AGI (= Artificial General Intelligence). Not that society will eventually have broadly capable and thus general systems as a result of the world economy slowly collecting many specific tools and abilities over a long time. But that instead a particular research team somewhere will discover one or a few key insights that allow that team to quickly create a system that can do most all mental tasks much better than all the other systems, both human and artificial, in the world at that moment. This insight might quickly spread to other teams, or it might be hoarded to give this team great relative power.

Yes, under this sort of scenario it becomes more plausible that artificial software will either quickly displace humans on most all jobs, or do the same to ems if they exist at the time. But it is this scenario that I have repeatedly argued is pretty crazy. (Not impossible, but crazy enough that only a small minority should assume or explore it.) While the lumpiness of innovation that we’ve seen so far in computer science has been modest and not out of line with most other research fields, this crazy view postulates an enormously lumpy innovation, far out of line with anything we’ve seen in a long while. We have no good reason to believe that such a thing is at all likely.

If we presume that no one team will ever invent AGI, it becomes far more plausible that there will still be plenty of job tasks for ems to do, whenever ems show up. Even if working ems only collect 10% of world income soon after ems appear, the scenario I laid out in my book Age of Em is still pretty relevant. That scenario is actually pretty robust to such variations. As a result of thinking about these considerations, I’m now much more confident that the Age of Em will happen.

In Age of Em, I said:

Conditional on my key assumptions, I expect at least 30 percent of future situations to be usefully informed by my analysis. Unconditionally, I expect at least 5 percent.

I now estimate an unconditional 80% chance of it being a useful guide, and so will happily take bets based on a 50-50 chance estimate. My claim is something like:

Within the first D econ doublings after ems are as cheap as the median human worker, there will be a period where >X% of world income is paid for em work. And during that period Age of Em will be a useful guide to that world.

Note that this analysis suggests that while the arrival of ems might cause a relatively sudden and disruptive transition, the improvement of other artificial software would likely be more gradual. While overall rates of growth and change should increase as a larger fraction of the means of production comes to be made in factories, the risk is low of a sudden AI advance relative to that overall rate of change. Those concerned about risks caused by AI changes can more reasonably wait until we see clearer signs of problems.

Chiang’s Exhalation

Ted Chiang’s new book Exhalation has received rave reviews. WSJ says “sci-fi for philosophers”, and the Post says “uniformly notable for a fusion of pure intellect and molten emotion.” The New Yorker says

Chiang spends a good deal of time describing the science behind the device, with an almost Rube Goldbergian delight in elucidating the improbable.

Vox says:

Chiang is thoughtful about the rules of his imagined technologies. They have the kind of precise, airtight internal logic that makes a tech geek shiver with happiness: When Chiang tells you that time travel works a certain way, he’ll always provide the scientific theory to back up what he’s written, and he will never, ever veer away from the laws he’s set for himself.

That is, they all seem to agree that Chiang is unusually realistic and careful in his analysis.

I enjoyed Exhalation, as I have Chiang’s previous work. But as none of the above reviews (nor any of 21 Amazon reviews) make the point, it apparently falls to me to say that this realism and care is limited to philosophy and “hard” science. Re social science, most of these stories are not realistic.

Perhaps Chiang is well aware of this; his priority may be to paint the most philosophically or morally dramatic scenarios, regardless of their social realism. But as reviewers seem to credit his stories with social realism, I feel I should speak up. To support my claims, I’m going to have to give “spoilers”; you are warned. Continue reading "Chiang’s Exhalation" »

Expand vs Fight in Social Justice, Fertility, Bioconservatism, & AI Risk

Most people talk too much about values relative to facts, as they care more about showing off their values than about learning facts. So I usually avoid talking values. But I’ll make an exception today for this value: expanding rather than fighting about possibilities.

Consider the following graph. On the x-axis you, or your group, get more of what you want. On the y-axis, others get more of what they want. (Of course each axis really represents a high dimensional space.) The blue region is a space of possibilities, the blue curve is the frontier of best possibilities, and the blue dot is the status quo, which happens if no one tries to change it.

In this graph, there are two basic ways to work to get more of what you want: move along the frontier (FIGHT), or expand it (EXPAND). While expanding the frontier helps both you and others, moving along the frontier helps you at others’ expense.

All else equal, I prefer expanding over fighting, and I want stronger norms for this. That is, I want our norms to, all else equal, more praise expansion and shame fighting. This isn’t to say I want all forms of fighting to be shamed, or shamed equally, or want all kinds of expansion to get equal praise. For example, it makes sense to support some level of “fighting back” in response to fights initiated by others. But on average, we should all expect to be better off when our efforts are directed more toward expanding than fighting. Fighting should be suspicious, and need justification, relative to expansion.

This distinction between expanding and fighting is central to standard economic analysis. We economists distinguish “efficiency improving” policies that expand possibilities from “redistribution” policies that take from some to give to others, and also from “rent-seeking” efforts that actually cut possibilities. Economists focus on promoting efficiency and discouraging rent-seeking. If we take positions on redistribution, we tend to call those “non-economic” positions.

We economists can imagine an ideal competitive market world. The world we live in is not such a world, at least not exactly, but it helps to see what would happen in such a world. In this ideal world, property rights are strong, we each own stuff, and we trade with each other to get more of what we want. The firms that exist are the ones that are most effective at turning inputs into desired outputs. The most cost-effective person is assigned to each job, and each customer buys from their most cost-effective supplier. Consumers, investors, and workers can make trades across time, and innovations happen at the most cost-effective moment.

In this ideal world, we maximize the space of possibilities by allowing all possible competition and all possible trades. In that case, all expansions are realized, and only fights remain. But in other more realistic worlds many “market failures” (and also government failures) pull back the frontier of possibilities. So we economists focus on finding actions and policies that can help fix such failures. And in some sense, I want everyone to share this pro-expansion anti-fight norm of economists.

Described in this abstract way, few may object to what I’ve said so far. But in fact most people find a lot more emotional energy in fights. Most people are quickly bored with proposals that purport to help everyone without helping any particular groups more than others. They get similarly bored with conversations framed as collecting and sharing relevant information. They instead get far more energized by efforts to help us win against them, including conversations framed as arguing with and even yelling at enemies. We actually tend to frame most politics and morality as fights, and we like it that way.

For example, much “social justice” energy is directed toward finding, outing, and “deplatforming” enemies. Yes, when social norms are efficient, enforcing such norms against violators can enhance efficiency. But our passions are nearly as strong when enforcing inefficient norms or norm-like agendas, just as crime dramas are nearly as exciting when depicting the enforcement of bad crime laws or non-law vendettas. Our energy comes from the fights, not some indirect social benefit resulting from such fights. And we find it way too easy to just presume that the goals of our social factions are very widely shared and efficient norms.

Consider fertility and education. Many people get quite energized on the topic of whether others are having too many or not enough kids, and on whether they are raising those kids correctly. We worry about which nations, religions, classes, intelligence levels, mental illness categories, or political allegiances are having more kids, or getting more kids to be educated or trained in their favored way. And we often seek government policies to push our favored outcomes. Such as sterilizing the mentally ill, or requiring schools to teach our favored ideologies.

But in an ideal competitive world, each family picks how many kids to have and how to raise them. If other people have too many kids and have trouble feeding them, that’s their problem, not yours. Same for if they choose to train their kids badly, or if those kids are mentally ill. Unless you can identify concrete and substantial market failures that tend to induce the choices you don’t like, and which are plausibly the actual reason for your concerns here, you should admit you are more likely engaged in fights, not in expansion efforts, when arguing on fertility and education.

And it isn’t enough to note that we are often inclined to supply medicine, education, or food collectively. If such collective actions are your main excuse for trying to control other folks’ related choices, maybe you should consider not supplying such things collectively. It also isn’t enough to note the possibility of meddling preferences, wherein you care directly about others’ choices. Not only is evidence of such preferences often weak, but meddling preferences don’t usually change the possibility frontier, and thus don’t change which policies are efficient. Beware the usual human bias to try to frame fighting efforts as more pro-social expansion efforts, and to make up market failure explanations in justification.

Consider bioconservatism. Some look forward to a future where they’ll be able to change the human body, adding extra senses, and modifying people to be smarter, stronger, more moral, and even immortal. Others are horrified by and want to prevent such changes, fearing that such “post-humans” would no longer be human, and seeing societies of such creatures as “repugnant” and having lost essential “dignities”. But again, unless you can identify concrete and substantial market failures that would result from such modifications, and that plausibly drive your concern, you should admit that you are engaged in a fight here.

It seems to me that the same critique applies to most current AI risk concerns. Back when my ex-co-blogger Eliezer Yudkowsky and I discussed his AI risk concerns here on this blog (concerns that got much wider attention via Nick Bostrom’s book), those concerns were plausibly about a huge market failure. Just as there’s an obvious market failure in letting someone experiment with nuclear weapons in their home basement near a crowded city (without holding sufficient liability insurance), there’d be an obvious market failure from letting a small AI team experiment with software that might, in a weekend, explode to become a superintelligence that enslaved or destroyed the world. While I see that scenario as pretty unlikely, I grant that it is a market failure scenario. Yudkowsky and Bostrom aren’t fighting there.

But when I read and talk to people today about AI risk, I mostly hear people worried about local failures to control local AIs, in a roughly competitive world full of many AI systems with reasonably strong property rights. In this sort of scenario, each person or firm that loses control of an AI would directly suffer from that loss, while others would suffer far less or not at all. Yet AI risk folks say that they fear that many or even most individuals won’t care enough to try hard enough to keep sufficient control of their AIs, or to prevent those AIs from letting their expressed priorities drift as contexts change over the long run. Even though such AI risk folks don’t point to particular market failures here. And even though such advanced AI systems are still a long ways off, and we’ll likely know a lot more about, and have plenty of time to deal with, AI control problems when such systems actually arrive.

Thus most current AI risk concerns sound to me a lot like fertility, education, and bioconservatism concerns. People say that it is not enough to control their own fertility, the education of their own kids, the modifications of their own bodies, and the control of their own AIs. They worry instead about what others may do with such choices, and seek ways to prevent the “risk” of others making bad choices. And in the absence of identified concrete and substantial market failures associated with such choices, I have to frame this as an urge to fight, instead of to expand the space of possibilities. And so according to the norms I favor, I’m suspicious of this activity, and not that eager to promote it.

How Lumpy AI Services?

Long ago people like Marx and Engels predicted that the familiar capitalist economy would naturally lead to the immiseration of workers, huge wealth inequality, and a strong concentration of firms. Each industry would be dominated by a main monopolist, and these monsters would merge into a few big firms that basically run, and ruin, everything. (This is somewhat analogous to common expectations that military conflicts naturally result in one empire ruling the world.)

Many intellectuals and ordinary people found such views quite plausible then, and still do; these are the concerns most often voiced to justify redistribution and regulation. Wealth inequality is said to be bad for social and political health, and big firms are said to be bad for the economy, workers, and consumers, especially if they are not loyal to our nation, or if they coordinate behind the scenes.

Note that many people seem much less concerned about an economy full of small firms populated by people of nearly equal wealth. Actions seem more visible in such a world, and better constrained by competition. With a few big privately-coordinating firms, in contrast, who knows what they could get up to, and they seem to have so many possible ways to screw us. Many people either want these big firms broken up, or heavily constrained by presumed-friendly regulators.

In the area of AI risk, many express great concern that the world may be taken over by a few big powerful AGI (artificial general intelligence) agents with opaque beliefs and values, who might arise suddenly via a fast local “foom” self-improvement process centered on one initially small system. I’ve argued in the past that such sudden local foom seems unlikely because innovation is rarely that lumpy.

In a new book-length technical report, Reframing Superintelligence: Comprehensive AI Services as General Intelligence, Eric Drexler makes a somewhat similar anti-lumpiness argument. But he talks about task lumpiness, not innovation lumpiness. Powerful AI is safer if it is broken into many specific services, often supplied by separate firms. The task that each service achieves has a narrow enough scope that there’s little risk of it taking over the world and killing everyone in order to achieve that task. In particular, the service of being competent at a task is separate from the service of learning how to become competent at that task. In Drexler’s words: Continue reading "How Lumpy AI Services?" »

How Does Brain Code Differ?

The Question

We humans have been writing “code” for many decades now, and as “software eats the world” we will write a lot more. In addition, we can also think of the structures within each human brain as “code”, code that will also shape the future.

Today the code in our heads (and bodies) is stuck there, but eventually we will find ways to move this code to artificial hardware. At which point we can create the world of brain emulations that is the subject of my first book, Age of Em. From that point on, these two categories of code, and their descendant variations, will have near equal access to artificial hardware, and so will compete on relatively equal terms to take on many code roles. System designers will have to choose which kind of code to use to control each particular system.

When designers choose between different types of code, they must ask themselves: which kinds of code are more cost-effective in which kinds of applications? In a competitive future world, the answer to this question may be the main factor that decides the fraction of resources devoted to running human-like minds. So to help us envision such a competitive future, we should also ask: where will different kinds of code work better? (Yes, non-competitive futures may be possible, but harder to arrange than many imagine.)

To think about which kinds of code win where, we need a basic theory that explains their key fundamental differences. You might have thought that much has been written on this, but alas I can’t find much. I do sometimes come across people who think it obvious that human brain code can’t possibly compete well anywhere, though they rarely explain their reasoning much. As this claim isn’t obvious to me, I’ve been trying to think about this key question of which kinds of code win where. In the following, I’ll outline what I’ve come up with. But I still hope someone will point me to useful analyses that I’ve missed.

In the following, I will first summarize a few simple differences between human brain code and other code, then offer a deeper account of these differences, then suggest an empirical test of this account, and finally consider what these differences suggest for which kinds of code will be more cost-effective where. Continue reading "How Does Brain Code Differ?" »
