Tag Archives: AI

AGI Is Sacred

Sacred things are especially valuable, sharply distinguished, and idealized as having less decay, messiness, inhomogeneities, or internal conflicts. We are not to mix the sacred (S) with the non-sacred (NS), nor to trade S for NS. Thus S should not have clear measures or money prices, and we shouldn’t enforce rules that promote NS at S expense.

We are to desire S “for itself”, understand S intuitively not cognitively, and not choose S based on explicit calculation or analysis. We didn’t make S; S made us. We are to trust “priests” of S, give them more self-rule and job tenure, and their differences from us don’t count as “inequality”. Objects, spaces, and times can become S by association. (More)

When we treat something as sacred, we acquire the predictably extreme related expectations and values characteristic of our concept of “sacred”. This biases us in the usual case where such extremes are unreasonable. (To minimize such biases, try math as sacred.)

For example, most ancient societies had a great many gods, with widely varying abilities, features, and inclinations. And different societies had different gods. But while the ancients treated these gods as pretty sacred, Christians (and Jews) upped the ante. They “knew” from their God’s recorded actions that he was pretty long-lasting, powerful, and benevolent. But they moved way beyond those “facts” to draw more extreme, and thus more sacred, conclusions about their God.

For example, Christians came to focus on a single uniquely perfect God: eternal, all-powerful, all-good, omnipresent, all-knowing (even re the future), all-wise, never-changing, without origin, self-sufficient, spirit-not-matter, never lies nor betrays trust, and perfectly loving, beautiful, gracious, kind, and pretty much any other good feature you can name. The direction, if not always the magnitude, of these changes is well predicted by our sacredness concept.

It seems to me that we’ve seen a similar process recently regarding artificial intelligence. I recall that, decades ago, the idea that we could make artificial devices who could do many of the kinds of tasks that humans do, even if not quite as well, was pretty sacred. It inspired much reverence, and respect for its priests. But just as Christians upped the ante regarding God, many recently have upped the AI ante, focusing on an even more sacred variation on AI, namely AGI: artificial general intelligence.

The default AI scenario, the one that most straightforwardly projected past trends into the future, would go as follows. Many kinds of AI systems would specialize in many different tasks, each built and managed by different orgs. There’d also be a great many AI systems of each type, controlled by competing organizations, of roughly comparable cost-effectiveness.

Overall, the abilities of these AI would improve at roughly steady rates, with rate variations similar to what we’ve seen over the last seventy years. Individual AI systems would be introduced, rise in influence for a time, and then decline in influence, as they rot and become obsolete relative to rivals. AI systems wouldn’t work equally well with all other systems, but would instead have varying degrees of compatibility and integration.

The fraction of GDP paid for such systems would increase over time, and this would likely lead to econ growth rate increases, perhaps very large ones. Eventually many AI systems would reach human level on many tasks, but then continue to improve. Different kinds of system abilities would reach human level at different times. Even after this point, most all AI activity would be doing relatively narrow tasks.

The upped-ante version of AI, namely AGI, instead changes this scenario in the direction of making it more sacred. Compared to AI, AGI is idealized, sharply distinguished from other AI, and associated with extreme values. For example:

1) Few discussions of AGI distinguish different types of them. Instead, there is usually just one unspecialized type of AGI, assumed to be at least as good as humans at absolutely everything.

2) AGI is not a name (like “economy” or “nation”) for a diverse collection of tools run by different orgs, tools which can all in principle be combined, but not always easily. An AGI is instead seen as a highly integrated system, fully and flexibly able to apply any subset of its tools to any problem, without substantial barriers such as ownership conflicts, different representations, or incompatible standards.

3) An AGI is usually seen as a consistent and coherent ideal decision agent. For example, its beliefs are assumed all consistent with each other, fully updated on all its available info, and its actions are all part of a single coherent long-term plan. Humans greatly deviate from this ideal.

4) Unlike most human organizations, and many individual humans, AGIs are assumed to have no internal conflicts, where different parts work at cross purposes, struggling for control over the whole. Instead, AGIs can last forever maintaining completely reliable internal discipline.

5) Today virtually all known large software systems rot. That is, as they are changed to add features and adapt to outside changes, they gradually become harder to usefully modify, and are eventually discarded and replaced by new systems built from scratch. But an AGI is assumed to suffer no such rot. It can instead remain effective forever.

6) AGIs can change themselves internally without limit, and have sufficiently strong self-understanding to apply this ability usefully to all of their parts. This ability does not suffer from rot. Humans and human orgs are nothing like this.

7) AGIs are usually assumed to have a strong and sharp separation between a core “values” module and all their other parts. It is assumed that value tendencies are not in any way encoded into the other many complex and opaque modules of an AGI system. The values module can be made frozen and unchanging at no cost to performance, even in the long run, and in this way an AGI’s values can stay constant forever.

8) AGIs are often assumed to be very skilled, even perfect, at cooperating with each other. Some say that is because they can show each other their read-only values modules. In this case, AGI value modules are assumed to be small, simple, and standardized enough to be read and understood by other AGIs.

9) Many analyses assume there is only one AGI in existence, with all other humans and artificial systems at the time being vastly inferior. In fact this AGI is sometimes said to be more capable than the entire rest of the world put together. Some justify this by saying multiple AGIs cooperate so well as to be in effect a single AGI.

10) AGIs are often assumed to have unlimited powers of persuasion. They can convince humans, other AIs, and organizations of pretty much any claim, even claims that would seem to be strongly contrary to their interests, and even if those entities are initially quite wary and skeptical of the AGI, and have AI advisors.

11) AGIs are often assumed to have unlimited powers of deception. They could pretend to have one set of values while really having a completely different set, completely fooling the humans and orgs that developed them ever since they grew up from a “baby” AI, even when those humans and orgs had AI advisors. This super power of deception apparently applies only to humans and their organizations, but not to other AGIs.

12) Many analyses assume a “foom” scenario wherein this single AGI in existence evolves very quickly, suddenly, and with little warning out of far less advanced AIs who were evolving far more slowly. This evolution is so fast as to prevent the use of trial and error to find and fix its problematic aspects.

13) The possible sudden appearance, in the not-near future, of such a unique powerful perfect creature, is seen by many as an event containing overwhelming value leverage, for good or ill. To many, trying to influence this event is our most important and praiseworthy action, and its priests are the most important people to revere.

I hope you can see how these AGI idealizations and values follow pretty naturally from our concept of the sacred. Just as that concept predicts the changes that religious folks seeking a more sacred God made to their God, it also predicts that AI fans seeking a more sacred AI would change it in these directions, toward this sort of version of AGI.

I’m rather skeptical that actual future AI systems, even distant future advanced ones, are well thought of as having this package of extreme idealized features. The default AI scenario I sketched above makes more sense to me.

Added 7a: In the above I’m listing assumptions commonly made about AGI, not just applying a particular definition of AGI.

Why Not Wait On AI Risk?

Years ago when the AI risk conversation was just starting, I was a relative skeptic, but I was part of the conversation. Since then, the conversation has become much larger, but I seem no longer part of it; it seems years since others in this convo engaged me on it.

Clearly most who write on this do not sit close to my views, though I may sit closer to the views of most who’ve considered getting into this topic but instead found better things to do. (Far more resources are available to support advocates than skeptics.) So yes, I may be missing something that they all get. Furthermore, I’ve admittedly only read a small fraction of the huge amount since written in this area. Even so, I feel I should periodically try again to explain my reasoning, and ask others to please help show me what I’m missing.

The future AI scenario that treats “AI” most like prior wide tech categories (e.g., “energy” or “transport”) goes as follows. AI systems are available from many competing suppliers at similar prices, and their similar abilities increase gradually over time. Abilities don’t increase faster than customers can usefully apply them. Problems are mostly dealt with as they appear, instead of anticipated far in advance. Such systems slowly displace humans on specific tasks, and are on average roughly as task specialized as humans are now. AI firms distinguish themselves via the different tasks their systems do.

The places and groups who adopt such systems first are those flexible and rich enough to afford them, and having other complementary capital. Those who invest in AI capital on average gain from their investments. Those who invested in displaced capital may lose, though over the last two decades workers at more automated jobs have not seen any average effect on their wages or number of workers. AI today is only a rather minor contribution to our economy (<5%), and it has quite a long way to go before it can make a large contribution. We today have only vague ideas of what AIs that made a much larger contribution would look like.

Today most of the ways that humans help and harm each other are via our relations. Such as: customer-supplier, employer-employee, citizen-politician, defendant-plaintiff, friend-friend, parent-child, lover-lover, victim-criminal-police-prosecutor-judge, army-army, slave-owner, and competitors. So as AIs replace humans in these roles, the main ways that AIs help and hurt humans are likely to also be via these roles.

Our usual story is that such hurt is limited by competition. For example, each army is limited by all the other armies that might oppose it. And your employer and landlord are limited in exploiting you by your option to switch to other employers and landlords. So unless AI makes such competition much less effective at limiting harms, it is hard to see how AI makes role-mediated harms worse. Sure, smart AIs might be smarter than humans, but they will have other AI competitors, and humans will have AI advisors. Humans don’t seem to have become much worse off over the last few centuries as firms and governments far more intelligent than individual humans have taken over many roles.

AI risk folks are especially concerned with losing control over AIs. But consider, for example, an AI hired by a taxi firm to do its scheduling. If such an AI stopped scheduling passengers to be picked up where they waited and delivered to where they wanted to go, the firm would notice quickly, and could then fire and replace this AI. But what if an AI who ran such a firm became unresponsive to its investors? Or an AI who ran an army became unresponsive to its oversight government? In both cases, while such investors or governments might be able to cut off some outside supplies of resources, the AI might do substantial damage before such cutoffs bled it dry.

However, our world today is well acquainted with the prospect of “coups” wherein firm or army management becomes unresponsive to its relevant owners. Not only do our usual methods usually seem sufficient to the task, we don’t see much of an externality re these problems. You try to keep your firm under control, and I try to keep mine, but I’m not especially threatened by your losing control of yours. We care a bit more about others losing control of their cars, planes, or nuclear power plants, as those might hurt bystanders. But we care much less once such others show us sufficient liability, and liability insurance, to cover our losses in these cases.

I don’t see why I should be much more worried about your losing control of your firm, or army, to an AI than to a human or group of humans. And liability insurance also seems a sufficient answer to your possibly losing control of an AI driving your car or plane. Furthermore, I don’t see why it’s worth putting much effort into planning how to control AIs far in advance of seeing much detail about how AIs actually do concrete tasks where loss of control matters. Knowing such detail has usually been the key to controlling past systems, and money invested now, instead of spent on analysis now, gives us far more money to spend on analysis later.

All of the above has been based on assuming that AI will be similar to past techs in how it diffuses and advances. Some say that AI might be different, just because, hey, anything might be different. Others, like my ex-co-blogger Eliezer Yudkowsky, and Nick Bostrom in his book Superintelligence, say more about why they expect advances at the scope of AGI to be far more lumpy than we’ve seen for most techs.

Yudkowsky paints a “foom” picture of a world full of familiar weak stupid slowly improving computers, until suddenly and unexpectedly a single super-smart un-controlled AGI with very powerful general abilities appears and is able to decisively overwhelm all other powers on Earth. Alternatively, he claims (quite implausibly I think) that all AGIs naturally coordinate to merge into a single system to defeat competition-based checks and balances.

These folks seem to envision a few key discrete breakthrough insights that allow the first team that finds them to suddenly catapult their AI into abilities far beyond all other then-current systems. These would be big breakthroughs relative to the broad category of “mental tasks”, and thus even bigger than if we found big breakthroughs relative to the less broad tech categories of “energy”, “transport”, or “shelter”. Yes of course change is often lumpy if we look at small tech scopes, but lumpy local changes aggregate into smoother change over wider scopes.
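
To illustrate that last aggregation point, here is a minimal simulation sketch (my own toy illustration, with arbitrary assumed jump rates and sizes, not anything from the cited history): each narrow field advances only via rare large jumps, yet the aggregate across many fields advances far more smoothly.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fields, n_years = 100, 50

# Each narrow field advances only via rare, large "lumpy" jumps.
jump_happens = rng.random((n_fields, n_years)) < 0.05            # ~5% chance per year
jump_size = rng.exponential(scale=10.0, size=(n_fields, n_years))
field_progress = jump_happens * jump_size                         # yearly progress per field

# Aggregate yearly progress across the whole broad category.
aggregate = field_progress.sum(axis=0)

# Compare relative variability (coefficient of variation) of yearly progress.
cv_single = field_progress.std(axis=1).mean() / field_progress.mean()
cv_aggregate = aggregate.std() / aggregate.mean()

print(f"typical single-field CV: {cv_single:.2f}")    # very lumpy
print(f"aggregate CV:            {cv_aggregate:.2f}")  # much smoother
```

With these made-up numbers, the aggregate’s relative variability falls by roughly the square root of the number of independent fields, which is the sense in which lumpy local changes smooth out over wider scopes.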

As I’ve previously explained at length, that seems to me to postulate a quite unusual lumpiness relative to the history we’ve seen for innovation in general, and more particularly for tools, computers, AI, and even machine learning. And this seems to postulate much more of a lumpy conceptual essence to “betterness” than I find plausible. Recent machine learning systems today seem relatively close to each other in their abilities, are gradually improving, and none seem remotely inclined to mount a coup.

I don’t mind groups with small relative budgets exploring scenarios with proportionally small chances, but I lament such a large fraction of those willing to take the long term future seriously using this as their default AI scenario. And while I get why people like Yudkowsky focus on scenarios in which they fervently believe, I am honestly puzzled why so many AI risk experts seem to repudiate his extreme scenarios, and yet still see AI risk as a terribly important project to pursue right now. If AI isn’t unusually lumpy, then why are early efforts at AI control design especially valuable?

So far I’ve mentioned two widely expressed AI concerns. First, AIs may hurt human workers by displacing them, and second, AIs may start coups wherein they wrest control of some resources from their owners. A third widely expressed concern is that the world today may be stable, and contain value, only due to somewhat random and fragile configurations of culture, habits, beliefs, attitudes, institutions, values, etc. If so, our world may break if this stuff drifts out of a safe and stable range for such configurations. AI might be or facilitate such a change, and by helping to accelerate change, AI might accelerate the rate of configuration drift.

Similar concerns have often been expressed about allowing too many foreigners to immigrate into a society, or allowing the next youthful generation too much freedom to question and change inherited traditions. Or allowing many other specific transformative techs, like genetic engineering, fusion energy, social media, or space. Or other big social changes, like gay marriage.

Many have deep and reasonable fears regarding big long-term changes. And some seek to design AI so that it won’t allow excessive change. But this issue seems to me much more about change in general than about AI in particular. People focused on these concerns should be looking to stop or greatly limit and slow change in general, and not focus so much on AI. Big change can also happen without AI.

So what am I missing? Why would AI advances be so vastly more lumpy than prior tech advances as to justify very early control efforts? Or if not, why are AI risk efforts a priority now?

Foom Update

To extend our reach, we humans have built tools, machines, firms, and nations. And as these are powerful, we try to maintain control of them. But as efforts to control them usually depend on their details, we have usually waited to think about how to control them until we had concrete examples in front of us. In the year 1000, for example, there wasn’t much we could do to usefully think about how to control most things that have only appeared in the last two centuries, such as cars or international courts.

Someday we will have far more powerful computer tools, including “advanced artificial general intelligence” (AAGI), i.e., with capabilities even higher and broader than those of individual human brains today. And some people today spend substantial effort worrying about how we will control these future tools. Their most common argument for this unusual strategy is “foom”.

That is, they postulate a single future computer system, initially quite weak and fully controlled by its human sponsors, but capable of action in the world and with general values to drive such action. Then over a short time (days to weeks) this system dramatically improves (i.e., “fooms”) to become an AAGI far more capable even than the sum total of all then-current humans and computer systems. This happens via a process of self-reflection and self-modification, and this self-modification also produces large and unpredictable changes to its effective values. They seek to delay this event until they can find a way to prevent such dangerous “value drift”, and to persuade those who might initiate such an event to use that method.

I’ve argued at length (1 2 3 4 5 6 7) against the plausibility of this scenario. It’s not that it’s impossible, or that no one should work on it, but that far too many take it as a default future scenario. But I haven’t written on it for many years now, so perhaps it is time for an update. Recently we have seen noteworthy progress in AI system demos (if not yet commercial application), and some have urged me to update my views as a result.

The recent systems have used relatively simple architectures and basic algorithms to produce models with enormous numbers of parameters from very large datasets. Compared to prior systems, these systems have produced impressive performance on an impressively wide range of tasks, even though they are still quite far from displacing humans in any substantial fraction of their current tasks.

For the purpose of reconsidering foom, however, the key things to notice are: (1) these systems have kept their values quite simple and very separate from the rest of the system, and (2) they have done basically zero self-reflection or self-improvement. As I see AAGI as still a long way off, the features of these recent systems can only offer weak evidence regarding the features of AAGI.

Even so, recent developments offer little support for the hypothesis that AAGI will be created soon via the process of self-reflection and self-improvement, or for the hypothesis that such a process risks large “value drifts”. These current ways that we are now moving toward AAGI just don’t look much like the foom scenario. And I don’t see them as saying much about whether ems or AAGI will appear first.

Again, I’m not saying foom is impossible, just that it looks unlikely, and that recent events haven’t made it seem more so.

These new systems do suggest a substantial influence of architecture on system performance, though not obviously at a level out of line with that in most prior AI systems. And note that the abilities of the very best systems here are not that much better than those of the 2nd and 3rd best systems, arguing weakly against AAGI scenarios where the best system is vastly better.

AI Language Progress

Brains first evolved to do concrete mental tasks, like chasing prey. Then language evolved, to let brains think together, such as on how to chase prey together. Words are how we share thoughts.

So we think a bit, say some words, they think a bit, they say some words, and so on. Each time we hear some words we update our mental model of their thoughts, which also updates us about the larger world. Then we think some more, drawing more conclusions about the world, and seek words that, when said, help them to draw similar conclusions. Along the way, mostly as a matter of habit, we judge each other’s ability to think and talk. Sometimes we explicitly ask questions, or assign small tasks, which we expect to be especially diagnostic of relevant abilities in some area.

The degree to which such small task performance is diagnostic of abilities re the more fundamental human task of thinking together varies a lot. It depends, in part, on how much people are rewarded merely for passing those tests, and how much time and effort they can focus on learning to pass tests. We teachers are quite familiar with such “teaching to the test”, and it is often a big problem. There are many topics that we don’t teach much because we see that we just don’t have good small test tasks. And arguably schools actually fail most of the time; they pretend to teach many things but mostly just rank students on general abilities to learn to pass tests, and on inclinations to do what they are told; such abilities and inclinations can predict job performance.

Which brings us to the topic of recent progress in machine learning. Google just announced its PaLM system, which fit 540 billion parameters to a “high-quality corpus of 780 billion tokens that represent a wide range of natural language use cases”, in order to predict from past words the next words appropriate for a wide range of small language tasks. Its performance is impressive; it does well compared to humans on a wide range of such tasks. And yet it still basically “babbles”; it seems not remotely up to the task of thinking together with a human. If you talked with it for a long time, you might well find ways that it could help you. But still, it wouldn’t think with you.

Maybe this problem will be solved by just adding more parameters and data. But I doubt it. I expect that a bigger problem is that such systems have been training at these small language tasks, instead of at the more fundamental task of thinking together. Yes, most of the language data on which they are built is from conversations where humans were thinking together. So they can learn well to say the next small thing in such a conversation. But they seem to be failing to infer the deeper structures that support shared thinking among humans.

It might help to assign such a system the task of “useful inner monologue”. That is, it would start talking to itself, and keep talking indefinitely, continually updating its representations from the data of its internal monologue. The trick would be to generate these monologues and do this update so that the resulting system got better at doing other useful tasks. (I don’t know how to arrange this.) While versions of this approach have been tried before, the fact that this isn’t the usual approach suggests that it doesn’t now produce gains as fast, at least for doing these small language tasks. Even so, if those are misleading metrics, this approach might help more to get real progress at artificial thinking.
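
As a rough sketch of the kind of training loop being gestured at here (the `generate`, `update_on`, and `score_on_tasks` callables are hypothetical placeholders, not any existing API, and the unsolved part flagged above is precisely how to make the updates raise the task score):

```python
def inner_monologue_training(model, generate, update_on, score_on_tasks,
                             eval_tasks, n_steps=1000, check_every=100):
    """Hypothetical sketch: the model talks to itself indefinitely, continually
    updates on its own monologue, and is judged only by separate useful tasks."""
    best_score = score_on_tasks(model, eval_tasks)
    monologue = ""
    for step in range(n_steps):
        utterance = generate(model, monologue)               # keep talking to itself
        monologue = (monologue + "\n" + utterance)[-10000:]  # bounded context window
        update_on(model, utterance)                          # learn from own monologue
        if step % check_every == 0:
            # The open problem: arrange generation and updates so this score rises.
            best_score = max(best_score, score_on_tasks(model, eval_tasks))
    return model, best_score
```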

I will sit up and take notice when the main improvements to systems with impressive broad language abilities come from such inner monologues, or from thinking together on other useful tasks. That will look more like systems that have learned how to think. And when such abilities work across a wide scope of topics, that will look to me more like the proverbial “artificial general intelligence”. But I still don’t expect to see that for a long time. We see progress, but the road ahead is still quite long.

Russell’s Human Compatible

My school turned back on its mail system as we start a new semester, and a few days ago out popped Stuart Russell’s book Human Compatible (published last Oct.), with a note inside dated March 31. Here’s my review, a bit late as a result.

Let me focus first on what I see as its core thesis, and then discuss less central claims.

Russell seems to say that we still have a lot of time, and that he’s only asking for a few people to look into the problem:

The arrival of superintelligent AI is inherently unpredictable. … My timeline of, say, eighty years is considerably more conservative than that of the typical AI researcher. … If just one conceptual breakthrough were needed, …superintelligent AI in some form could arrive quite suddenly. The chances are that we would be unprepared: if we built superintelligent machines with any degree of autonomy, we would soon find ourselves unable to control them. I’m, however, fairly confident that we have some breathing space because there are several major breakthroughs needed between here and superintelligence, not just one. (pp.77-78)

Scott Alexander … summed it up brilliantly: … The skeptic’s position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research. The “believers,” meanwhile [take exactly the same position.] (pp.169-170)

Yet his ask is actually much larger: unless we all want to die, AI and related disciplines must soon adopt a huge and expensive change to their standard approach: we must stop optimizing using simple fixed objectives, like the way a GPS tries to minimize travel time, or a trading program tries to maximize profits. Instead we must make systems that attempt to look at all the data on what all humans have ever done to infer a complex continually-updated integrated representation of all human preferences (and meta-preferences) over everything, and use that complex representation to make all automated decisions. Modularity be damned.

No Recent Automation Revolution

Unless you’ve been living under a rock, you know that for many years the media has been almost screaming that we are entering a big automation revolution, with huge associated job losses, due to new AI tech, especially deep learning. The media has cited many “experts” making such claims, most every management consulting firm has felt compelled to issue a related report, and the subject came up in the Democratic US presidential debates.

Last December, Keller Scholl and I posted a working paper suggesting that this whole narrative is bullshit, at least so far. An automation revolution driven by a new kind of automation tech should induce changes in the total amount and rate of automation, and in which kinds of jobs get more automated. But looking at all U.S. jobs 1999-2019, we find no change whatsoever in the kinds of jobs more likely to be automated. We don’t even see a net change in overall level of automation, though language habits may be masking such changes. And having a job get more automated is not correlated at all with changes in its pay or employment. (There may be effects in narrow categories, like jobs that use robots, but nothing visible at the overall level of all automation.)

Two metrics created by groups trying to predict which jobs will get automated soon did predict past automation, but not after we included 25 mundane job features like Pace Determined By Speed Of Equipment and Importance of Repeating Same Tasks, which together predict over half of the variance in job automation. The main change over the last two decades may be that job tasks have gradually become more suitable for automation, because nearby tasks have become automated.
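
For readers who want the flavor of this kind of analysis, here is a minimal sketch, not the paper’s actual code or data, of the two regressions involved; the file name and column names (`job_automation_panel.csv`, `onet_*`, `automation_score`, `log_wage`) are hypothetical stand-ins.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical panel: one row per job-year, with an expert automation score,
# log wage, and O*NET-style feature columns. Not the paper's actual dataset.
df = pd.read_csv("job_automation_panel.csv")
feature_cols = [c for c in df.columns if c.startswith("onet_")]

# Cross-sectional question: how much variance in automation do job features explain?
level_fit = sm.OLS(df["automation_score"], sm.add_constant(df[feature_cols])).fit()
print("R^2 from job features alone:", round(level_fit.rsquared, 3))

# Change-on-change question: do changes in automation predict changes in pay?
changes = (df.sort_values("year")
             .groupby("job_id")[["automation_score", "log_wage"]]
             .agg(lambda s: s.iloc[-1] - s.iloc[0]))
change_fit = sm.OLS(changes["log_wage"],
                    sm.add_constant(changes["automation_score"])).fit()
print(change_fit.summary().tables[1])  # coefficient on the automation change
```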

Our paper has so far received zero media attention, even though it contradicts a lot of quite high visibility media hype, which continues on at the same rate. It has now been officially published in a respected peer reviewed journal: Economics Letters. Will that induce more media coverage? Probably not, as most of those other papers got media attention before they were peer reviewed. The pattern seems to be that hype gets covered, contradictory deflations of hype do not. Unless of course the deflation comes from someone prestigious enough.

For Economics Letters we had to greatly compress the paper. Here is the new 40 word abstract:

Wages and employment predict automation in 832 U.S. jobs, 1999 to 2019, but add little to top 25 O*NET job features, whose best predictive model did not change over this period. Automation changes predict changes in neither wages nor employment.

And Highlights:

  • 25 simple job features explain over half the variance in which jobs are how automated.
  • The strongest job automation predictor is: Pace Determined By Speed Of Equipment.
  • Which job features predict job automation how did not change from 1999 to 2019.
  • Jobs that get more automated do not on average change in pay or employment.
  • Labor markets change more often due to changes in demand, relative to supply.
Automation: So Far, Business As Usual

Since at least 2013, many have claimed that we are entering a big automation revolution, and so should soon expect to see large trend-deviating increases in job automation levels, in related job losses, and in patterns of which jobs are more automated.

For example, in the October 15 Democratic debate between 12 U.S. presidential candidates, 6 of them addressed automation concerns introduced via this moderator’s statement:

According to a recent study, about a quarter of American jobs could be lost to automation in just the next ten years.

Most revolutions do not appear suddenly or fully-formed, but instead grow from precursor trends. Thus we might hope to test this claim of an automation revolution via a broad study of recent automation.

My coauthor Keller Scholl and I have just released such a study. We use data on 1505 expert reports regarding the degree of automation of 832 U.S. job types over the period 1999-2019, and similar reports on 153 other job features, to try to address these questions:

  1. Is automation predicted by two features suggested by basic theory: pay and employment?
  2. Do expert judgements on which particular jobs are vulnerable to future automation predict which jobs were how automated in the recent past?
  3. How well can we predict each job’s recent degree of automation from all available features?
  4. Have the predictors of job automation changed noticeably over the last two decades?
  5. On average, how much have levels of job automation changed in the last two decades?
  6. Do changes in job automation over the last two decades predict changes in pay or employment for those jobs?
  7. Do other features, when interacted with automation, predict changes in pay or employment?

Bottom line: we see no signs of an automation revolution. From our paper‘s conclusion:

We find that both wages and employment predict automation in the direction predicted by simple theory. We also find that expert judgements on which jobs are more vulnerable to future automation predict which jobs have been how automated recently. Controlling for such factors, education does not seem to predict automation.

However, aside perhaps from education, these factors no longer help predict automation when we add (interpolated extensions of) the top 25 O*NET variables, which together predict over half the variance in reported automation. The strongest O*NET predictor is Pace Determined By Speed Of Equipment and most predictors seem understandable in terms of traditional mechanical styles of job automation.

We see no significant change over our time period in the average reported automation levels, or in which factors best predict those levels. However, we can’t exclude the possibility of drifting standards in expert reports; if so, automation may have increased greatly during this period. The main change that we can see is that job factors have become significantly more suitable for automation, by enough to raise automation by roughly one third of a standard deviation.

Changes in pay and employment tend to predict each other, suggesting that labor market changes tend more to be demand instead of supply changes. These changes seem weaker when automation increases. Changes in job automation do not predict changes in pay or employment; the only significant term out of six suggests that employment increases with more automation. Falling labor demand correlates with rising job education levels.

None of these results seem to offer much support for claims that we are in the midst of a trend-deviating revolution in levels of job automation, related job losses, or in the factors that predict job automation. If such a revolution has begun, it has not yet noticeably influenced this sort of data, though continued tracking of such data may later reveal such a revolution. Our results also offer little support for claims that a trend-deviating increase in automation would be accompanied by large net declines in pay or employment. Instead, we estimate that more automation mainly predicts weaker demand, relative to supply, fluctuations in labor markets.

Why Age of Em Will Happen

In some technology competitions, winners dominate strongly. For example, while gravel may cover a lot of roads if we count by surface area, if we weigh by vehicle miles traveled then asphalt strongly dominates as a road material. Also, while some buildings are cooled via fans and very thick walls, the vast majority of buildings in rich and hot places use air-conditioning. In addition, current versions of software systems also tend to dominate over older versions. (E.g., Windows 10 over Windows 8.)

However, in many other technology competitions, older technologies remain widely used over long periods. Cities were invented ten thousand years ago, yet today only about half of the population lives in them. Cars, trains, boats, and planes have taken over much transportation, yet we still do plenty of walking. Steel has replaced wood in many structures, yet wood is still widely used. Fur, wool, and cotton aren’t used as often as they once were, but they are still quite common as clothing materials. E-books are now quite popular, but paper book sales are still growing.

Whether or not an old tech still retains wide areas of substantial use depends on the average advantage of the new tech, relative to the variation of that advantage across the environments where these techs are used, and the variation within each tech category. All else equal, the wider the range of environments, and the more diverse is each tech category, the longer that old tech should remain in wide use.
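
A toy worked example of that dependence (my own illustration, assuming the new tech’s advantage varies across environments roughly like a normal distribution): the share of environments the old tech keeps depends only on the ratio of the average advantage to its variation.

```python
from scipy.stats import norm

def old_tech_share(mean_advantage, env_spread):
    """Fraction of environments where the old tech stays better, under the toy
    assumption that the new tech's advantage is ~Normal(mean_advantage, env_spread)
    across environments. Only the ratio mean_advantage/env_spread matters."""
    return norm.cdf(-mean_advantage / env_spread)

# Same average advantage, different variation across environments:
print(old_tech_share(1.0, 0.2))  # ~3e-7: new tech dominates nearly everywhere (asphalt-like)
print(old_tech_share(1.0, 2.0))  # ~0.31: old tech keeps wide areas of use (wood-like)
```

In this toy picture, diversity within each tech category works the same way: more internal variety widens the effective spread of advantages, preserving more niches where the older category stays competitive.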

For example, compare the set of techs that start with the letter A (like asphalt) to the set that start with the letter G (like gravel). As these are relatively arbitrary sets that do not “cut nature at its joints”, there is wide diversity within each category, and each set is all applied to a wide range of environments. This makes it quite unlikely that one of these sets will strongly dominate the other.

Note that techs that tend to dominate strongly, like asphalt, air-conditioning, and new software versions, more often appear as a lumpy change, e.g., all at once, rather than via a slow accumulation of many changes. That is, they more often result from one or a few key innovations, and have some simple essential commonality. In contrast, techs that have more internal variety and structure tend more to result from the accumulation of more smaller innovations.

Now consider the competition between humans and computers for mental work. Today human brains earn more than half of world income, far more than the costs of computer hardware and software. But over time, artificial hardware and software have been improving, and slowly commanding larger fractions. Eventually this could become a majority. And a key question is then: how quickly might computers come to dominate overwhelmingly, doing virtually all mental work?

On the one hand, the ranges here are truly enormous. We are talking about all mental work, which covers a very wide range of environments. And not only do humans vary widely in abilities and inclinations, but computer systems seem to encompass an even wider range of designs and approaches. And many of these are quite complex systems. These facts together suggest that the older tech of human brains could last quite a long time (relative of course to relevant timescales) after computers came to do the majority of tasks (weighted by income), and that the change over that period could be relatively gradual.

For an analogy, consider the space of all possible non-mental work. While machines have surely been displacing humans for a long time in this area, we still do many important tasks “by hand”, and overall change has been pretty steady for a long time period. This change looked nothing like a single “general” machine taking over all the non-mental tasks all at once.

On the other hand, human minds are today stuck in old bio hardware that isn’t improving much, while artificial computer hardware has long been improving rapidly. Both these states, of hardware being stuck and improving fast, have been relatively uniform within each category and across environments. As a result, this hardware advantage might plausibly overwhelm software variety to make humans quickly lose most everywhere.

However, eventually brain emulations (i.e. “ems”) should be possible, after which artificial software would no longer have a hardware advantage over brain software; they would both have access to the same hardware. (As ems are an all-or-nothing tech that quite closely substitutes for humans and yet can have a huge hardware advantage, ems should displace most all humans over a short period.) At that point, the broad variety of mental task environments, and of approaches to both artificial and em software, suggests that ems may well stay competitive on many job tasks, and that this status might last a long time, with change being gradual.

Note also that as ems should soon become much cheaper than humans, the introduction of ems should initially cause a big reversion, wherein ems take back many of the mental job tasks that humans had recently lost to computers.

In January I posted a theoretical account that adds to this expectation. It explains why we should expect brain software to be a marvel of integration and abstraction, relative to the stronger reliance on modularity that we see in artificial software, a reliance that allows those systems to be smaller and faster built, but also causes them to rot faster. This account suggests that for a long time it would take unrealistically large investments for artificial software to learn to be as good as brain software on the tasks where brains excel.

A contrary view often expressed is that at some point someone will “invent” AGI (= Artificial General Intelligence). Not that society will eventually have broadly capable and thus general systems as a result of the world economy slowly collecting many specific tools and abilities over a long time. But that instead a particular research team somewhere will discover one or a few key insights that allow that team to quickly create a system that can do most all mental tasks much better than all the other systems, both human and artificial, in the world at that moment. This insight might quickly spread to other teams, or it might be hoarded to give this team great relative power.

Yes, under this sort of scenario it becomes more plausible that artificial software will either quickly displace humans on most all jobs, or do the same to ems if they exist at the time. But it is this scenario that I have repeatedly argued is pretty crazy. (Not impossible, but crazy enough that only a small minority should assume or explore it.) While the lumpiness of innovation that we’ve seen so far in computer science has been modest and not out of line with most other research fields, this crazy view postulates an enormously lumpy innovation, far out of line with anything we’ve seen in a long while. We have no good reason to believe that such a thing is at all likely.

If we presume that no one team will ever invent AGI, it becomes far more plausible that there will still be plenty of job tasks for ems to do, whenever ems show up. Even if working ems only collect 10% of world income soon after ems appear, the scenario I laid out in my book Age of Em is still pretty relevant. That scenario is actually pretty robust to such variations. As a result of thinking about these considerations, I’m now much more confident that the Age of Em will happen.

In Age of Em, I said:

Conditional on my key assumptions, I expect at least 30 percent of future situations to be usefully informed by my analysis. Unconditionally, I expect at least 5 percent.

I now estimate an unconditional 80% chance of it being a useful guide, and so will happily take bets based on a 50-50 chance estimate. My claim is something like:

Within the first D econ doublings after ems are as cheap as the median human worker, there will be a period where >X% of world income is paid for em work. And during that period Age of Em will be a useful guide to that world.

Note that this analysis suggests that while the arrival of ems might cause a relatively sudden and disruptive transition, the improvement of other artificial software would likely be more gradual. While overall rates of growth and change should increase as a larger fraction of the means of production comes to be made in factories, the risk is low of a sudden AI advance relative to that overall rate of change. Those concerned about risks caused by AI changes can more reasonably wait until we see clearer signs of problems.

Chiang’s Exhalation

Ted Chiang’s new book Exhalation has received rave reviews. WSJ says “sci-fi for philosophers”, and the Post says “uniformly notable for a fusion of pure intellect and molten emotion.” The New Yorker says

Chiang spends a good deal of time describing the science behind the device, with an almost Rube Goldbergian delight in elucidating the improbable.

Vox says:

Chiang is thoughtful about the rules of his imagined technologies. They have the kind of precise, airtight internal logic that makes a tech geek shiver with happiness: When Chiang tells you that time travel works a certain way, he’ll always provide the scientific theory to back up what he’s written, and he will never, ever veer away from the laws he’s set for himself.

That is, they all seem to agree that Chiang is unusually realistic and careful in his analysis.

I enjoyed Exhalation, as I have Chiang’s previous work. But as none of the above reviews (nor any of 21 Amazon reviews) make the point, it apparently falls to me to say that this realism and care is limited to philosophy and “hard” science. Re social science, most of these stories are not realistic.

Perhaps Chiang is well aware of this; his priority may be to paint the most philosophically or morally dramatic scenarios, regardless of their social realism. But as reviewers seem to credit his stories with social realism, I feel I should speak up. To support my claims, I’m going to have to give “spoilers”; you are warned.

Expand vs Fight in Social Justice, Fertility, Bioconservatism, & AI Risk

Most people talk too much about values relative to facts, as they care more about showing off their values than about learning facts. So I usually avoid talking values. But I’ll make an exception today for this value: expanding rather than fighting about possibilities.

Consider the following graph. On the x-axis you, or your group, get more of what you want. On the y-axis, others get more of what they want. (Of course each axis really represents a high dimensional space.) The blue region is a space of possibilities, the blue curve is the frontier of best possibilities, and the blue dot is the status quo, which happens if no one tries to change it.

In this graph, there are two basic ways to work to get more of what you want: move along the frontier (FIGHT), or expand it (EXPAND). While expanding the frontier helps both you and others, moving along the frontier helps you at others’ expense.
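
Since the graph itself isn’t reproduced here, a minimal matplotlib sketch of the figure as described (the exact frontier shape, status quo point, and arrow placements are my guesses) might be:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of the described figure: a blue region of possibilities, its frontier,
# a status quo point, and the two moves: FIGHT (along) vs EXPAND (outward).
theta = np.linspace(0, np.pi / 2, 200)
fx, fy = np.cos(theta), np.sin(theta)  # frontier of best possibilities

fig, ax = plt.subplots()
ax.fill(np.concatenate(([0.0], fx, [0.0])), np.concatenate(([0.0], fy, [0.0])),
        color="lightblue", label="possibilities")
ax.plot(fx, fy, color="blue", label="frontier")
ax.plot([0.45], [0.45], "o", color="blue", label="status quo")

# FIGHT: move along the frontier (you gain at others' expense).
ax.annotate("FIGHT", xy=(0.94, 0.34), xytext=(0.50, 0.87),
            arrowprops=dict(arrowstyle="->"))
# EXPAND: push the frontier outward (both you and others gain).
ax.annotate("EXPAND", xy=(0.92, 0.92), xytext=(0.65, 0.65),
            arrowprops=dict(arrowstyle="->"))

ax.set_xlabel("you get more of what you want")
ax.set_ylabel("others get more of what they want")
ax.legend(loc="lower left")
plt.show()
```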

All else equal, I prefer expanding over fighting, and I want stronger norms for this. That is, I want our norms to, all else equal, more praise expansion and shame fighting. This isn’t to say I want all forms of fighting to be shamed, or shamed equally, or want all kinds of expansion to get equal praise. For example, it makes sense to support some level of “fighting back” in response to fights initiated by others. But on average, we should all expect to be better off when our efforts are on average directed more toward expanding than fighting. Fighting should be suspicious, and need justification, relative to expansion.

This distinction between expanding and fighting is central to standard economic analysis. We economists distinguish “efficiency improving” policies that expand possibilities from “redistribution” policies that take from some to give to others, and also from “rent-seeking” efforts that actually cut possibilities. Economists focus on promoting efficiency and discouraging rent-seeking. If we take positions on redistribution, we tend to call those “non-economic” positions.

We economists can imagine an ideal competitive market world. The world we live in is not such a world, at least not exactly, but it helps to see what would happen in such a world. In this ideal world, property rights are strong, we each own stuff, and we trade with each other to get more of what we want. The firms that exist are the ones that are most effective at turning inputs into desired outputs. The most cost-effective person is assigned to each job, and each customer buys from their most cost-effective supplier. Consumers, investors, and workers can make trades across time, and innovations happen at the most cost-effective moment.

In this ideal world, we maximize the space of possibilities by allowing all possible competition and all possible trades. In that case, all expansions are realized, and only fights remain. But in other more realistic worlds many “market failures” (and also government failures) pull back the frontier of possibilities. So we economists focus on finding actions and policies that can help fix such failures. And in some sense, I want everyone to share this pro-expansion anti-fight norm of economists.

Described in this abstract way, few may object to what I’ve said so far. But in fact most people find a lot more emotional energy in fights. Most people are quickly bored with proposals that purport to help everyone without helping any particular groups more than others. They get similarly bored with conversations framed as collecting and sharing relevant information. They instead get far more energized by efforts to help us win against them, including conversations framed as arguing with and even yelling at enemies. We actually tend to frame most politics and morality as fights, and we like it that way.

For example, much “social justice” energy is directed toward finding, outing, and “deplatforming” enemies. Yes, when social norms are efficient, enforcing such norms against violators can enhance efficiency. But our passions are nearly as strong when enforcing inefficient norms or norm-like agendas, just as crime dramas are nearly as exciting when depicting the enforcement of bad crime laws or non-law vendettas. Our energy comes from the fights, not some indirect social benefit resulting from such fights. And we find it way too easy to just presume that the goals of our social factions are very widely shared and efficient norms.

Consider fertility and education. Many people get quite energized on the topic of whether others are having too many or not enough kids, and on whether they are raising those kids correctly. We worry about which nations, religions, classes, intelligence levels, mental illness categories, or political allegiances are having more kids, or getting more kids to be educated or trained in their favored way. And we often seek government policies to push our favored outcomes. Such as sterilizing the mentally ill, or requiring schools to teach our favored ideologies.

But in an ideal competitive world, each family picks how many kids to have and how to raise them. If other people have too many kids and have trouble feeding them, that’s their problem, not yours. Same for if they choose to train their kids badly, or if those kids are mentally ill. Unless you can identify concrete and substantial market failures that tend to induce the choices you don’t like, and which are plausibly the actual reason for your concerns here, you should admit you are more likely engaged in fights, not in expansion efforts, when arguing on fertility and education.

And it isn’t enough to note that we are often inclined to supply medicine, education, or food collectively. If such collective actions are your main excuse for trying to control other folks’ related choices, maybe you should consider not supplying such things collectively. It also isn’t enough to note the possibility of meddling preferences, wherein you care directly about others’ choices. Not only is evidence of such preferences often weak, but meddling preferences don’t usually change the possibility frontier, and thus don’t change which policies are efficient. Beware the usual human bias to try to frame fighting efforts as more pro-social expansion efforts, and to make up market failure explanations in justification.

Consider bioconservatism. Some look forward to a future where they’ll be able to change the human body, adding extra senses, and modifying people to be smarter, stronger, more moral, and even immortal. Others are horrified by and want to prevent such changes, fearing that such “post-humans” would no longer be human, and seeing societies of such creatures as “repugnant” and having lost essential “dignities”. But again, unless you can identify concrete and substantial market failures that would result from such modifications, and that plausibly drive your concern, you should admit that you are engaged in a fight here.

It seems to me that the same critique applies to most current AI risk concerns. Back when my ex-co-blogger Eliezer Yudkowsky and I discussed his AI risk concerns here on this blog (concerns that got much wider attention via Nick Bostrom’s book), those concerns were plausibly about a huge market failure. Just as there’s an obvious market failure in letting someone experiment with nuclear weapons in their home basement near a crowded city (without holding sufficient liability insurance), there’d be an obvious market failure from letting a small AI team experiment with software that might, in a weekend, explode to become a superintelligence that enslaved or destroyed the world. While I see that scenario as pretty unlikely, I grant that it is a market failure scenario. Yudkowsky and Bostrom aren’t fighting there.

But when I read and talk to people today about AI risk, I mostly hear people worried about local failures to control local AIs, in a roughly competitive world full of many AI systems with reasonably strong property rights. In this sort of scenario, each person or firm that loses control of an AI would directly suffer from that loss, while others would suffer far less or not at all. Yet AI risk folks say that they fear that many or even most individuals won’t care enough to try hard enough to keep sufficient control of their AIs, or to prevent those AIs from letting their expressed priorities drift as contexts change over the long run. Even though such AI risk folks don’t point to particular market failures here. And even though such advanced AI systems are still a long ways off, and we’ll likely know a lot more about, and have plenty of time to deal with, AI control problems when such systems actually arrive.

Thus most current AI risk concerns sound to me a lot like fertility, education, and bioconservatism concerns. People say that it is not enough to control their own fertility, the education of their own kids, the modifications of their own bodies, and the control of their own AIs. They worry instead about what others may do with such choices, and seek ways to prevent the “risk” of others making bad choices. And in the absence of identified concrete and substantial market failures associated with such choices, I have to frame this as an urge to fight, instead of to expand the space of possibilities. And so according to the norms I favor, I’m suspicious of this activity, and not that eager to promote it.
