No Short Em Age

The basic premise of my book is that the next big revolution on the scale of the farming and industrial revolutions will come from human level artificial intelligence in the form of brain emulations (ems). Yes, because people have asked, I’ve estimated that this will happen within roughly a century, but that estimate isn’t central. The key is that even if ems take many centuries, they will still come before achieving human level artificial intelligence via the usual methods (UAI – via hand-coded algorithms including statistics), and before other social disruptions of this magnitude.

I’ve argued that this premise is plausible because it is hard to imagine social disruptions as big as AI, and because at past rates of progress UAI should take centuries, while ems look like they’ll be ready sooner. Yes, some people are so impressed by recent UAI demos that they think this time is different, so that we will now see an unprecedented burst of progress all the way to full UAI within a decade or two. But I remember past demos that were similarly impressive relative to then-current abilities.

Some people think the basic premise of my book is too weird, while others see it as not weird enough. This post addresses the most common objection I’ve heard from this second group: that even if ems come first, the usual AI will appear a few hours later, making the age of em too short to be worth much consideration.

Now there is certainly one way big ems make full UAI come faster: by speeding up overall economic growth. I’ve suggested the em economy might double every month or faster, and while some doubt this, few who think my book not weird enough are among them.

Since the economy mainly grows today via innovation, our ladder of growth is basically a ladder of overall innovation. We only double the economy when we have on average doubled our abilities across all economic sectors. So if the relative rates of economic growth and innovation in different sectors stay the same, then speeding up economic growth means speeding up the rate of progress toward full UAI. (While some expect a larger economy to innovate faster because it has more resources, the steady economic growth rates we’ve seen suggest there are contrary forces, such as picking the low hanging fruit of research first.)

For example, at past rates of UAI progress it should take two to four centuries to reach human level abilities in the typical UAI subfield, and thus even longer in most subfields. Since the world economy now doubles roughly every fifteen years, that comes to twenty doublings in three centuries. If ems show up halfway from now to full human level usual AI, there’d still be ten economic doublings to go, which would then take ten months if the economy doubled monthly. Which is definitely faster UAI progress.
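This arithmetic is easy to check; a quick sketch, using the post's illustrative figures (three centuries to full UAI, fifteen-year doublings now, monthly doublings for ems), not measured quantities:

```python
# Doubling arithmetic behind the paragraph above, using the post's figures.
years_to_full_uai = 300              # "two to four centuries"; take three
years_per_doubling = 15              # current world-economy doubling time

total_doublings = years_to_full_uai / years_per_doubling    # 20 doublings in 3 centuries
doublings_left = total_doublings / 2                        # ems arrive halfway: 10 left

em_doubling_months = 1               # em economy assumed to double monthly
months_to_full_uai = doublings_left * em_doubling_months
print(total_doublings, doublings_left, months_to_full_uai)  # 20.0 10.0 10.0
```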

However, ten doublings of the economy can encompass a whole era worthy of study. I’ve argued that ems would typically run fast enough to fit a subjective career of a century or more within an economic doubling time, so that their early career training can remain relevant over a whole career. So ten doublings is at least ten subjective centuries, which is plenty of time for lots of cultural and social change. A whole age of change, in fact.

Some argue that the existence of ems would speed up innovation in general, because ems are smarter and innovation benefits more from smarts than does typical production. But even if true, this doesn’t change the relative rate of innovation in UAI relative to other areas.

Some argue that ems speed up UAI progress in particular, via being able to inspect brain circuits in detail and experiment with variations. But as it can be very hard to learn how to code just from inspecting spaghetti object code from other coders, I’m skeptical that this effect could speed up progress by anything like a factor of two, which is what it would take for two (logarithmic) steps on the UAI ladder of progress to be jumped when single steps are on average jumped elsewhere. And even then there’d still be at least five economic doublings in the em era, giving at least five subjective centuries of cultural change.

And we know of substantial contrary effects. First, UAI progress seems driven in part by computer hardware progress, which looks like it will be slower in the coming decades than it has in past decades, relative to other areas of innovation. More important, a big part of em era growth can be due to raw physical growth in production, via making many more ems. If half of em economic growth is due to this process then the em economy makes two (logarithmic) steps of economic growth for every step on the ladder of innovation progress, turning ten ladder steps into twenty doublings. A long age of em.
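A minimal sketch of this last point, using the post's assumption that half of em-era growth comes from raw replication rather than innovation:

```python
# If half of em economic growth is raw physical replication (just making more ems),
# each step on the innovation ladder pairs with two doublings of the economy.
innovation_steps = 10        # ladder steps left to full UAI in the halfway scenario
doublings_per_step = 2       # one doubling from innovation, one from replication
economic_doublings = innovation_steps * doublings_per_step
print(economic_doublings)    # 20
```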

Some argue that the availability of ems will greatly speed the rate of UAI innovation relative to other rates of innovation. They say things like:

When ems are cheap, you could have a million top (e.g., 100 times average) quality UAI research ems each running at a million times human speed. Since until now we’ve only had a thousand average quality UAI researchers at any one time, UAI progress could be a hundred billion times faster, making what would have taken three centuries now take a tenth of a second. The prize of getting to full UAI first would induce this investment.

There are just so many things wrong with this statement.

First, even if human speed ems are cheap, mega-ems cost at least a million times as much. A million mega-ems are as productive as a trillion humans, times whatever factor by which the typical human-speed em is more productive than a typical human. The em economy would have to have grown a whole lot before it is even possible to devote that level of resources to UAI research. So there can be a whole em era before that point.
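To see the scale problem, a rough sketch of the quoted scenario's own numbers (all values are the quote's assumptions, not measurements):

```python
# Raw labor implied by the quoted scenario: a million top-quality research ems,
# each running a million times human speed, with cost scaling with speed.
num_ems = 1_000_000
speedup = 1_000_000                      # a mega-em costs ~a million times a human-speed em
human_equivalents = num_ems * speedup    # as much raw labor as a trillion humans
print(human_equivalents)                 # 1000000000000
```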

Second, this same approach seems equally able to speed up progress in any innovation area that isn’t strongly limited by physical process rates. Areas that only moderately depend on physical rates can spend more to compensate, so that their innovation rates slow only modestly. If only a modest fraction of innovation areas were substantially limited by physical rates, that would only speed up UAI progress by a modest factor relative to overall economic growth.

Third, just because some researchers publish many more academic papers than others doesn’t at all mean that young copies of those researchers assigned to other research areas would have published similarly. Ex ante expected researcher quality varies a lot less than ex post observed research publications. Yes, people often vary by larger factors in their ability to do pure math, relative to other abilities, but pure math contributes only a small fraction to overall innovation.

Fourth, it is well known that most innovation doesn’t come from formal research, and that innovations in different areas help each other. Economists have strong general reasons to expect diminishing returns to useful innovation from adding more researchers. Yes, if you double the number of researchers in one area you’ll probably get twice as many research papers in that area, but that is very different from twice as much useful progress.

As I mention in my book, in some cases we’ve actually measured how research progress varies with the number of researchers, and it looks more like a square root dependence. In addition, if innovation rates were linear in the number of formal researchers, then given the tiny fraction of such researchers today we’d have to be vastly underinvesting in them, and so nations that invest more in formal research should expect to see much higher rates of economic growth. Yet we don’t actually see much of a relation between economic growth and spending on formal research. (Yes, studies vary, so there could be a modest, but not a huge, effect.)
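Under a square-root dependence, adding researchers buys far less progress than a linear model would suggest. A toy illustration (the functional form here is the post's rough empirical claim, not a precise law):

```python
import math

def progress_multiplier(researcher_multiplier: float) -> float:
    """Progress gained under an assumed square-root dependence on researcher count."""
    return math.sqrt(researcher_multiplier)

# Doubling researchers yields only ~41% more progress under this dependence.
print(round(progress_multiplier(2), 2))     # 1.41
# Even a hundredfold increase in researchers yields only tenfold progress.
print(progress_multiplier(100))             # 10.0
```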

So, in sum, we should expect that useful UAI innovation doesn’t mostly come from formal research, and so doubling the number of UAI researchers, or doubling their speed, doesn’t remotely double useful innovation. We aren’t vastly underinvesting in formal research, and so future parties can’t expect to achieve huge gains by making a huge new investment there. We can expect to see modest gain in UAI innovation, relative to today and to other innovation areas, from an ability to inspect and experiment with ems, and from not being very limited by physical process rates. But these give less than a factor of two, and we should see a factor of two in the other direction from slowing hardware gains and from innovation mattering less for economic growth.

Thus we should expect many doublings of the em era after ems and before human level UAI, resulting in many centuries of subjective cultural change for typical ems. Giving an em era that is long enough to be worth considering. If you want to study whatever comes after the em era, understanding the em era should help.

How Different AGI Software?

My ex-co-blogger Eliezer Yudkowsky recently made a Facebook post saying that recent AI Go progress confirmed his predictions from our foom debate. He and I then discussed this there, and I thought I’d summarize my resulting point of view here.

Today an individual firm can often innovate well in one of its products via a small team that keeps its work secret and shares little with other competing teams. Such innovations can be lumpy in the sense that gain relative to effort varies over a wide range, and a single innovation can sometimes make a big difference to product value.

However big lumps are rare; typically most value gained is via many small lumps rather than a few big ones. Most innovation comes from detailed practice, rather than targeted research, and abstract theory contributes only a small fraction. Innovations vary in their generality, and this contributes to the variation in innovation lumpiness. For example, a better washing machine can better wash many kinds of clothes.

If instead of looking at individual firms we look at nations as a whole, the picture changes because a nation is an aggregation of activities across a great many firm teams. While one firm can do well with a secret innovation team that doesn’t share, a big nation would hurt itself a lot by closing its borders to stop sharing with other nations. Single innovations make a much smaller difference to nations as a whole than they do to individual products. So nations grow much more steadily than do firms.

All of these patterns apply not just to products in general, but also to the subcategory of software. While some of our most general innovations may be in software, most software innovation is still made of many small lumps. Software that is broadly capable, such as a tool-filled operating system, is created by much larger teams, and particular innovations make less of a difference to its overall performance. Most software is created via tools that are shared with many other teams of software developers.

From an economic point of view, a near-human-level “artificial general intelligence” (AGI) would be a software system with near-human-level competence across almost the entire range of mental tasks that matter to an economy. This is a wide range, much more like the scope of abilities found in a nation than those found in a firm. In contrast, an AI Go program has a far more limited range of abilities, more like those found in typical software products. So even if the recent Go program was made by a small team and embodies lumpy performance gains, it is not obviously a significant outlier relative to the usual pattern in software.

It seems to me that the key claim made by Eliezer Yudkowsky, and others who predict a local foom scenario, is that our experience in both ordinary products in general and software in particular is misleading regarding the type of software that will eventually contribute most to the first human-level AGI. In products and software, we have observed a certain joint distribution over innovation scope, cost, value, team size, and team sharing. And if that were also the distribution behind the first human-level AGI software, then we should predict that it will be made via a great many people in a great many teams, probably across a great many firms, with lots of sharing across this wide scope. No one team or firm would be very far in advance of the others.

However, the key local foom claim is that there is some way for small teams that share little to produce innovations with far more generality and lumpiness than these previous distributions suggest, perhaps due to being based more on math and basic theory. This would increase the chances that a small team could create a program that grabs a big fraction of world income, and keeps that advantage for an important length of time.

Presumably the basis for this claim is that some people think they see a different distribution among some subset of AI software, perhaps including machine learning software. I don’t see it yet, but the obvious way for them to convince skeptics like me is to create and analyze a formal dataset of software projects and innovations. Show us a significantly-deviating subset of AI programs with more economic scope, generality, and lumpiness in gains. Statistics from such an analysis could let us numerically estimate the chances of a single small team encompassing a big fraction of AGI software power and value.

That is, we might estimate the chances of local foom. Which I’ve said isn’t zero; I’ve instead just suggested that foom has gained too much attention relative to its importance.

Why Have Opinions?

I just surprised some people here at a conference by saying that I don’t have opinions on abortion or gun control. I have little use for such opinions, and so haven’t bothered to form them. Since that attitude seems to be unusual among my intellectual peers, let me explain myself.

I see four main kinds of reasons to have opinions on subjects:

  • Decisions – Sometimes I need to make concrete decisions where the best choice depends on particular key facts or values. In such cases I am forced to have opinions on those subjects, in order to make good decisions. I may well just adopt, without much reflection, the opinions of some standard expert source. I have to make a lot of decisions and don’t have much time to reflect. But even so, I must have an opinion. And my incentives here tend to be toward having true opinions.
  • Socializing – A wide range of topics come up when talking informally with others, and people tend to like you to express opinions on at least some substantial subset of those topics. They typically aren’t very happy if you explain that you just adopted the opinion of some standard expert source without reflection, and so we are encouraged to “think for ourselves” to generate such opinions. Here my incentives are to have opinions that others find interesting or loyal, which is less strongly (but not zero) correlated with truth.
  • Research – As a professional intellectual, I specialize in particular topics. On those topics I generate opinions together with detailed supporting justifications for those opinions. I am evaluated on the originality, persuasiveness, and impressiveness of these opinions and justifications. These incentives are somewhat more strongly, but still only somewhat, correlated with truth.
  • Exploration – I’m not sure what future topics to research, and so continually explore a space of related topics which seem like they might have the potential to become promising research areas for me. Part of that process of exploration involves generating tentative opinions and justifications. Here it is even less important that these opinions be true than that they help reveal interesting, neglected areas especially well-suited to my particular skills and styles.

Most topics that are appropriate for research have little in the way of personal decision impact. So intellectuals focus more on research reasons for such topics. Most intellectuals also socialize a lot, so they also generate opinions for social reasons. Alas, most intellectuals generate these different types of opinions in very different ways. You can almost hear their mind gears shift when they switch from being careful on research topics to being sloppy on social topics. Most academics have a pretty narrow specialty area, which they know isn’t going to change much, so they do relatively little exploration that isn’t close to their specialty area.

Research opinions are my best contribution to the world, and so are where I should focus my altruistic efforts. (They also give my best chance for fame and glory.) So I try to put less weight on socializing reasons for my opinions, and more weight on the exploration reasons. As long as I see little prospect of my research going anywhere near the abortion or gun control topics, I won’t explore there much. Topics diagnostic of left vs. right ideological positions seem especially unlikely to be places where I could add something useful to what everyone else is saying. But I do explore a wide range of topics that seem plausibly related to areas in which I have specialized, or might specialize. I have specialized in far more different areas than have most academics. And I try to keep myself honest by looking for plausible decisions I might make related to all these topics, though that tends to be hard. If we had more prediction markets this could get much easier, but alas we do not.

Of course if you care less about research, and more about socializing, your priorities could easily differ from mine.

Light On Dark Matter

I posted recently on the question of what makes up the “dark matter” intangible assets that today are most of firm assets. Someone pointed me to a 2009 paper with answers:

[Figure: shares of business investment in intangible asset categories]

[C.I. = ] Computerized information is largely composed of the NIPA series for business investment in computer software. …

[Scientific R&D] is designed to capture innovative activity built on a scientific base of knowledge. … Non-scientific R&D includes the revenues of the non-scientific commercial R&D industry … the costs of developing new motion picture films and other forms of entertainment, investments in new designs, and a crude estimate of the spending for new product development by financial services and insurance firms. …

[Brand equity] includes spending on strategic planning, spending on redesigning or reconfiguring existing products in existing markets, investments to retain or gain market share, and investments in brand names. Expenditures for advertising are a large part of the investments in brand equity, but … we estimated that only about 60 percent of total advertising expenditures were for ads that had long-lasting effects. …

Investment in firm-specific human and structural resources … includes the costs of employer-provided worker training and an estimate of management time devoted to enhancing the productivity of the firm. … business investments in firm-specific human and structural resources through strategic planning, adaptation, reorganization, and employee-skill building. (more; HT Brandon Pizzola)

According to this paper, growth in firm-specific resources is the biggest story, but more product development is also important. More software is third in importance.

Added 15Apr: On reflection, this seems to suggest that the main story is our vast increase in product variety. That explains the huge increase in investments in product development and firm-specific resources, relative to more generic development and resources.

Firms Now 5/6 Dark Matter!

Scott Sumner:

We all know that the capital-intensive businesses of yesteryear like GM and US steel are an increasingly small share of the US economy. But until I saw this post by Justin Fox I had no idea how dramatic the transformation had been since 1975:

[Figure: tangible vs. intangible shares of S&P 500 firm value since 1975]

Wow. I had no idea as well. As someone who teaches graduate industrial organization, I can tell you this is HUGE. And I’ve been pondering it for the week since Scott posted the above.

Let me restate the key fact. The S&P 500 are five hundred big public firms listed on US exchanges. Imagine that you wanted to create a new firm to compete with one of these big established firms. So you wanted to duplicate that firm’s products, employees, buildings, machines, land, trucks, etc. You’d hire away some key employees and copy their business process, at least as much as you could see and were legally allowed to copy.

Forty years ago the cost to copy such a firm was about 5/6 of the total stock price of that firm. So 1/6 of that stock price represented the value of things you couldn’t easily copy, like patents, customer goodwill, employee goodwill, regulator favoritism, and hard to see features of company methods and culture. Today it costs only 1/6 of the stock price to copy all a firm’s visible items and features that you can legally copy. So today the other 5/6 of the stock price represents the value of all those things you can’t copy.
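The dark matter fraction is just one minus the ratio of copying (replacement) cost to stock price; a quick sketch with the post's round numbers:

```python
def intangible_share(copy_cost: float, stock_price: float) -> float:
    """Fraction of firm value you cannot buy by copying its visible assets."""
    return 1 - copy_cost / stock_price

# Forty years ago: copying cost ~5/6 of the stock price, so ~1/6 was dark matter.
print(round(intangible_share(5, 6), 2))   # 0.17
# Today: copying costs ~1/6 of the stock price, so ~5/6 is dark matter.
print(round(intangible_share(1, 6), 2))   # 0.83
```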

So in forty years we’ve gone from a world where it was easy to see most of what made the biggest public firms valuable, to a world where most of that value is invisible. From 1/6 dark matter to 5/6 dark matter. What can possibly have changed so much in less than four decades? Some possibilities:

Error – Anytime you focus on the most surprising number you’ve seen in a long time, you gotta wonder if you’ve selected for an error. Maybe they’ve really screwed up this calculation.

Selection – Maybe big firms used to own factories, trucks etc., but now they hire smaller and foreign firms that own those things. So if we looked at all the firms we’d see a much smaller change in intangibles. One check: over half of Wilshire 5000 firm value is also intangible.

Methods – Maybe firms previously used simple generic methods that were easy for outsiders to copy, but today firms are full of specialized methods and culture that outsiders can’t copy because insiders don’t even see or understand them very well. Maybe, but forty years ago firm methods sure seemed plenty varied and complex.

Innovation – Maybe firms are today far more innovative, with products and services that embody more special local insights, and that change faster, preventing others from profiting by copying. But this should increase growth rates, which we don’t see. And product cycles don’t seem to be faster. Total US R&D spending hasn’t changed much as a GDP fraction, though private spending is up by less than a factor of two, and public spending is down.

Patents – Maybe innovation isn’t up, but patent law now favors patent holders more, helping incumbents to better keep out competitors. Patents granted per year in the US have risen from 77K in 1975 to 326K in 2014. But patent law isn’t obviously so much more favorable. Some even say it has weakened a lot in the last fifteen years.

Regulation – Maybe regulation favoring incumbents is far stronger today. But 1975 wasn’t exactly a low-regulation nirvana. Could regulation really have changed so much?

Employees – Maybe employees used to jump easily from firm to firm, but are now stuck at firms because of health benefits, etc. So firms gain from being able to pay stuck employees due to less competition for them. But in fact average and median employee tenure is down since 1975.

Advertising – Maybe more ads have created more customer loyalty. But ad spending hasn’t changed much as a fraction of GDP. Could ads really be that much more effective? And if they were, wouldn’t firms be spending more on them?

Brands – Maybe when we are richer we care more about the identity that products project, and so are willing to pay more for brands with favorable images. And maybe it takes a long time to make a new favorable brand image. But does it really take that long? And brand loyalty seems to actually be down.

Monopoly – Maybe product variety has increased so much that firm products are worse substitutes, giving firms more market power. But I’m not aware that any standard measures of market concentration (such as HHI) have increased a lot over this period.

Alas, I don’t see a clear answer here. The effect that we are trying to explain is so big that we’ll need a huge cause to drive it. Yes it might have several causes, but each will then have to be big. So something really big is going on. And whatever it is, it is big enough to drive many other trends that people have been puzzling over.

Added 5p: This graph gives the figure for every year from ’73 to ’07.

Added 8p: This post shows debt/equity of S&P 500 firms increasing from ~28% to ~42% from ’75 to ’15. This can explain only a small part of the increase in intangible assets. Adding debt to tangibles in the numerator and denominator gives intangibles going from 13% in ’75 to 59% in ’15.
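The debt adjustment amounts to scaling the equity-based intangible share by 1/(1 + debt/equity); a sketch reproducing the post's figures (the 1/6 and 5/6 equity shares and the debt/equity ratios come from the post):

```python
# Adding debt to tangibles in both numerator and denominator shrinks the
# intangible share by a factor of 1 / (1 + debt/equity).
def adjusted_intangible_share(equity_intangible_share: float,
                              debt_to_equity: float) -> float:
    return equity_intangible_share / (1 + debt_to_equity)

print(round(adjusted_intangible_share(1/6, 0.28), 2))   # 0.13 in '75
print(round(adjusted_intangible_share(5/6, 0.42), 2))   # 0.59 in '15
```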

Added 8a 6Apr: Tyler Cowen emphasizes that accountants underestimate the market value of ordinary capital like equipment, but he neither gives (nor points to) an estimate of the typical size of that effect.

Why Not Sell Cities?

Economists don’t like seeing economic inefficiency, and there’s a lot of it out there to bother us. But some of the very worst we see is in cities; there are many incredible inefficiencies in city land use and in supporting utilities. Which of course makes economists wonder: how could we do better?

Here is one idea that should seem obvious to most economists, but even so I can’t find much discussion of it. So let me try to think it through. What if we auctioned off cities, whole?

Specifically, imagine that we sell all the land and immobile property in an urban region, including all the municipal property, plus all the rights to make urban governance choices. We sell this to a single buyer, who might of course be a consortium. The winning bid would have to be higher than the prior sum of all regional property values, plus a gain of say 50%. The money would be paid to all the prior property owners in proportion to prior property values. (“Prior” should be well before the auction was announced.)
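As a sketch of the proposed payout rule (the 50% premium and the pro-rata split are the proposal's terms; the owner names and values here are made up for illustration):

```python
# Pro-rata payout of a winning city bid to prior property owners.
def payouts(prior_values: dict[str, float], winning_bid: float) -> dict[str, float]:
    total = sum(prior_values.values())
    assert winning_bid >= 1.5 * total, "bid must exceed prior values plus 50%"
    return {owner: winning_bid * v / total for owner, v in prior_values.items()}

prior = {"alice": 200.0, "bob": 100.0, "city_hall": 100.0}
print(payouts(prior, winning_bid=600.0))
# Each owner gets 1.5x their prior value:
# {'alice': 300.0, 'bob': 150.0, 'city_hall': 150.0}
```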

The winning buyer would control all property and governance in this region for a specific time period, say twenty years, after which they’d have to divide the region into at least a thousand property units and auction them all off again individually. Urban governance would revert back to its previous system, except that there’d be a single up-or-down vote on one proposal for a new governance regime offered by this buyer, using previous rules about who can vote in such things.

The idea here is of course to “internalize the externalities”, at least for a while. This single buyer would encompass most of the varying conflicting interests that usually cause existing inefficiencies. And they’d have the power to resolve these conflicts decisively.

OK, now let’s ask: what could go wrong? Well first maybe no bidder could actually collect enough money to make a big enough bid. Or maybe the city inefficiencies aren’t big enough to produce the 50% added value requirement. Or twenty years isn’t long enough to fix the deep problems. Or maybe the plan leaks out too early and pushes up “prior” property values. In these cases, there’d be no change, so not much would be lost.

Another thing that could go wrong would be that larger units of government, like states or nations, might try to tax or regulate this single buyer so much as to take away most of their gains from this process. In expectation of this outcome, no one would bid enough for the city. And again there’d be no change, so little would be lost. So we should try to set this up to avoid such taxation/regulation, but knowing that the downside isn’t terrible if we fail.

Finally, the new city owner might price-discriminate against residents who are especially attached to the city, and so are especially unwilling to leave. Like an old couple whose children all live nearby. Or a big firm with an expensive plant located there. If the new owner cranks up their rent high, these folks might lose on net, even if they are paid a 50% bonus on property values. Of course one might try to set rules to limit price discrimination, though that might create the over-regulation scenario above. Also, if selling off cities whole became a regular thing, then people may learn not to get too attached to any one city.

I don’t see any of these problems as overwhelming, so I’d endorse trying to do this. But I don’t actually expect many places to try it, because I think most voters whose support would be needed would see their status as threatened. They’d be offended by the very idea of a single powerful actor having strong control over their lives, even if that actor had to pay dearly for the right, and even if they end up better off as a result. So I’d guess it is pride that most goeth before our city falls.

As I’ve mentioned before, people tend to love cities even as they hate firms, mainly because firms tend for-profit, while cities tend democratic. People now mostly accept for-profit firms because the non-profit ones don’t offer attractive jobs or products. Similarly, I’d predict that if there were many for-profit cities most people would be okay with them, as they’d be reluctant to move to worse-run non-profit cities. Also, if almost all firms were non-profit, people might be reluctant to rely on for-profit firms due to their bad public image. Multiple equilibria are possible here, and we may not be in the best one.

Added 9p: Many commenters seem to fear private city owners evicting undesirable people from the city, in contrast to democratically controlled cities, which they see as fountains of altruism toward such people. But see here, here, here, or consider that democracies regularly vote to exclude immigrants who would in fact benefit them materially.

Added 9a:

At the state and local level, government is indeed engaged in redistribution — but it’s redistribution from the poor and the middle class to the wealthy. (more)

Automation vs. Innovation

We don’t yet know how to make computer software that is as flexibly smart as human brains. So when we automate tasks, replacing human workers with computer-guided machines, we usually pay large costs in flexibility and innovation. The new automated processes are harder to change to adapt to new circumstances. Software is harder to change than mental habits, it takes longer to conceive and implement software changes, and such changes require the coordination of larger organizations. The people who write software are further from the task, and so are less likely than human workers to notice opportunities for improvement.

This is a big reason why it will take automation a lot longer to replace human workers than many recent pundits seem to think. And this isn’t just abstract theory. For example, some of the most efficient auto plants are the least automated. Read more about Honda auto plants:

[Honda] is one of the few multinational companies that has succeeded at globalization. Their profit margins are high in the auto industry. Almost everywhere they go — over 5 percent profit margins. In most markets, they consistently are in the top 10 of specific models that sell. They’ve never lost money. They’ve been profitable every year. And they’ve been around since 1949. …

Soichiro Honda, the founder of the company … was one of the world’s greatest engineers. And yet he never graduated college. He believed that hands-on work as an engineer is what it takes to be a great manufacturer. …

Irreducible Detail

Our best theories vary in generality. Some theories are very general, but most are more context specific. Putting all of our best theories together usually doesn’t let us make exact predictions on most variables of interest. We often express this fact formally in our models via “noise,” which represents other factors that we can’t yet predict.

For each of our theories there was a point in time when we didn’t have it yet. Thus we expect to continue to learn more theories, which will let us make more precise predictions. And so it might seem like we can’t constrain our eventual power of prediction; maybe we will have powerful enough theories to predict everything exactly.

But that doesn’t seem right either. Our best theories in many areas tell us about fundamental limits on our prediction abilities, and thus limits on how powerful future simple general theories could be. For example:

  • Thermodynamics – We can predict some gross features of future physical states, but the entropy of a system sets a very high cost, in negentropy, on learning precise info about its state. If thermodynamics is right, there will never be a general theory that lets one predict future states more cheaply than this.
  • Finance – Finance theory has identified many relevant parameters for predicting the overall distribution of future asset returns. However, finance theory strongly suggests that it is usually very hard to predict the details of the specific future returns of specific assets. The ability to do so would be worth such a huge amount that there just can’t be many who often have such an ability. The cost to gain such an ability must usually be more than the gains from trading on it.
  • Cryptography – A well-devised code looks random to an untrained eye. As there are a great many possible codes, and a great many ways to find weaknesses in them, it doesn’t seem like there could be any general way to break all codes. Instead, code breaking is a matter of knowing lots of specific things about codes and the ways they might be broken. People use codes when they expect the cost of breaking them to be prohibitive, and such expectations are usually right.
  • Innovation – Economic theory can predict many features of economies, and of how economies change and grow. And innovation contributes greatly to growth. But economists also strongly expect that the details of particular future innovations cannot be predicted except at a prohibitive cost. Since knowing of innovations ahead of time can often be used for great private profit, and would speed up the introduction of those innovations, it seems that no cheap-to-apply simple general theories can exist which predict the details of most innovations well ahead of time.
  • Ecosystems – We understand some ways in which parameters of ecosystems correlate with their environments. Most of these make sense in terms of general theories of natural selection and genetics. However, most ecologists strongly suspect that the vast majority of the details of particular ecosystems and the species that inhabit them are not easily predictable by simple general theories. Evolution says that many details will be well matched to other details, but to predict them you must know much about the other details to which they match.
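The cryptography point above can be made concrete with a toy sketch. A crude statistical test of the kind an untrained eye might apply, such as bytewise Shannon entropy, cannot distinguish even a toy cipher's output from random bytes; breaking the code requires knowing specifics about its construction. The XOR-with-SHA-256 keystream below is purely illustrative, not a secure cipher:

```python
import hashlib
from collections import Counter
from math import log2

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 would be perfectly uniform)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# English-like plaintext: byte frequencies are far from uniform.
plaintext = b"the quick brown fox jumps over the lazy dog " * 200

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream: SHA-256 of key plus a counter, repeated until long enough."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

# "Encrypt" by XORing plaintext with the keystream.
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream(b"secret", len(plaintext))))

print(round(byte_entropy(plaintext), 2))   # low: English is highly non-uniform
print(round(byte_entropy(ciphertext), 2))  # close to 8.0: looks random to this test
```

The entropy test sees the ciphertext as essentially random, even though anyone who knows the specific construction and key can invert it trivially, which is the asymmetry the bullet describes.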

In thermodynamics, finance, cryptography, innovation, and ecosystems, we have learned that while there are many useful generalities, the universe is also chock full of important, irreducible, incompressible detail. As this is true at many levels of abstraction, I would add this entry to the above list:

  • Intelligence – General theories tell us what intelligence means, and how it can generalize across tasks and contexts. But most everything we’ve learned about intelligence suggests that the key to smarts is having many not-fully-general tools. Human brains are smart mainly by containing many powerful not-fully-general modules, and by using many modules to do each task. These modules would not work well in all possible universes, but they often do in ours. Ordinary software also gets smart by containing many powerful modules. While the architecture that organizes those modules can make some difference, that difference is mostly small compared to having more, and better, modules. In a world of competing software firms, most ways to improve modules or find new ones cost more than the profits they’d induce.

If most value in intelligence comes from the accumulation of many expensive parts, there may well be no powerful general theories waiting to be discovered that would revolutionize future AI and give an overwhelming advantage to the first project to discover them. This is the main reason I’m skeptical about AI foom, the scenario where an initially small project quickly grows to take over the world.

Added 7p: Peter McCluskey has thoughtful commentary here.


I Still Don’t Get Foom

Back in 2008 my ex-co-blogger Eliezer Yudkowsky and I discussed his “AI foom” concept, a discussion that we recently spun off into a book. I’ve heard for a while that Nick Bostrom was working on a book elaborating related ideas, and this week his Superintelligence was finally available to me to read, via Kindle. I’ve read it now, along with a few dozen reviews I’ve found online. Alas, only the two reviews on GoodReads even mention the big problem I have with one of his main premises, the same problem I’ve had with Yudkowsky’s views. Bostrom hardly mentions the issue in his 300 pages (he’s focused on control issues).

All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.


Proverbs As Insight

Don Quixote’s lower-class sidekick Sancho Panza quoted proverbs to excess. Among the intellectuals I know, that class association continues – proverbs may help lesser minds, but we elites “think for ourselves.” Proverbs are also associated with older beliefs and attitudes, and so are seen as more politically conservative, and less relevant in our new changed world. Since the world today changes faster, has become less politically conservative, and has more educated folks who aspire to look more intellectual, you might think that we use proverbs less today than we did in 1800.

On the other hand, you might think of proverbs as well-packaged nuggets of useful insight. As the world continues to grow by accumulating insight and innovation, not only do we collect more gadgets, formulas, and words, we should also be collecting more useful proverbs. From this perspective, we should expect people to use more proverbs today.

To get some data on this, I found some lists of famous proverbs, and used Google books ngram viewer to plot their usage in books since 1800:

[Figures: Google ngram plots of famous-proverb usage in books since 1800]

Overall usage seems to have gone up, not down. But two considerations complicate this interpretation. One is that I started from lists of proverbs famous today, instead of proverbs famous in 1800. The other is that the typical book reader and author today may be more lower class than they were in 1800, with books catering more to their proverb-friendly tastes.
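The aggregation behind the plots can be sketched in a few lines: sum the per-year relative frequencies across all tracked proverbs and compare early years to recent ones. The numbers below are made-up placeholders, not real ngram data; in practice they would come from exports of the Google Books ngram data:

```python
# Hypothetical relative frequencies (per million words) for a few famous
# proverbs, by year. These numbers are illustrative placeholders only;
# real values would come from Google Books ngram exports.
sample = {
    "actions speak louder than words":            {1800: 1.0, 1900: 1.8, 2000: 2.9},
    "the early bird catches the worm":            {1800: 0.4, 1900: 0.9, 2000: 1.5},
    "don't count your chickens before they hatch": {1800: 0.2, 1900: 0.7, 2000: 1.1},
}

def total_usage(data: dict, year: int) -> float:
    """Sum relative frequencies across all tracked proverbs for one year."""
    return sum(series.get(year, 0.0) for series in data.values())

for year in (1800, 1900, 2000):
    print(year, round(total_usage(sample, year), 2))
```

On this toy sample the totals rise over time, which is the "usage went up" pattern the plots show; the same caveats apply, since which proverbs you start from determines what the sum can show.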

I hope someone can get better data on this. Even so, maybe we should tentatively expect future folk to talk and write more like ole Sancho Panza, with many more proverbs.
