Tag Archives: Innovation

Cowen On Complacency

A week ago I summarized and critiqued five books wherein Peter Turchin tries to document and explain two key historical cycles: a several century cycle of empires rising and falling, and a fifty year alternating-generations cycle of instability during empire low points. In his latest book, Turchin tentatively tries to apply his theories to predict the U.S. near future.

In his new book The Complacent Class, Tyler Cowen also takes a bigger-than-usual historical perspective, invokes cycles, and predicts the U.S. near future. But instead of applying a theory abstracted from thousands of years of data, Cowen mainly just details many particular trends in the U.S. over the last half century. David Brooks summarizes:

Cowen shows that in sphere after sphere, Americans have become less adventurous and more static.

The book page summarizes:

Our willingness to move, take risks, and adapt to change have produced a dynamic economy. .. [But] Americans today .. are working harder than ever to avoid change. We’re moving residences less, marrying people more like ourselves and choosing our music and our mates based on algorithms. .. This cannot go on forever. We are postponing change,.. but ultimately this will make change, when it comes, harder. .. eventually lead to a major fiscal and budgetary crisis.

In each particular area, Cowen documents specific trends, and he often offers specific local theories that could have led one to expect such trends. For example, he says fewer geographic moves are predicted from fewer job moves, and fewer job moves are predicted by workers being older. But when it comes to the question of why all these particular trends with their particular causes happen to create a consistent overall trend toward complacency, Cowen seems to me coy. Let me discuss three passages where I find that he at least touches on general accounts.


This AI Boom Will Also Bust

Imagine an innovation in pipes. If this innovation were general, something that made all kinds of pipes cheaper to build and maintain, the total benefits could be large, perhaps even comparable to the total amount we spend on pipes today. (Or even much larger.) And if most of the value of pipe use were in many small uses, then that is where most of these economic gains would be found.

In contrast, consider an innovation that only improved the very largest pipes. This innovation might, for example, cost a lot to use per meter of pipe, and so only make sense for the largest pipes. Such an innovation might make for very dramatic demonstrations, with huge vivid pipes, and so get media coverage. But the total economic gains here will probably be smaller; as most of pipe value is found in small pipes, gains to the few biggest pipes can only do so much.

Now consider my most viral tweet so far:

This got almost universal agreement from those who see such issues play out behind the scenes. And by analogy with the pipe innovation case, this fact tells us something about the potential near-term economic impact of recent innovations in Machine Learning. Let me explain.

Most firms have piles of data they aren’t doing much with, and far more data that they could collect at a modest cost. Sometimes they use some of this data to predict a few things of interest. Sometimes this creates substantial business value. Most of this value is achieved, as usual, in the simplest applications, where simple prediction methods are applied to simple small datasets. And the total value achieved is only a small fraction of the world economy, at least as measured by income received by workers and firms who specialize in predicting from data.

Many obstacles limit such applications. For example, the value of better predictions for related decisions may be low, data may be in a form poorly suited to informing predictions, making good use of predictions might require larger reorganizations, and organizations that hold parts of the data may not want to lose control of that data. Available personnel may lack sufficient skills to apply the most effective approaches for data cleaning, merging, analysis, and application.

No doubt many errors are made in choices of when to analyze what data how much and by whom. Sometimes they will do too much prediction, and sometimes too little. When tech changes, orgs will sometimes wait too long to try new tech, and sometimes will not wait long enough for tech to mature. But in ordinary times, when the relevant technologies improve at steady known rates, we have no strong reason to expect these choices to be greatly wrong on average.

In the last few years, new “deep machine learning” prediction methods have become “hot.” In some widely publicized demonstrations, they seem to allow substantially more accurate predictions from data. Since they shine most when data is plentiful, and they need more skilled personnel, these methods are most promising for the largest prediction problems. Because of this new fashion, at many firms those who don’t understand these issues well are pushing subordinates to seek local applications of these new methods. Those subordinates comply, at least in appearance, in part to help themselves and their organizations appear more skilled.

One result of this new fashion is that a few big new applications are being explored, in places with enough data and potential prediction value to make them decent candidates. But another result is the one described in my tweet above: fashion-induced overuse of more expensive new methods on smaller problems to which they are poorly matched. We should expect this second result to produce a net loss on average. The size of this loss could be enough to outweigh all the gains from the few big new applications; after all, most value is usually achieved in many small problems.

But I don’t want to draw a conclusion here about the net gain or loss. I instead want to consider the potential for this new prediction tech to have an overwhelming impact on the world economy. Some see this new fashion as just the first swell of a tsunami that will soon swallow the world. For example, in 2013 Frey and Osborne famously estimated:

About 47 percent of total US employment is at risk .. to computerisation .. perhaps over the next decade or two.

If new prediction techs induced a change that big, they would be creating a value that is a substantial fraction of the world economy, and so consume a similar fraction of world income. If so, the prediction industry would in a short time become vastly larger than it is today. If today’s fashion were the start of that vast growth, we should not only see an increase in prediction activity, we should also see an awe-inspiring rate of success within that activity. The application of these new methods should be enabling huge new revenue streams, across a very wide range of possible application areas. (Added: And the prospect of that should be increasing stock values in this area far more than we’ve seen.)

But I instead hear that within the areas where most prediction value lies, most attempts to apply this new tech actually produce less net value than would be achieved with old tech. I hear that prediction analysis tech is usually not the most important part of the process, and that a recent obsession with showing proficiency in this new analysis tech has led to neglect of the more important and basic issues of thinking carefully about what you might want to predict with what data, and then carefully cleaning and merging your data into a more useful form.

Yes, there must be exceptions, and some of those may be big. So a few big applications may enable big value. And self-driving cars seem a plausible candidate, a case where prediction is ready to give large value, high enough to justify using the most advanced prediction tech, and where lots of the right sort of data is available. But even if self-driving vehicles displace most drivers within a few decades, that rate of job automation wouldn’t be out of the range of our historical record of job automation. So it wouldn’t show that “this time is different.” To be clearly out of that range, we’d need another ten jobs that big also displaced in the same period. And even that isn’t enough to automate half of all jobs in two decades.

The bottom line here is that while some see this new prediction tech as like a new pipe tech that could improve all pipes, no matter their size, it is actually more like a tech only useful on very large pipes. Just as it would be a waste to force a pipe tech only useful for big pipes onto all pipes, it can be a waste to push advanced prediction tech onto typical prediction tasks. And the fact that this new tech is mainly only useful on rare big problems suggests that its total impact will be limited. It just isn’t the sort of thing that can remake the world economy in two decades. To the extent that the current boom is based on such grand hopes, this boom must soon bust.


Trump, Political Innovator

People are complicated. Not only can each voter be described by a very high dimensional space of characteristics, but the space of possible sets of voters is even larger. Because of this, coalition politics is intrinsically complex, making innovation possible and relevant.

That is, at any one time the existing political actors in some area use an existing set of identified political coalitions, and matching issues that animate them. However, these existing groups are but a tiny part of the vast space of possible groups and coalitions. And even if one had exhaustively searched the entire space and found the very best options, over time those would become stale, making new better options possible.

As usual in innovation, each actor can prefer to free-ride on the efforts of others, and wait to make use of new coalitions that others have worked to discover. But some political actors will explore new possible coalitions and issues more. Most will probably try for a resurgence of old combinations that worked better in the past than they have recently. But some will try out more truly new combinations.

We expect those who innovate politically to differ in predictable ways. They will tend to be outsiders looking for a way in, and their personal preferences will less well match existing standard positions. Because innovators must search the space of possibilities, their positions and groups will be vaguer and vary more over time, and they will hew less to existing rules and taboos on such things. They will more often work their crowds on the fly to explore their reactions, relative to sticking to prepared speeches. Innovators will tend to arise more when power is more up for grabs, with many contenders. Successful innovation tends to be a surprise, and is more likely the longer it has been since a major innovation, or “realignment,” with more underlying social change accumulated during that period. When an innovator finds a new coalition to represent, that coalition will be attracted less to this politician’s personal features and more to the fact that someone is offering to represent them.

The next US president, Donald Trump, seems to be a textbook political innovator. During a period when his party was quite up for grabs with many contenders, he worked his crowds, taking a wide range of vague positions that varied over time, and often stepped over taboo lines. In the process, he surprised everyone by discovering a new coalition that others had not tried to represent, a group that likes him more for this representation than his personal features.

Many have expressed great anxiety about Trump’s win, saying that he is bad overall because he induces greater global and domestic uncertainty. In their minds, this includes higher chances of wars, coups, riots, collapse of democracy, and so on. But overall these seem to be generic consequences of political innovation. Innovation in general is disruptive and costly in the short run, but can aid adaptation in the long run.

So you can dislike Trump for two very different reasons. First, you can dislike innovation on the other side of the political spectrum, seeing it as coming at the expense of your side. Or you can dislike political innovation in general. But if innovation is the process of adapting to changing conditions, it must be mostly a question of when, not if. And less frequent innovations are probably bigger changes, which are probably more disruptive overall.

So what you should really be asking is: what were the obstacles to smaller past innovations in Trump’s new direction? And how can we reduce such obstacles?


Needed: Social Innovation Adaptation

This is the point during the electoral cycle when people are most willing to consider changing political systems. The nearly half of voters whose candidates just lost are now most open to changes that might have let their side win. But even in an election this acrimonious, that interest is paper thin, and blows away in the slightest breeze. Because politics isn’t about policy – what we really want is to feel part of a political tribe via talking with them about the same things. So if the rest of your tribe isn’t talking about system change, you don’t want to talk about that either.

So I want to tell or remind everyone that if you actually did care about outcomes instead of feeling part of a big tribe, large social gains wait untapped in better social institutions. In particular, very large gains await detailed field trials of institutional innovations. Let me explain.

Long ago when I was a physicist turned computer researcher who started to study economics, I noticed that it seemed far easier to design new better social institutions than to design new better computer algorithms or physical devices. This helped inspire me to switch to economics.

Once I was in a graduate program with a thesis advisor who specialized in institution/mechanism design, I seemed to see a well established path for social innovations, from vague intuitions to theoretical analysis to lab experiments to simplified field experiments to complex practice. Of course, as with most innovation paths, as costs rose along the path most candidates fell by the wayside. And yes, designing social institutions was harder than it looked at first, though it still seems easier than designing computers and physical devices.

But it took me a long time to learn that this path is seriously broken near the end. Organizations with real problems do in fact sometimes allow simplified field trials of institutional alternatives that social scientists have proposed, but only in a very limited range of areas. And usually they mainly just do this to affiliate with prestigious academics; most aren’t actually much interested in adopting better institutions. (Firms mostly outsource social innovation to management consultants, who don’t actually endorse much. Yes startups explore some innovations, but relatively few.)

So by now academics have accumulated a large pile of promising institution ideas, many of which have supporting theory, lab experiments, and even simplified field trials. In addition, academics have even larger literatures that measure and theorize about existing social institutions. But even after promising results from simplified field experiments, much work usually remains to adapt such new proposals to the many complex details of existing social worlds. Complex worlds can’t usefully digest abstract academic ideas without such adaptation.

And the bottom line is that we very much lack organizations willing to do that work for social innovations. Organizations do this work more often for computer or device innovations, and sometimes social innovations get smuggled in via that route. A few organizations sometimes work on social innovations directly, but mostly to affiliate with prestigious academics, so if you aren’t such an academic you mostly can’t participate.

This is the point where I’ve found myself stuck with prediction & decision markets. There has been prestige and funding to prove theorems, do lab experiments, analyze field datasets, and even do limited simplified field trials. But there is little prestige or funding for that last key step of adapting academic ideas to complex social worlds. It’s hard to apply rigorous general methods in such efforts, and so hard to publish on them academically. (Even interested blockchain folks have mainly been writing general code, not working with messy organizations.)

So if you want to make clubs, firms, cities, nations, and the world more effective and efficient, a highly effective strategy is to invest in widening the neglected bottleneck of the social innovation pathway. Get your organization to work on some ideas, or pay other organizations to work on them. Yes some ideas can only be tried out at large scales, but for most there are smaller scale analogues that it makes sense to work on first. I stand ready to help organizations do this for prediction & decision markets. But alas to most organizations I lack sufficient prestige for such associations.


No Short Em Age

The basic premise of my book is that the next big revolution on the scale of the farming and industrial revolutions will come from human level artificial intelligence in the form of brain emulations (ems). Yes, because people have asked, I’ve estimated that this will happen within roughly a century, but that estimate isn’t central. The key is that even if ems take many centuries, they will still come before we achieve human level artificial intelligence via the usual methods (UAI – via hand-coded algorithms, including statistics), and before other social disruptions of this magnitude.

I’ve argued that this premise is plausible because it is hard to imagine social disruptions as big as AI, and because at past rates of progress UAI should take centuries, while ems look like they’ll be ready sooner. Yes, some people are so impressed by recent UAI demos that they think this time is different, so that we will now see an unprecedented burst of progress all the way to full UAI within a decade or two. But I remember past demos that were similarly impressive relative to then-current abilities.

Some people think the basic premise of my book is too weird, while others see it as not weird enough. This post addresses the most common objection I’ve heard from this second group: that even if ems come first, the usual AI will appear a few hours later, making the age of em too short to be worth much consideration.

Now there is certainly one way big ems make full UAI come faster: by speeding up overall economic growth. I’ve suggested the em economy might double every month or faster, and while some doubt this, few who think my book not weird enough are among them.

Since the economy mainly grows today via innovation, our ladder of growth is basically a ladder of overall innovation. We only double the economy when we have on average doubled our abilities across all economic sectors. So if the relative rates of economic growth and innovation in different sectors stay the same, then speeding up economic growth means speeding up the rate of progress toward full UAI. (While some expect a larger economy to innovate faster because it has more resources, the steady economic growth rates we’ve seen suggest there are contrary forces, such as picking the low hanging fruit of research first.)

For example, at past rates of UAI progress it should take two to four centuries to reach human level abilities in the typical UAI subfield, and thus even longer in most subfields. Since the world economy now doubles roughly every fifteen years, that comes to twenty doublings in three centuries. If ems show up halfway from now to full human level usual AI, there’d still be ten economic doublings to go, which would then take ten months if the economy doubled monthly. Which is definitely faster UAI progress.
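The arithmetic behind these figures can be checked in a few lines. This is just a hedged sketch using the post’s own rough numbers; the century count, doubling time, and halfway point are all assumptions from the text, not precise data:

```python
# Rough figures taken from the text above (assumptions, not measurements):
centuries_to_full_uai = 3        # midpoint of "two to four centuries"
years_per_doubling_today = 15    # world economy doubles roughly every 15 years

# Doublings between now and full UAI at today's growth rate:
total_doublings = centuries_to_full_uai * 100 / years_per_doubling_today  # 20.0

# If ems arrive halfway along this innovation ladder, half the doublings remain:
doublings_after_ems = total_doublings / 2                                 # 10.0

# At one economic doubling per month in the em era, ten doublings take:
months_to_full_uai = doublings_after_ems * 1                              # 10 months
print(total_doublings, doublings_after_ems, months_to_full_uai)
```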

However, ten doublings of the economy can encompass a whole era worthy of study. I’ve argued that ems would typically run fast enough to fit a subjective career of a century or more within an economic doubling time, so that their early career training can remain relevant over a whole career. So ten doublings is at least ten subjective centuries, which is plenty of time for lots of cultural and social change. A whole age of change, in fact.

Some argue that the existence of ems would speed up innovation in general, because ems are smarter and innovation benefits more from smarts than does typical production. But even if true, this doesn’t change the relative rate of innovation in UAI relative to other areas.

Some argue that ems speed up UAI progress in particular, via being able to inspect brain circuits in detail and experiment with variations. But just as it can be very hard to learn how to code merely by inspecting other coders’ spaghetti object code, I’m skeptical that this effect could speed up progress by anything like a factor of two, which is what it would take for two (logarithmic) steps on the UAI ladder of progress to be jumped when single steps are on average jumped elsewhere. And even then there’d still be at least five economic doublings in the em era, giving at least five subjective centuries of cultural change.

And we know of substantial contrary effects. First, UAI progress seems driven in part by computer hardware progress, which looks like it will be slower in the coming decades than it has in past decades, relative to other areas of innovation. More important, a big part of em era growth can be due to raw physical growth in production, via making many more ems. If half of em economic growth is due to this process then the em economy makes two (logarithmic) steps of economic growth for every step on the ladder of innovation progress, turning ten ladder steps into twenty doublings. A long age of em.
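This accounting can be sketched directly, under the stated assumption that half of em-era economic growth comes from raw physical growth rather than innovation:

```python
innovation_steps_left = 10  # ladder steps from ems to full UAI (the text's figure)

# If all growth came from innovation, each ladder step would be one doubling:
doublings_if_innovation_only = innovation_steps_left * 1      # 10

# If half of em-era growth is raw physical growth (just making more ems), each
# innovation step is matched by one extra doubling of physical growth:
doublings_with_physical_growth = innovation_steps_left * 2    # 20

print(doublings_if_innovation_only, doublings_with_physical_growth)
```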

Some argue that the availability of ems will greatly speed the rate of UAI innovation relative to other rates of innovation. They say things like:

When ems are cheap, you could have a million top (e.g., 100 times average) quality UAI research ems each running at a million times human speed. Since until now we’ve only had a thousand average quality UAI researchers at any one time, UAI progress could be a hundred billion times faster, making what would have taken three centuries now take a tenth of a second. The prize of getting to full UAI first would induce this investment.

There are just so many things wrong with this statement.

First, even if human speed ems are cheap, mega-ems cost at least a million times as much. A million mega-ems are as productive as a trillion humans, times whatever factor by which the typical human-speed em is more productive than a typical human. The em economy would have to grow a whole lot before it could even devote that level of resources to UAI research. So there can be a whole em era before that point.
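To see the scale of resources the quoted claim assumes, here is its arithmetic spelled out; every specific number below comes from the quoted claim itself, not from data:

```python
num_research_ems = 1_000_000   # "a million top quality UAI research ems"
speedup = 1_000_000            # each "running at a million times human speed"
quality_factor = 100           # each "100 times average" researcher quality
researchers_today = 1_000      # "a thousand average quality UAI researchers"

# Cost scales with speed, so this team costs about as much as a trillion
# human-speed ems -- a huge fraction of any early em economy:
human_speed_em_equivalents = num_research_ems * speedup        # 10**12

# The claimed speedup relative to today's total research effort:
claimed_speedup = num_research_ems * speedup * quality_factor / researchers_today
print(human_speed_em_equivalents, claimed_speedup)             # 10**12, 10**11
```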

Second, this same approach seems equally able to speed up progress in any innovation area that isn’t strongly limited by physical process rates. Areas that only moderately depend on physical rates can spend more to compensate, so that their innovation rates slow only modestly. If only a modest fraction of innovation areas were substantially limited by physical rates, that would only speed up UAI progress by a modest factor relative to overall economic growth.

Third, just because some researchers publish many more academic papers than others doesn’t at all mean that young copies of those researchers assigned to other research areas would have published similarly. Ex ante expected researcher quality varies a lot less than ex post observed research publications. Yes, people often vary by larger factors in their ability to do pure math, relative to other abilities, but pure math contributes only a small fraction to overall innovation.

Fourth, it is well known that most innovation doesn’t come from formal research, and that innovations in different areas help each other. Economists have strong general reasons to expect diminishing returns to useful innovation from adding more researchers. Yes, if you double the number of researchers in one area you’ll probably get twice as many research papers in that area, but that is very different from getting twice as much useful progress.

As I mention in my book, in some cases we’ve actually measured how research progress varies with the number of researchers, and it looks more like a square root dependence. In addition, if innovation rates were linear in the number of formal researchers, then given the tiny fraction of such researchers today we’d have to be vastly underinvesting in them, and so nations who invest more in formal research should expect to see much higher rates of economic growth. Yet we don’t actually see much of a relation between economic growth and spending on formal research. (Yes studies vary, so there could be a modest, but not a huge, effect.)
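A square-root dependence implies that doubling researchers yields only about 41% more progress, not 100% more. A minimal sketch, treating the square-root form as the rough empirical summary it is rather than an exact law:

```python
def useful_progress(num_researchers):
    # Rough square-root returns to research effort, per the measurements cited
    return num_researchers ** 0.5

base = useful_progress(1_000)
doubled = useful_progress(2_000)
ratio = doubled / base
print(ratio)  # ~1.414, i.e. sqrt(2): far short of the 2x that linear returns imply
```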

So, in sum, we should expect that useful UAI innovation doesn’t mostly come from formal research, and so doubling the number of UAI researchers, or doubling their speed, doesn’t remotely double useful innovation. We aren’t vastly underinvesting in formal research, and so future parties can’t expect to achieve huge gains by making a huge new investment there. We can expect to see modest gains in UAI innovation, relative to today and to other innovation areas, from an ability to inspect and experiment with ems, and from not being very limited by physical process rates. But these give less than a factor of two, and we should see a factor of two in the other direction from slowing hardware gains and from innovation mattering less for economic growth.

Thus we should expect many doublings of the em era after ems and before human level UAI, resulting in many centuries of subjective cultural change for typical ems, and giving an em era that is long enough to be worth considering. If you want to study whatever comes after the em era, understanding the em era should help.


How Different AGI Software?

My ex-co-blogger Eliezer Yudkowsky recently made a Facebook post saying that recent AI Go progress confirmed his predictions from our foom debate. He and I then discussed this there, and I thought I’d summarize my resulting point of view here.

Today an individual firm can often innovate well in one of its products via a small team that keeps its work secret and shares little with other competing teams. Such innovations can be lumpy, in the sense that the gain relative to effort varies over a wide range, and a single innovation can sometimes make a big difference to product value.

However, big lumps are rare; typically most value gained comes via many small lumps rather than a few big ones. Most innovation comes from detailed practice, rather than targeted research, and abstract theory contributes only a small fraction. Innovations vary in their generality, and this contributes to the variation in innovation lumpiness. For example, a better washing machine can better wash many kinds of clothes.

If instead of looking at individual firms we look at nations as a whole, the picture changes, because a nation is an aggregation of activities across a great many firm teams. While one firm can do well with a secret innovation team that doesn’t share, a big nation would hurt itself a lot by closing its borders to stop sharing with other nations. Single innovations make a much smaller difference to nations as a whole than they do to individual products. So nations grow much more steadily than do firms.

All of these patterns apply not just to products in general, but also to the subcategory of software. While some of our most general innovations may be in software, most software innovation is still made of many small lumps. Software that is broadly capable, such as a tool-filled operating system, is created by much larger teams, and particular innovations make less of a difference to its overall performance. Most software is created via tools that are shared with many other teams of software developers.

From an economic point of view, a near-human-level “artificial general intelligence” (AGI) would be a software system with near-human-level competence across almost the entire range of mental tasks that matter to an economy. This is a wide range, much more like the scope of abilities found in a nation than the scope found in a firm. In contrast, an AI Go program has a far more limited range of abilities, more like those found in typical software products. So even if the recent Go program was made by a small team and embodies lumpy performance gains, it is not obviously a significant outlier relative to the usual pattern in software.

It seems to me that the key claim made by Eliezer Yudkowsky, and others who predict a local foom scenario, is that our experience in both ordinary products in general and software in particular is misleading regarding the type of software that will eventually contribute most to the first human-level AGI. In products and software, we have observed a certain joint distribution over innovation scope, cost, value, team size, and team sharing. And if that were also the distribution behind the first human-level AGI software, then we should predict that it will be made via a great many people in a great many teams, probably across a great many firms, with lots of sharing across this wide scope. No one team or firm would be very far in advance of the others.

However, the key local foom claim is that there is some way for small teams that share little to produce innovations with far more generality and lumpiness than these previous distributions suggest, perhaps due to being based more on math and basic theory. This would increase the chances that a small team could create a program that grabs a big fraction of world income, and keeps that advantage for an important length of time.

Presumably the basis for this claim is that some people think they see a different distribution among some subset of AI software, perhaps including machine learning software. I don’t see it yet, but the obvious way for them to convince skeptics like me is to create and analyze a formal dataset of software projects and innovations. Show us a significantly-deviating subset of AI programs with more economic scope, generality, and lumpiness in gains. Statistics from such an analysis could let us numerically estimate the chances of a single small team encompassing a big fraction of AGI software power and value.

That is, we might estimate the chances of local foom. Which I’ve said isn’t zero; I’ve instead just suggested that foom has gained too much attention relative to its importance.


Why Have Opinions?

I just surprised some people here at a conference by saying that I don’t have opinions on abortion or gun control. I have little use for such opinions, and so haven’t bothered to form them. Since that attitude seems to be unusual among my intellectual peers, let me explain myself.

I see four main kinds of reasons to have opinions on subjects:

  • Decisions – Sometimes I need to make concrete decisions where the best choice depends on particular key facts or values. In such cases I am forced to have opinions on those subjects, in order to make good decisions. I may well just adopt, without much reflection, the opinions of some standard expert source. I have to make a lot of decisions and don’t have much time to reflect. But even so, I must have an opinion. And my incentives here tend to be toward having true opinions.
  • Socializing – A wide range of topics come up when talking informally with others, and people tend to like you to express opinions on at least some substantial subset of those topics. They typically aren’t very happy if you explain that you just adopted the opinion of some standard expert source without reflection, and so we are encouraged to “think for ourselves” to generate such opinions. Here my incentives are to have opinions that others find interesting or loyal, which is less strongly (but not zero) correlated with truth.
  • Research – As a professional intellectual, I specialize in particular topics. On those topics I generate opinions together with detailed supporting justifications for those opinions. I am evaluated on the originality, persuasiveness, and impressiveness of these opinions and justifications. These incentives are somewhat more strongly, but still only somewhat, correlated with truth.
  • Exploration – I’m not sure what future topics to research, and so continually explore a space of related topics which seem like they might have the potential to become promising research areas for me. Part of that process of exploration involves generating tentative opinions and justifications. Here it is even less important that these opinions be true than that they help reveal interesting, neglected areas especially well-suited to my particular skills and styles.

Most topics that are appropriate for research have little in the way of personal decision impact. So intellectuals focus more on research reasons for such topics. Most intellectuals also socialize a lot, so they also generate opinions for social reasons. Alas, most intellectuals generate these different types of opinions in very different ways. You can almost hear their mind gears shift when they switch from being careful on research topics to being sloppy on social topics. Most academics have a pretty narrow specialty area, which they know isn’t going to change much, so they do relatively little exploration that isn’t close to their specialty area.

Research opinions are my best contribution to the world, and so are where I should focus my altruistic efforts. (They also give my best chance for fame and glory.) So I try to put less weight on socializing reasons for my opinions, and more weight on the exploration reasons. As long as I see little prospect of my research going anywhere near the abortion or gun control topics, I won’t explore there much. Topics diagnostic of left vs. right ideological positions seem especially unlikely to be places where I could add something useful to what everyone else is saying. But I do explore a wide range of topics that seem plausibly related to areas in which I have specialized, or might specialize. I have specialized in far more different areas than have most academics. And I try to keep myself honest by looking for plausible decisions I might make related to all these topics, though that tends to be hard. If we had more prediction markets this could get much easier, but alas we do not.

Of course if you care less about research, and more about socializing, your priorities could easily differ from mine.


Light On Dark Matter

I posted recently on the question of what makes up the “dark matter” intangible assets that today are most of firm assets. Someone pointed me to a 2009 paper offering answers:


[C.I. = ] Computerized information is largely composed of the NIPA series for business investment in computer software. …

[Scientific R&D] is designed to capture innovative activity built on a scientific base of knowledge. … Non-scientific R&D includes the revenues of the non-scientific commercial R&D industry … the costs of developing new motion picture films and other forms of entertainment, investments in new designs, and a crude estimate of the spending for new product development by financial services and insurance firms. …

[Brand equity] includes spending on strategic planning, spending on redesigning or reconfiguring existing products in existing markets, investments to retain or gain market share, and investments in brand names. Expenditures for advertising are a large part of the investments in brand equity, but … we estimated that only about 60 percent of total advertising expenditures were for ads that had long-lasting effects. …

Investment in firm-specific human and structural resources … includes the costs of employer-provided worker training and an estimate of management time devoted to enhancing the productivity of the firm. … business investments in firm-specific human and structural resources through strategic planning, adaptation, reorganization, and employee-skill building. (more; HT Brandon Pizzola)

According to this paper, growth in firm-specific resources is the biggest story, but growth in product development is also important; growth in software comes third.

Added 15Apr: On reflection, this seems to suggest that the main story is our vast increase in product variety. That explains the huge increase in investments in product development and firm-specific resources, relative to more generic development and resources.


Firms Now 5/6 Dark Matter!

Scott Sumner:

We all know that the capital-intensive businesses of yesteryear like GM and US steel are an increasingly small share of the US economy. But until I saw this post by Justin Fox I had no idea how dramatic the transformation had been since 1975:


Wow. I had no idea either. As someone who teaches graduate industrial organization, I can tell you this is HUGE. And I’ve been pondering it for the week since Scott posted the above.

Let me restate the key fact. The S&P 500 comprises five hundred big public firms listed on US exchanges. Imagine that you wanted to create a new firm to compete with one of these big established firms. So you wanted to duplicate that firm’s products, employees, buildings, machines, land, trucks, etc. You’d hire away some key employees and copy its business processes, at least as much as you could see and were legally allowed to copy.

Forty years ago the cost to copy such a firm was about 5/6 of the total stock price of that firm. So 1/6 of that stock price represented the value of things you couldn’t easily copy, like patents, customer goodwill, employee goodwill, regulator favoritism, and hard to see features of company methods and culture. Today it costs only 1/6 of the stock price to copy all a firm’s visible items and features that you can legally copy. So today the other 5/6 of the stock price represents the value of all those things you can’t copy.

So in forty years we’ve gone from a world where it was easy to see most of what made the biggest public firms valuable, to a world where most of that value is invisible. From 1/6 dark matter to 5/6 dark matter. What could possibly have changed so much in less than four decades? Some possibilities:

Error – Anytime you focus on the most surprising number you’ve seen in a long time, you gotta wonder if you’ve selected for an error. Maybe they’ve really screwed up this calculation.

Selection – Maybe big firms used to own factories, trucks etc., but now they hire smaller and foreign firms that own those things. So if we looked at all the firms we’d see a much smaller change in intangibles. One check: over half of Wilshire 5000 firm value is also intangible.

Methods – Maybe firms previously used simple generic methods that were easy for outsiders to copy, but today firms are full of specialized methods and culture that outsiders can’t copy because insiders don’t even see or understand them very well. Maybe, but forty years ago firm methods sure seemed plenty varied and complex.

Innovation – Maybe firms are today far more innovative, with products and services that embody more special local insights, and that change faster, preventing others from profiting by copying. But this should increase growth rates, which we don’t see. And product cycles don’t seem to be faster. Total US R&D spending hasn’t changed much as a GDP fraction, though private spending is up by less than a factor of two, and public spending is down.

Patents – Maybe innovation isn’t up, but patent law now favors patent holders more, helping incumbents to better keep out competitors. Patents granted per year in US have risen from 77K in 1975 to 326K in 2014. But patent law isn’t obviously so much more favorable. Some even say it has weakened a lot in the last fifteen years.

Regulation – Maybe regulation favoring incumbents is far stronger today. But 1975 wasn’t exactly a low-regulation nirvana. Could regulation really have changed so much?

Employees – Maybe employees used to jump easily from firm to firm, but are now stuck at firms because of health benefits, etc. So firms gain from being able to pay stuck employees due to less competition for them. But in fact average and median employee tenure is down since 1975.

Advertising – Maybe more ads have created more customer loyalty. But ad spending hasn’t changed much as a fraction of GDP. Could ads really be that much more effective? And if they were, wouldn’t firms be spending more on them?

Brands – Maybe when we are richer we care more about the identity that products project, and so are willing to pay more for brands with favorable images. And maybe it takes a long time to make a new favorable brand image. But does it really take that long? And brand loyalty seems to actually be down.

Monopoly – Maybe product variety has increased so much that firm products are worse substitutes, giving firms more market power. But I’m not aware that any standard measures of market concentration (such as HHI) have increased a lot over this period.
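For reference, the HHI mentioned above is just the sum of squared market shares. A toy computation, with made-up market shares, shows how concentration moves the index:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares,
    with shares in percent (0-100). Ranges up to 10,000 (pure monopoly)."""
    return sum(s * s for s in shares)

# Ten equal firms vs. one dominant firm plus a fringe (illustrative numbers):
print(hhi([10] * 10))        # 1000: unconcentrated
print(hhi([50] + [5] * 10))  # 2750: highly concentrated
```

If market power were the story, index values like these should have drifted upward across industries over the period, which is the movement I don’t see in standard measures.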

Alas, I don’t see a clear answer here. The effect that we are trying to explain is so big that we’ll need a huge cause to drive it. Yes it might have several causes, but each will then have to be big. So something really big is going on. And whatever it is, it is big enough to drive many other trends that people have been puzzling over.

Added 5p: This graph gives the figure for every year from ’73 to ’07.

Added 8p: This post shows debt/equity of S&P500 firms increasing from ~28% to ~42% from ’75 to ’15. This can explain only a small part of the increase in intangible assets. Adding debt to tangibles in the numerator and denominator gives intangibles going from 13% in ’75 to 59% in ’15.
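That debt adjustment is simple arithmetic; a small sketch, using the rounded figures above and treating debt-financed assets as tangible, reproduces the 13% and 59% numbers:

```python
def intangible_share(tangible_frac_of_equity, debt_to_equity=0.0):
    """Intangibles as a share of total firm value (equity + debt),
    counting debt-financed assets as tangible. Normalize equity to 1."""
    equity = 1.0
    tangible = tangible_frac_of_equity * equity
    debt = debt_to_equity * equity
    return 1 - (tangible + debt) / (equity + debt)

# Equity-only figures: tangibles were ~5/6 of equity value in '75, ~1/6 in '15.
print(round(intangible_share(5 / 6, 0.28), 2))  # 0.13 for '75
print(round(intangible_share(1 / 6, 0.42), 2))  # 0.59 for '15
```

With debt set to zero the function returns the original 1/6 and 5/6 figures, confirming the adjustment shrinks but does not remove the puzzle.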

Added 8a 6Apr: Tyler Cowen emphasizes that accountants underestimate the market value of ordinary capital like equipment, but he neither gives (nor points to) an estimate of the typical size of that effect.


Why Not Sell Cities?

Economists don’t like seeing economic inefficiency, and there’s a lot of it out there to bother us. But some of the very worst we see is in cities; there are many incredible inefficiencies in city land use and in supporting utilities. Which of course makes economists wonder: how could we do better?

Here is one idea that should seem obvious to most economists, but even so I can’t find much discussion of it. So let me try to think it through. What if we auctioned off cities, whole?

Specifically, imagine that we sell all the land and immobile property in an urban region, including all the municipal property, plus all the rights to make urban governance choices. We sell this to a single buyer, who might of course be a consortium. The winning bid would have to be higher than the prior sum of all regional property values, plus a premium of, say, 50%. The money would be paid to all the prior property owners in proportion to prior property values. (“Prior” should be well before the auction was announced.)

The winning buyer would control all property and governance in this region for a specific time period, say twenty years, after which they’d have to divide the region into at least a thousand property units and auction them all off again individually. Urban governance would revert to its previous system, except that there’d be a single up-or-down vote on one proposal for a new governance regime offered by this buyer, using previous rules about who can vote in such things.
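The auction's settlement rule can be sketched in a few lines. Everything here, including the function name, the premium parameter, and the toy numbers, is illustrative rather than a spec:

```python
def settle_city_auction(bids, prior_values, premium=0.5):
    """Sell a whole city to the high bidder, but only if the bid beats
    the sum of prior property values by the required premium; proceeds
    go to prior owners pro rata to their prior property values."""
    total_prior = sum(prior_values.values())
    reserve = total_prior * (1 + premium)
    bidder, high = max(bids.items(), key=lambda kv: kv[1])
    if high < reserve:
        return None  # reserve not met: no sale, nothing changes
    payouts = {owner: high * v / total_prior for owner, v in prior_values.items()}
    return bidder, payouts

# Two owners with $100 and $300 of prior property; the reserve is $600.
print(settle_city_auction({"X": 700, "Y": 500}, {"a": 100, "b": 300}))
# → ('X', {'a': 175.0, 'b': 525.0})
```

Note that a failed auction returns nothing and changes nothing, which is why several of the failure modes below are fairly cheap.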

The idea here is of course to “internalize the externalities”, at least for a while. This single buyer would encompass most of the varying conflicting interests that usually cause existing inefficiencies. And they’d have the power to resolve these conflicts decisively.

OK, now let’s ask: what could go wrong? Well, first, maybe no bidder could actually collect enough money to make a big enough bid. Or maybe the city inefficiencies aren’t big enough to produce the 50% added value requirement. Or maybe twenty years isn’t long enough to fix the deep problems. Or maybe the plan leaks out too early and pushes up “prior” property values. In these cases, there’d be no change, so not much would be lost.

Another thing that could go wrong would be that larger units of government, like states or nations, might try to tax or regulate this single buyer so much as to take away most of their gains from this process. In expectation of this outcome, no one would bid enough for the city. And again there’d be no change, so little would be lost. So we should try to set this up to avoid such taxation/regulation, but knowing that the downside isn’t terrible if we fail.

Finally, the new city owner might price-discriminate against residents who are especially attached to the city, and so are especially unwilling to leave. Like an old couple whose children all live nearby. Or a big firm with an expensive plant located there. If the new owner cranks up their rent high, these folks might lose on net, even if they are paid a 50% bonus on property values. Of course one might try to set rules to limit price-discrimination, though that might create the over-regulation scenario above. Also, if selling off cities whole became a regular thing, then people may learn to not get too attached to any one city.

I don’t see any of these problems as overwhelming, so I’d endorse trying to do this. But I don’t actually expect many places to try it, because I think most voters whose support would be needed would see their status as threatened. They’d be offended by the very idea of a single powerful actor having strong control over their lives, even if that actor had to pay dearly for the right, and even if they end up better off as a result. So I’d guess it is pride that most goeth before our city falls.

As I’ve mentioned before, people tend to love cities even as they hate firms, mainly because firms tend to be for-profit, while cities tend to be democratic. People now mostly accept for-profit firms because the non-profit ones don’t offer attractive jobs or products. Similarly, I’d predict that if there were many for-profit cities most people would be okay with them, as they’d be reluctant to move to worse-run non-profit cities. Also, if almost all firms were non-profit, people might be reluctant to rely on for-profit firms due to their bad public image. Multiple equilibria are possible here, and we may not be in the best one.

Added 9p: Many commenters seem to fear private city owners evicting undesirable people from the city, in contrast to democratically controlled cities which they see as fountains of altruism toward such people. But see here, here, here, or consider that democracies regularly vote to exclude immigrants who would in fact benefit them materially.

Added 9a:

At the state and local level, government is indeed engaged in redistribution — but it’s redistribution from the poor and the middle class to the wealthy. (more)
