Tag Archives: Innovation

Conditional Harberger Tax Games

Baron Georges-Eugène Haussmann … transformed Paris with dazzling avenues, parks and other lasting renovations between 1853 and 1870. … Haussmann… resolved early on to pay generous compensation to [Paris] property owners, and he did. … [He] hoped to repay the larger loans he obtained from the private sector by capturing some of the increased value of properties lining along the roads he built. … [He] did confiscate properties on both sides of his new thoroughfares, and he had their edifices rebuilt. … Council of State … forced him to return these beautifully renovated properties to their original owners, who thus captured all of their increased value. (more)

In my last post I described abstractly how a system of conditional Harberger taxes (CHT) could help deal with zoning and other key city land use decisions. In this post, let me say a bit more about the behaviors I think we’d actually see in such a system. (I’m only considering here such taxes for land and property tied to land.)

First, while many property owners would personally manage their official declared property values, many others would have them set by an agent or an app. Agents and apps may often come packaged with insurance against various things that can go wrong, such as losing one’s property.

Second, yes, under CHT, sometimes people would (be paid well to) lose their property. This would almost always be because someone else credibly demonstrated that they expect to gain more value from it. Even if owners strategically or mistakenly declare values too low, the feature I suggested of being able to buy back a property by paying a 1% premium would ensure that pricing errors don’t cause property misallocations. The highest value uses of land can change, and one of the big positive features of this system is that it makes the usage changes that should then result easier to achieve. In my mind that’s a feature, not a bug. Yes, owners could buy insurance against the risk of losing a property, though that needn’t result in getting their property back.
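To make the buy-back rule concrete, here is a minimal sketch of how a takeover attempt might resolve under it. The function name and structure are hypothetical; only the 1% premium figure comes from the post.

```python
def attempt_takeover(declared_value: float, buyer_value: float,
                     owner_true_value: float, premium: float = 0.01) -> str:
    """Sketch of the conditional Harberger buy-back rule.

    A buyer may purchase at the owner's declared value; the original
    owner may then reclaim the property by paying that value plus a
    small premium. So an under-declaration only transfers the property
    when the buyer truly values it more than the owner does.
    """
    if buyer_value <= declared_value:
        return "no sale"          # buying at the declared value gains nothing
    buyback_price = declared_value * (1 + premium)
    if owner_true_value >= buyback_price:
        return "owner buys back"  # pricing error corrected, no misallocation
    return "property transfers"   # buyer values it more; transfer is efficient
```

For example, an owner who truly values a property at 120 but declares 100 keeps it (by paying 101) when challenged by a buyer who values it at 110; only a buyer valuing it above the owner's true value plus the premium ends up with it.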

In the ancient world, it was common for people to keep the same marriage, home, neighbors, job, family, and religion for their entire life. In the modern world, in contrast, we expect many big changes during our lifetimes. While we can mostly count on family and religion remaining constant, we must accept bigger chances of change to marriages, neighbors, and jobs. Even our software environments change in ways we can’t control when new versions are issued. Renters today accept big risks of home changes, and even home “owners” face big risks due to job and financial risks. All of which seems normal and reasonable. Yes, a few people seem quite obsessed with wanting absolute guarantees on preservation of old property usage, but I can’t sympathize much with such fetishes for inefficient stasis. Continue reading "Conditional Harberger Tax Games" »


Distant Future Tradeoffs

Over the last day on Twitter, I ran three similar polls. One asked:

Software design today faces many tradeoffs, e.g., getting more X costs less Y, or vice versa. By comparison, will distant future tradeoffs be mostly same ones, about as many but very different ones, far fewer (so usually all good features X,Y are feasible together), or far more?

Four answers were possible: mostly same tradeoffs, as many but mostly new, far fewer tradeoffs, and far more tradeoffs. The other two polls replaced “Software” with “Physical Device” and “Social Institution.”

I now see these four answers as picking out four future scenarios. A world with fewer tradeoffs is Utopian, where you can get more of everything you want without having to give up other things. In contrast, a world with many more tradeoffs is more Complex. A world where most of the tradeoffs are like those today is Familiar. And a world where the current tradeoffs are replaced by new ones is Radical. Using these terms, here are the resulting percentages:

The polls got from 105 to 131 responses each, with an average entry percentage of 25%, so I’m willing to believe differences of 10% or more. The most obvious results here are that only a minority foresee a familiar future in any area, and answers vary greatly; there is little consensus on which scenarios are more likely.

Beyond that, the strongest pattern I see is that respondents foresee more complexity, relative to a utopian lack of tradeoffs, at higher levels of organization. Physical devices are the most utopian, social institutions are the most complex, and software sits in the middle. The other possible result I see is that respondents foresee a less familiar social future. 

I also asked:

Which shapes the world more in the long run: the search for arrangements allowing better compromises regarding many complex tradeoffs, or fights between conflicting groups/values/perspectives?

In response, 43% said search for tradeoffs while 30% said value conflicts, and 27% said hard to tell. So these people see tradeoffs as mattering a lot.  

These respondents seriously disagree with science fiction, which usually describes relatively familiar social worlds in visibly changed physical contexts (and can’t be bothered to have an opinion on software). They instead say that the social world will change the most, becoming the most complex and/or radical. Oh brave new world, that has such institutions in it!


Most Progress Not In Morals

Everyone without exception believes his own native customs, and the religion he was brought up in, to be the best. Herodotus, 440 BC

Over the eons, we humans have greatly increased our transportation abilities. Long ago, we mostly walked everywhere. Then over time, we accumulated more ways to move ourselves and goods faster, cheaper, and more reliably, from boats to horses to gondolas to spaceships. Today, for most points A and B, our total cost to move from A to B is orders of magnitude cheaper than it would be via walking.

Even so, walking remains an important part of our transport portfolio. While we are able to move people who can’t walk, such as via wheelchairs, that is expensive and limiting. Yet while walking still matters, improvements in walking have contributed little to our long term gains in transport abilities. Most gains came instead from other transport methods. Most walking gains even came from other areas. For example, we can now walk better due to better boots, lighting, route planners, and paved walkways. Our ability to walk without such aids has improved much less.

As with transport, so with many other areas of life. Our ancient human abilities still matter, but most gains over time have come from other improvements. This applies to both physical and social tech. That is, to our space-time arrangements of physical materials and objects, and also to our arrangements of human actions, info and incentives.

Social scientists often use the term “institutions” broadly to denote relatively stable components of social arrangements of actions, info and incentives. Some of the earliest human institutions were language and social norms. We have modestly improved human languages, such as via expanded syntax forms and vocabulary. And over history humans have experimented with a great range of social norms, and also with new ways to enforce them, such as oaths, law, and CCTV.

We still rely greatly on social norms to manage small families, work groups and friend groups. As with walking, while we could probably manage such groups in other ways, doing so would be expensive and limiting. So social norms still matter. But as with our walking, relatively little of our gains over time have come from improving our ancient institution of social norms.

When humans moved to new environments, such as marshes or arctic tundra, they had to adapt their generic walking methods to these new contexts. No doubt learning and innovation were involved in that process. Similarly, we no doubt continue to evolve our social norms and their methods of enforcement to deal with changing social contexts. Even so, social norm innovation seems a small part of total institutional innovation over the eons.

With walking, we seem well aware that walking innovation has only been a small part of total transport innovation. But we humans were built to at least pretend to care a lot about social norms. We consider opinions on and adherence to norms, and the shared values they support, to be central to saying who are “good” or “bad” people, and whom we see as among “our people”. So we make norms central to our political fights. And we put great weight on norms when evaluating which societies are good, and whether the world has gotten better over time.

Thus each society tends to see its own origin, and the changes which led to its current norms, as enormously important and positive historical events. But if we stand outside any one society and consider the overall sweep of history, we can’t automatically count these as big contributions to long term innovation. After all, the next society is likely to change norms yet again. Most innovation is in accumulating improvements in all those other social institutions.

Now it is true that we have seen some consistent trends in attitudes and norms over the last few centuries. But wealth has also been rising, and having human attitudes be naturally conditional on wealth levels seems a much better explanation of this fact than the theory that after a million years of human evolution we suddenly learned how to learn about norms. Yes, it is good to adapt norms to changing conditions, but as conditions will likely change yet again, we can’t count that as long term innovation.

In sum: most innovation comes in additions to basic human capacities, not in tweaks to those original capacities. Most transport innovation is not in improved ways to walk, and most social institution innovation is not in better social norms. Even if each society would like to tell itself otherwise. To help the future the most, search more for better institutions, less for better norms.


Open Policy Evaluation

Hypocrisy is a tribute vice pays to virtue. La Rochefoucauld, Maximes

In some areas of life, you need connections to do anything. Invitations to parties, jobs, housing, purchases, business deals, etc. are all gained via private personal connections. In other areas of life, in contrast, invitations are made open to everyone. Posted for all to see are openings for jobs, housing, products to buy, business investment, calls for proposals for contracts and grants, etc. The connection-only world is often suspected of nepotism and corruption, and “reforms” often take the form of requiring openings to be posted so that anyone can apply.

In academia, we post openings for jobs, school attendance, conference attendance, journal publications, and grant applications for all to see, even though most people know that you’ll actually need personal connections to have much of a chance at many of these things. People seem to want to appear willing to consider an application from anyone. They allow some invitation-only conferences, talk series, etc., but usually insist that such things are incidental, not central to their profession.

This preference for at least an appearance of openness suggests a general strategy of reform: find things that are now only gained via personal connections, and create an alternate open process whereby anyone can officially apply. In this post, I apply this idea to: policy proposals.

Imagine that you have a proposal for a better policy, to be used by governments, businesses, or other organizations. How can you get people to listen to your proposal, and perhaps endorse it or apply it? You might try to use personal connections to get an audience with someone at a government agency, political interest group, think tank, foundation, or business. But that’s stuck in the private connection world. You might wait for an agency or foundation to put out an open call for proposals, seeking a solution to exactly the problem your proposal solves. But for any one proposal idea, you might wait a very long time.

You might submit an article to an open conference or journal, or submit a book to a publisher. But if they accept your submission, that mostly won’t be an endorsement of whether your proposal is good policy by some metric. Publishers are mostly looking at other criteria, such as whether you have an impressive study using difficult methods, or whether you have a book thesis and writing style that will attract many readers.

So I propose that we consider creating an open process for submitting policy proposals to be evaluated, in the hope of gaining some level of endorsement and perhaps further action. This process won’t judge your submission on wit, popularity, impressiveness, or analytical rigor. The key question is: is this promising as a policy proposal to actually adopt, for the purpose of making a better world? If they endorse your proposal, then other actors can use that as a quality signal regarding what policy proposals to consider.

Of course how you judge a policy proposal depends on your values. So there might be different open policy evaluators (OPE) based on different sets of values. Each OPE needs to have some consistent standards by which they evaluate proposals. For example, economists might ask whether a proposal improves economic efficiency, libertarians might ask if it increases liberty, and progressives might ask whether it reduces inequality.

Should the evaluation of a proposal consider whether there’s a snowball’s chance in hell of a proposal actually being adopted, or even officially considered? That is, whether it is in the “Overton window”? Should they consider whether you have so far gained sufficient celebrity endorsements to make people pay attention to your proposal? Well, those are choices of evaluation criteria. I’m personally more interested in evaluating proposals regardless of who has supported them, and regardless of their near-term political feasibility. Like how academics say we do today with journal article submissions. But that’s just me.

An OPE seems valid and useful as long as its actual choices of which policies it endorses match its declared evaluation criteria. Then it can serve as a useful filter, between people with innovative policy ideas and policy customers seeking useful ideas to consider and perhaps implement. If you can find OPEs who share your evaluation criteria, you can consider the policies they endorse. And of course if we ever end up having many of them, you could focus first on the most prestigious ones.

Ideally an OPE would have funding from some source to pay for its evaluations. But I could also imagine applicants having to pay a fee to have their proposals considered.


News Accuracy Bonds

Fake news is a type of yellow journalism or propaganda that consists of deliberate misinformation or hoaxes spread via traditional print and broadcast news media or online social media. This false information is mainly distributed by social media, but is periodically circulated through mainstream media. Fake news is written and published with the intent to mislead in order to damage an agency, entity, or person, and/or gain financially or politically, often using sensationalist, dishonest, or outright fabricated headlines to increase readership, online sharing, and Internet click revenue. (more)

One problem with news is that sometimes readers who want truth instead read (or watch) and believe news that is provably false. That is, a news article may contain claims that others are capable of proving wrong to a sufficiently expert and attentive neutral judge, and some readers may be fooled against their wishes into believing such news.

Yes, news can have other problems. For example, there can be readers who don’t care much about truth, and who promote false news and its apparent implications. Or readers who do care about truth may be persuaded by writing whose mistakes are too abstract or subtle to prove wrong now to a judge. I’ve suggested prediction markets as a partial solution to this; such markets could promote accurate consensus estimates on many topics which are subtle today, but which will eventually become sufficiently clear.

In this post, however, I want to describe what seems to me the simple obvious solution to the more basic problem of truth-seekers believing provably-false news: bonds. Those who publish or credential an article could offer bonds payable to anyone who shows their article to be false. The larger the bond, the higher their declared confidence in their article. With standard icons for standard categories of such bonds, readers could easily note the confidence associated with each news article, and choose their reading and skepticism accordingly.

That’s the basic idea; the rest of this post will try to work out the details.

While articles backed by larger bonds should be more accurate on average, the correlation would not be exact. Statistical models built on the dataset of bonded articles, some of which eventually pay bonds, could give useful rough estimates of accuracy. To get more precise estimates of the chance that an article will be shown to be in error, one could create prediction markets on the chance that an individual article will pay a bond, with initial prices set at statistical model estimates.

Of course the same article should have a higher chance of paying a bond when its bond amount is larger. So even better estimates of article accuracy would come from prediction markets on the chance of paying a bond, conditional on a large bond amount being randomly set for that article, for example a week after it is published. Such conditional estimates could be informative even if only one article in a thousand is chosen for such a very large bond. However, since there are now legal barriers to introducing prediction markets, and none to introducing simple bonds, I return to focusing on simple bonds.

Independent judging organizations would be needed to evaluate claims of error. A limited set of such judging organizations might be certified to qualify an article for any given news bond icon. Someone who claimed that a bonded article was in error would have to submit their evidence, and be paid the bond only after a valid judging organization endorsed their claim.

Bond amounts should be held in escrow or guaranteed in some other way. News firms could limit their risk by buying insurance, or by limiting how many bonds they’d pay on all their articles in a given time period. Say no more than two bonds paid on each day’s news. Another option is to have the bond amount offered be a function of the (posted) number of readers of an article.

As a news article isn’t all true or false, one could distinguish degrees of error. A simple approach could go sentence by sentence. For example, a bond might pay according to some function of the number of sentences (or maybe sentence clauses) in an article shown to be false. Alternatively, sentence level errors might be combined to produce categories of overall article error, with bonds paying different amounts to those who prove each different category. One might excuse editorial sentences that do not intend to make verifiable newsy claims, and distinguish background claims from claims central to the original news of the article. One could also distinguish degrees of error, and pay proportional to that degree. For example, a quote that is completely made up might be rated as completely false, while a quote that is modified in a way that leaves the meaning mostly the same might count as a small fractional error.
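As a sketch of the sentence-level scheme just described, a bond might pay in proportion to the average degree of error across an article’s sentences, with fractional scores for partial errors. All names here are hypothetical; the proportional-payout rule is one of several options the post mentions.

```python
def bond_payout(bond: float, sentence_errors: list[float]) -> float:
    """Pay a fraction of the bond proportional to average sentence error.

    Each entry in sentence_errors is a judged error score in [0, 1]:
    1.0 for a wholly fabricated sentence, smaller fractions for
    modifications that leave the meaning mostly intact, and 0.0 for
    accurate or purely editorial sentences.
    """
    if not sentence_errors:
        return 0.0
    return bond * sum(sentence_errors) / len(sentence_errors)
```

So a $1000 bond on a four-sentence article judged to have one fabricated quote (1.0), one slightly altered quote (0.5), and two accurate sentences would pay $375.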

To the extent that it is possible to verify partisan slants across large sets of articles, for example in how people or organizations are labeled, publishers might also offer bonds payable to those who can show that a publisher has taken a consistent partisan slant.

A subtle problem is: who pays the cost to judge a claim? On the one hand, judges can’t just offer to evaluate all claims presented to them for free. But on the other hand, we don’t want to let big judging fees stop people from claiming errors when errors exist. To make a reasonable tradeoff, I suggest a system wherein claim submissions include a fee to pay for judging, a fee that is refunded double if that claim is verified.

That is, each bond specifies a maximum amount it will pay to judge that bond, and which judging organizations it will accept.  Each judging organization specifies a max cost to judge claims of various types. A bond is void if no acceptable judge’s max is below that bond’s max. Each submission asking to be paid a bond then submits this max judging fee. If the judges don’t spend all of their max judging fee evaluating this case, the remainder is refunded to the submission. It is the amount of the fee that the judges actually spend that will be refunded double if the claim is supported. A public dataset of past bonds and their actual judging fees could help everyone to estimate future fees.
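A minimal sketch of this fee rule, from the claimant’s point of view (function and variable names are hypothetical): the claimant deposits the max judging fee up front, unspent fee is always refunded, and the spent portion is refunded double, plus the bond, only if the claim is upheld.

```python
def settle_claim(max_judging_fee: float, fee_spent: float,
                 bond_amount: float, claim_upheld: bool) -> float:
    """Net payout to a claimant under the proposed judging-fee rule."""
    assert 0.0 <= fee_spent <= max_judging_fee
    refund = max_judging_fee - fee_spent   # unspent fee always returned
    if claim_upheld:
        refund += 2 * fee_spent            # spent fee refunded double
        refund += bond_amount              # plus the bond itself
    return refund - max_judging_fee        # net of the initial deposit
```

For example, with a $10 deposit, $6 actually spent on judging, and a $100 bond, an upheld claim nets the claimant $106, while a rejected claim costs only the $6 the judges actually spent. This keeps frivolous claims costly while rewarding valid ones.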

Those are the main subtleties that I’ve considered. While there are ways to set up such a system better or worse, the basic idea seems robust: news publishers who post bonds payable if their news is shown to be wrong thereby credential their news as more accurate. This can allow readers to more easily avoid believing provably-false news.

A system like the one I’ve just proposed has long been feasible; why hasn’t it been adopted already? One possible theory is that publishers don’t offer bonds because that would remind readers of typical high error rates:

The largest accuracy study of U.S. papers was published in 2007 and found one of the highest error rates on record — just over 59% of articles contained some type of error, according to sources. Charnley’s first study [70 years ago] found a rate of roughly 50%. (more)

If bonds paid mostly for small errors, then bond amounts per error would have to be very small, and calling reader attention to a bond system would mostly remind them of high error rates, and discourage them from consuming news.

However, it seems to me that it should be possible to aggregate individual article errors into measures of overall article error, and to focus bond payouts on the most mistaken “fake news” type articles. That is, news error bonds should mostly pay out on articles that are wrong overall, or at least quite misleading regarding their core claims. Yes, a bit more judgment might be required to set up a system that can do this. But it seems to me that doing so is well within our capabilities.

A second possible theory to explain the lack of such a system today is the usual idea that innovation is hard and takes time. Maybe no one ever tried this with sufficient effort, persistence, or coordination across news firms. So maybe it will finally take some folks who try this hard, long, and wide enough to make it work. Maybe, and I’m willing to work with innovation attempts based on this second theory.

But we should also keep a third theory in mind: that most news consumers just don’t care much for accuracy. As we discuss in our book The Elephant in the Brain, the main function of news in our lives may be to offer “topics in fashion” that we each can all riff on in our local conversations, to show off our mental backpacks of tools and resources. For that purpose, it doesn’t much matter how accurate is such news. In fact, it might be easier to show off with more fake news in the mix, as we can then show off by commenting on which news is fake. In this case, news bonds would be another example of an innovation designed to give us more of what we say we want, which is not adopted because we at some level know that we have hidden motives and actually want something else.


Aaronson on Caplan

Scott Aaronson just reviewed Caplan’s Case Against Education. He seems to accept most of Caplan’s specific analysis and claims:

It’s true that a large fraction of what passes for education doesn’t deserve the name—even if, as a practical matter, it’s far from obvious how to cut that fraction without also destroying what’s precious and irreplaceable. He’s right that there’s no sense in badgering weak students to go to college … we should support vocational education … Nor am I scandalized by the thought of teenagers apprenticing themselves to craftspeople. … From adolescence onward, I think that enormous deference ought to be given to students’ choices.

And yet he can’t endorse Caplan’s recommendation:

I’m not sure I want to live in the world of Caplan’s “complete separation of school and state.” … There’s not a single advanced country on earth that’s done what he advocates; the trend has everywhere been in the opposite direction. … Show me a case where this has worked. … In any future I can plausibly imagine where the government actually axes education, the savings go to things like enriching the leaders’ cronies and launching vanity wars.

You gotta distinguish Caplan’s favorite option, which is extreme, from the obvious cautious advice based on his book. Maybe huge school cuts haven’t been tried, but small cuts are being tried all the time, and the data Caplan points to suggests that we suffer little harm from those. It’s overwhelmingly obvious that most such small cuts are not mainly spent “enriching the leaders’ cronies and launching vanity wars.” They are put toward all other government spending, and rebated to taxpayers. So the obvious advice here is to try somewhat bigger cuts, and slowly increase them as long as things seem to be going okay.

Aaronson is also reluctant to cut school funding for fear of destroying innovation:

OK, but if professors are at least good at producing more people like themselves, able to teach and do research, isn’t that something, a base we can build on that isn’t all about signalling? And more pointedly: if this system is how the basic research enterprise perpetuates itself, then shouldn’t we be really damned careful with it, lest we slaughter the golden goose? …

It’s easy to look at most basic research, and say: this will probably never be useful for anything. But then if you survey the inventions that did change the world over the past century—the transistor, the laser, the Web, Google—you find that almost none would have happened without what Caplan calls “ivory tower self-indulgence.” What didn’t come directly from universities came from entities (Bell Labs, DARPA, CERN) that wouldn’t have been thinkable without universities, and that themselves were largely freed from short-term market pressures by governments. …

I work in theoretical computer science: … the stuff we use cutting-edge math for might itself be dismissed as “ivory tower self-indulgence.” Except then the cryptographers building the successors to Bitcoin, or the big-data or machine-learning people, turn out to want the stuff we were talking about at conferences 15 years ago. … There’s also math that struck me as boutique scholasticism, until … someone else finally managed to explain … [that its] almost like an ordinary applied engineering question, albeit one from the year 2130 or something.”

Yes of course, where government supports most basic research, most good work is funded by government. But this hardly implies that basic research is crucial, or that enough wouldn’t happen without government support. And as US governments spend roughly 25 times as much on schools as on basic research, we could double basic research funding while cutting school funding by only 5%, and have plenty left over. And even today 56% of U.S. basic research is funded outside of government.
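A quick arithmetic check of that claim, using the rough 25:1 spending ratio from the text (units are arbitrary; only the ratio matters):

```python
# If schools get ~25 units for every 1 unit basic research gets,
# then a 5% school cut frees 1.25 units, more than the 1 extra
# unit needed to double basic research funding.
school_spending = 25.0
research_spending = 1.0
cut = 0.05 * school_spending           # 5% cut to school budgets = 1.25
extra_needed = research_spending       # doubling research costs 1 more unit
surplus = cut - extra_needed
print(surplus)                         # 0.25 units left over
```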

More important, my reading of the innovation literature is that high prestige academics tend to vastly exaggerate the economic value of their work. Most economically-relevant innovation is not driven by basic research, and observed variations in basic research funding don’t much predict variations in rates of innovation. Cuts to government funding would move some basic researchers to private funding, and some to other activities. This wouldn’t hurt economic growth much, and might even help it.


Harnessing Polarization

Human status competition can be wasteful. For example, often many athletes all work hard to win a contest, yet if they had all worked only half as hard, the best one could still have won. Many human societies, however, have found ways to channel status efforts into more useful directions, by awarding high status for types of effort of which there might otherwise be too little. For example, societies have given status to successful peace-makers, explorers, and innovators.

Relative to history and the world, the US today has unusually high levels of political polarization. A great deal of effort is going into people showing loyalty to their side and dissing the opposing side. Which leads me to wonder: could we harness all this political energy for something more useful?

Traditionally in a two party system, each party competes for the endorsement of marginal undecided voters, and so partisans can be enticed to work to produce better outcomes when their party is in power. But random variation in context makes it harder to see partisan quality from outcomes. And in a hyper partisan world, there aren’t many undecided voters left to impress.

Perhaps we could create more clear and direct contests, where the two political sides could compete to do something good. For example, divide Detroit or Puerto Rico into two dozen regions, give each side the same financial budget, political power, and a random half of the regions to manage. Then let us see which side creates better regions.

Political decision markets might also create more clear and direct contests. It is hard to control for local random factors in making statistical comparisons of polities governed by different sides. But market estimates of polity outcomes conditional on who is elected should correct for most local context, leaving a clearer signal of who is better.

These are just two ideas off the top of my head; who can find more ways that we might harness political polarization energy?

Added 28Sep: Notice that these contests don’t have to actually be fair. They just have to induce high efforts to win them. For that, merely believing that others may see them as fair could be enough.


Cowen On Complacency

A week ago I summarized and critiqued five books wherein Peter Turchin tries to document and explain two key historical cycles: a several century cycle of empires rising and falling, and a fifty year alternating-generations cycle of instability during empire low points. In his latest book, Turchin tentatively tries to apply his theories to predict the U.S. near future.

In his new book The Complacent Class, Tyler Cowen also takes a bigger-than-usual historical perspective, invokes cycles, and predicts the U.S. near future. But instead of applying a theory abstracted from thousands of years of data, Cowen mainly just details many particular trends in the U.S. over the last half century. David Brooks summarizes:

Cowen shows that in sphere after sphere, Americans have become less adventurous and more static.

The book page summarizes:

Our willingness to move, take risks, and adapt to change have produced a dynamic economy. .. [But] Americans today .. are working harder than ever to avoid change. We’re moving residences less, marrying people more like ourselves and choosing our music and our mates based on algorithms. .. This cannot go on forever. We are postponing change,.. but ultimately this will make change, when it comes, harder. .. eventually lead to a major fiscal and budgetary crisis.

In each particular area, Cowen documents specific trends, and he often offers specific local theories that could have led one to expect such trends. For example, he says fewer geographic moves are predicted from fewer job moves, and fewer job moves are predicted by workers being older. But when it comes to the question of why all these particular trends with their particular causes happen to create a consistent overall trend toward complacency, Cowen seems to me coy. Let me discuss three passages where I find that he at least touches on general accounts.


This AI Boom Will Also Bust

Imagine an innovation in pipes. If this innovation were general, something that made all kinds of pipes cheaper to build and maintain, the total benefits could be large, perhaps even comparable to the total amount we spend on pipes today. (Or even much larger.) And if most of the value of pipe use were in many small uses, then that is where most of these economic gains would be found.

In contrast, consider an innovation that only improved the very largest pipes. This innovation might, for example, cost a lot to use per meter of pipe, and so only make sense for the largest pipes. Such an innovation might make for very dramatic demonstrations, with huge vivid pipes, and so get media coverage. But the total economic gains here will probably be smaller; as most of pipe value is found in small pipes, gains to the few biggest pipes can only do so much.
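The contrast between these two kinds of pipe innovation can be made concrete with a toy calculation. All the numbers here are made up purely for illustration; the only assumption that matters is that small pipes collectively hold most of the value.

```python
# Toy comparison: total gains from a general pipe innovation vs. one that
# only works on the largest pipes. All values are assumed for illustration.
small_pipes = [1.0] * 10_000   # many small uses; collectively most value
big_pipes = [100.0] * 10       # a few dramatic, highly visible large uses

improvement = 0.10  # the innovation adds 10% to each pipe value it reaches

gain_general = improvement * (sum(small_pipes) + sum(big_pipes))
gain_big_only = improvement * sum(big_pipes)

print(f"general innovation gain:  {gain_general:,.0f}")
print(f"big-pipe-only gain:       {gain_big_only:,.0f}")
```

With these assumed numbers the general innovation gains eleven times as much, even though each big pipe is individually a hundred times more valuable, because most total value sits in the many small pipes.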

Now consider my most viral tweet so far:

This got almost universal agreement from those who see such issues play out behind the scenes. And by analogy with the pipe innovation case, this fact tells us something about the potential near-term economic impact of recent innovations in Machine Learning. Let me explain.

Most firms have piles of data they aren’t doing much with, and far more data that they could collect at a modest cost. Sometimes they use some of this data to predict a few things of interest. Sometimes this creates substantial business value. Most of this value is achieved, as usual, in the simplest applications, where simple prediction methods are applied to simple small datasets. And the total value achieved is only a small fraction of the world economy, at least as measured by income received by workers and firms who specialize in predicting from data.

Many obstacles limit such applications. For example, the value of better predictions for related decisions may be low, data may be in a form poorly suited to informing predictions, making good use of predictions might require larger reorganizations, and organizations that hold parts of the data may not want to lose control of that data. Available personnel may lack sufficient skills to apply the most effective approaches for data cleaning, merging, analysis, and application.

No doubt many errors are made in choices of when to analyze what data how much and by whom. Sometimes they will do too much prediction, and sometimes too little. When tech changes, orgs will sometimes wait too long to try new tech, and sometimes will not wait long enough for tech to mature. But in ordinary times, when the relevant technologies improve at steady known rates, we have no strong reason to expect these choices to be greatly wrong on average.

In the last few years, new “deep machine learning” prediction methods have become “hot.” In some widely publicized demonstrations, they seem to allow substantially more accurate predictions from data. Since they shine more when data is plentiful, and since they need more skilled personnel, these methods are most promising for the largest prediction problems. Because of this new fashion, at many firms those who don’t understand these issues well are pushing subordinates to seek local applications of these new methods. Those subordinates comply, at least in appearance, in part to help themselves and their organizations appear more skilled.

One result of this new fashion is that a few big new applications are being explored, in places with enough data and potential prediction value to make them decent candidates. But another result is the one described in my tweet above: fashion-induced overuse of more expensive new methods on smaller problems to which they are poorly matched. We should expect this second result to produce a net loss on average. The size of this loss could be enough to outweigh all the gains from the few big new applications; after all, most value is usually achieved in many small problems.

But I don’t want to draw a conclusion here about the net gain or loss. I instead want to consider the potential for this new prediction tech to have an overwhelming impact on the world economy. Some see this new fashion as just the first swell of a tsunami that will soon swallow the world. For example, in 2013 Frey and Osborne famously estimated:

About 47 percent of total US employment is at risk .. to computerisation .. perhaps over the next decade or two.

If new prediction techs induced a change that big, they would be creating a value that is a substantial fraction of the world economy, and so consume a similar fraction of world income. If so, the prediction industry would in a short time become vastly larger than it is today. If today’s fashion were the start of that vast growth, we should not only see an increase in prediction activity, we should also see an awe-inspiring rate of success within that activity. The application of these new methods should be enabling huge new revenue streams, across a very wide range of possible application areas. (Added: And the prospect of that should be increasing stock values in this area far more than we’ve seen.)

But I instead hear that within the areas where most prediction value lies, most attempts to apply this new tech actually produce less net value than would be achieved with old tech. I hear that prediction analysis tech is usually not the most important part of the process, and that a recent obsession with showing proficiency in this new analysis tech has led to neglect of the more important and basic issues of thinking carefully about what you might want to predict with what data, and then carefully cleaning and merging your data into a more useful form.

Yes, there must be exceptions, and some of those may be big. So a few big applications may enable big value. And self-driving cars seem a plausible candidate, a case where prediction is ready to give large value, high enough to justify using the most advanced prediction tech, and where lots of the right sort of data is available. But even if self-driving vehicles displace most drivers within a few decades, that rate of job automation wouldn’t be out of the range of our historical record of job automation. So it wouldn’t show that “this time is different.” To be clearly out of that range, we’d need another ten jobs that big also displaced in the same period. And even that isn’t enough to automate half of all jobs in two decades.
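The scale argument above is a back-of-envelope calculation. The figures below are rough assumptions, not official statistics: total US employment of about 150 million, and about 4 million driving jobs of all kinds.

```python
# Back-of-envelope check (all figures are rough assumptions): even fully
# automating driving over two decades stays within historical automation
# rates, and falls far short of the 47%-in-two-decades claim.
employment = 150e6     # total US jobs, assumed
driving_jobs = 4e6     # drivers of all kinds, assumed

share_per_decade = driving_jobs / employment / 2  # spread over two decades
print(f"~{share_per_decade:.1%} of jobs automated per decade from driving")

# How many driving-sized job categories would have to be displaced in the
# same window to reach the Frey-Osborne 47% figure:
needed = 0.47 * employment / driving_jobs
print(f"~{needed:.0f} job categories as big as driving")
```

Under these assumptions, driving alone contributes only about one percent of jobs per decade, and even ten more driving-sized categories displaced in the same period would still fall short of half of all jobs.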

The bottom line here is that while some see this new prediction tech as like a new pipe tech that could improve all pipes, no matter their size, it is actually more like a tech only useful on very large pipes. Just as it would be a waste to force a pipe tech only useful for big pipes onto all pipes, it can be a waste to push advanced prediction tech onto typical prediction tasks. And the fact that this new tech is mainly only useful on rare big problems suggests that its total impact will be limited. It just isn’t the sort of thing that can remake the world economy in two decades. To the extent that the current boom is based on such grand hopes, this boom must soon bust.


Trump, Political Innovator

People are complicated. Not only can each voter be described by a very high dimensional space of characteristics, the space of possible sets of voters is even larger. Because of this, coalition politics is intrinsically complex, making innovation possible and relevant.

That is, at any one time the existing political actors in some area use an existing set of identified political coalitions, and matching issues that animate them. However, these existing groups are but a tiny part of the vast space of possible groups and coalitions. And even if one had exhaustively searched the entire space and found the very best options, over time those would become stale, making new better options possible.

As usual in innovation, each actor can prefer to free-ride on the efforts of others, and wait to make use of new coalitions that others have worked to discover. But some political actors will more explore new possible coalitions and issues. Most will probably try for a resurgence of old combinations that worked better in the past than they have recently. But some will try out more truly new combinations.

We expect those who innovate politically to differ in predictable ways. They will tend to be outsiders looking for a way in, and their personal preferences will less well match existing standard positions. Because innovators must search the space of possibilities, their positions and groups will be vaguer and vary more over time, and they will hew less closely to existing rules and taboos on such things. They will more often work their crowds on the fly to explore their reactions, relative to sticking to prepared speeches. Innovators will tend to arise more when power is more up for grabs, with many contenders. Successful innovation tends to be a surprise, and is more likely the longer it has been since a major innovation, or “realignment,” with more underlying social change during that period. When an innovator finds a new coalition to represent, that coalition will be attracted less to this politician’s personal features and more to the fact that someone is offering to represent them.

The next US president, Donald Trump, seems to be a textbook political innovator. During a period when his party was quite up for grabs with many contenders, he worked his crowds, taking a wide range of vague positions that varied over time, and often stepped over taboo lines. In the process, he surprised everyone by discovering a new coalition that others had not tried to represent, a group that likes him more for this representation than his personal features.

Many have expressed great anxiety about Trump’s win, saying that he is bad overall because he induces greater global and domestic uncertainty. In their mind, this includes higher chances of wars, coups, riots, collapse of democracy, and so on. But overall these seem to be generic consequences of political innovation. Innovation in general is disruptive and costly in the short run, but can aid adaptation in the long run.

So you can dislike Trump for two very different reasons. First, you can dislike innovation on the other side of the political spectrum, as you see it coming at the expense of your side. Or you can dislike political innovation in general. But if innovation is the process of adapting to changing conditions, it must be mostly a question of when, not if. And less frequent innovations probably result in bigger changes, which are probably more disruptive overall.

So what you should really be asking is: what were the obstacles to smaller past innovations in Trump’s new direction? And how can we reduce such obstacles?
