Monthly Archives: December 2016

Dream Themes

The following are the twenty most frequent dream themes recalled by 1181 Canadian and 1186 Hong Kong college freshmen (most frequent first):


Note that these vary greatly in realism. Some are common events, and some are rare events that were important for our ancestors. Some are about events that never actually happen: flying, being a child again, a person now dead being alive, being in a story. Many of these are tied to rare extremes, especially negative extremes. The themes of arriving too late and failing exams are strikingly modern, and suggest that we industrial folks are often quite traumatized by our era’s event timing and school exam requirements.


Trade Engagement?

First, let me invite readers, especially longtime/frequent readers, to suggest topics for me to blog on. I try to pick topics that are important, neglected, and where I can find something original and insightful to say. But I also like to please readers, and maybe I’m forgetting/missing topics that you could point out.

Second, many of my intellectual projects remain limited by a lack of engagement. I can write books, papers, and blog posts, but to have larger intellectual impact I need people to engage my ideas. Not to agree or disagree with them, but to dive into and critique the details of my arguments, and then publicly describe their findings. (Yes, journal referees engage submissions to some extent, but it isn’t remotely enough.)

This is more useful to me when such engagers have more relevant ability, popularity, and/or status. Since I also have modest ability, popularity, and status, at least in some areas, this suggests the possibility of mutually beneficial trade. I engage your neglected ideas and you engage mine. Of course there are many details to work out to arrange such trade.

First, there’s timing. I don’t want to put in lots of work engaging your ideas based on a promise that you’ll later engage mine, and then have you renege. So we may need to start small, back and forth. Or you can go first.

Second, there’s the issue of relative price. If we have differing levels of ability, popularity, and status, then we should agree to differing relative efforts to reflect those differences. If you are more able than I, maybe I should engage several ideas of yours in trade for your only engaging one of mine.

Third, we may disagree about our relevant differences. While it may be easy to quickly demonstrate one’s popularity, status, and overall intelligence, it can be harder to demonstrate one’s other abilities relevant to a particular topic. Yes if I read a bunch of your papers I might be able to see that your ability is higher than your status would suggest, but I might not have time for that.

Fourth, we may each fear adverse selection. Why should I be so stupid as to join a club that would stoop so low as to consider me as a member? The fact that you are seeking to trade for engagement, and willing to consider me as a trading partner, makes me suspect that your ideas, ability, and status are worse than they appear.

Fifth, we might prefer to disguise our engagement trade. When engagement is often a side effect of other processes, then it might look bad to go out of your way to trade engagements. (Trading engagement for money or sex probably looks even worse.) So people may prefer to hide their engagement trades within other processes that give plausible deniability about such trades. I just happened to invite you to talk at my seminar series after you invited me to talk at yours; move along, no trade to see here.

These are substantial obstacles, and may together explain the lack of observed engagement trades. Even so, I suspect people haven’t tried very hard to overcome such obstacles, and in the spirit of innovation I’m willing to explore such possibilities, at least a bit. My neglected ideas include em futures, hidden motives, decision markets, irrational disagreement, mangled worlds, and more.


Avoid “Posthuman” Label

Philosophy is mainly useful in inoculating you against other philosophy. Else you’ll be vulnerable to the first coherent philosophy you hear. (source)

Long ago (’81-83 at U Chicago) I studied Conceptual Foundations of Science (mainly philosophy of science) because I wanted to really understand this “science” thing, and the main thing I learned was to avoid the word “science”. If necessary, the word can refer to obvious social groups and how they maintain boundaries, but beyond that other words and concepts are more useful.

I’ve always felt similarly wary of “transhuman” and “posthuman”, because it isn’t clear what they can or do mean. In the latest Bioethics, David Lawrence elaborates an argument for such wariness:

Human is itself a greatly abused term, especially in the context of the enhancement/posthuman debate, and the myriad of meanings ascribed to it could give posthuman a very different slant depending on one’s understanding. .. There are, perhaps, three main senses in which the term human is frequently employed: the biological, the moral, and the self- (or other-) idealizing. In the first of these, human .. refer[s] to our taxonomic species; in the second sense, human generally refers to a community of beings which qualify as having a certain moral value or status; and the third .. denoting .. what matters about those who matter. ..

It is a mistake to envisage the posthuman as a different species. It is a mistake to imagine traits such as immortality or godlike powers as being changes that indicate a significant discontinuity. .. The mere act of assigning terminology is inherently one of division. .. The use of these terms is designed to classify and separate. As I hope to have shown, this is precisely the problem with the notional posthuman. ..

The commentators on both sides of the debate concerning the meaning of posthuman do so as if it had currency. .. To use the term to imply species or value change, or a radical transition (the meaning of which is unclear in any case), there needs to be justification in a way which does not seem to have been delivered within the existing dialogue. Here, I have argued that this is not a plausible understanding, and furthermore that it is based in error. The analogous changes we have undergone throughout our history have not been thought to signal a qualitative change, or at least, not to any significant degree. We are, today, post-internet age humans; we are post-neolithic, post-bronze age, post-iron age. These transitions have not changed our value or the nature of our being: machine-age man, Homo augmentus, is still man. The touted posthuman is, in general, overhyped and unwarranted by the evidence – either factual, or conceptual – and does not seem to have been subject to a close analysis until now.

Here’s what Lawrence suggests we say instead:

Enhancement technologies exist, are used, and will continue to develop; and it is idle to claim that we ought avoid them wholesale. .. It is important that we find a way to reconcile ourselves with the beings we may become, since they and we are products of the same process. .. To be posthuman is in truth to be more human than human – more successful at embodying these traits than we, who consider ourselves the model of humanity, do. It is not, as critics may claim, to be beyond, to be something to fear, something fundamentally different.

A habit of talking as if there will be a natural progression from “human” to “transhuman” to “posthuman” makes our descendants by default into “others” less worthy of our help and allegiance, without specifying the key traits on which they will be deficient. Yes, it is possible that our descendants will in fact have traits we dislike so much as to make us reject them as no longer part of the “us” that matters. But this is hardly inevitable, and those who argue that it will happen should have to specify the particular key traits they expect will cause such a divergence.

Only half of those who imagine entering a Star Trek transporter see the person who exits as themselves, but all those who imagine exiting see the person entering as themselves. Similarly, we tend to see all our ancestors for the last million years as part of the “us” that matters, even though many of them might reject us as being part of the “us” that matters to them. And so our descendants are more likely to see us today as part of the “us” that matters to them, compared to our seeing them in that way.

So let us talk first of the various kinds of descendants we may have, the traits by which they may differ from us, and which of those traits matter most to us in deciding who matters. After that, perhaps, we might argue about which descendants will become a “them” who matter much less to us. We could perhaps call such folks “posthuman,” but know that they will probably reject such a label.


Missing Credentials

The typical modern credential (i.e., a standard worker quality sign of widely understood significance) is based on a narrow written declared test of knowledge given early in one’s career on a pre-announced date at a quiet location. In this test, there is a list of questions to which one gives answers, which are then graded by independent judges who supposedly look only at the answers, and don’t take into account other things they know about the testee. In this post I want to point out that a much larger space of credentials is possible.

For example, you could be evaluated on actual products and contributions, based on your efforts over a long period, instead of being evaluated on short tests. You could be tested via tasks you must perform, instead of questions you must answer. After all, mostly we want to know what workers can do, not what questions they can answer. Since much of real question answering in the world is done verbally, test question-answering could also be done verbally, instead of in writing. And it could be done with frequent distractions and interruptions, as with most real question-answering.

However expressed, judges could take your first response as a starting point to ask you more questions (or give you more tasks), and dig deeper into your understanding. Judges could know you well, and choose questions specifically for you, and interpret your answers given all they know about you. This is, after all, closer to how most question-answering in the world actually goes.

Tests could be done at random days and times, and spread all through your career. Tests might be disguised as ordinary interactions, and not revealed to be tests until afterward. These approaches could discourage cramming for tests and other strategies that make you good only at tests, and not so much at remembering or using your knowledge at other times.

Finally, you could be tested on your ability to integrate knowledge from a wide range of topic areas, instead of on your knowledge of a narrow topic area. Yes you could show that you know many areas via passing tests for many areas, but that won’t show that you have integrated these diverse areas usefully together in your mind.

Of course I’m not saying that these variations are never explored, just that they are used much less often than the standard credential test. This vast space of possible credentials suggests that a lot of innovation may be possible, and I’m naturally especially interested in helping to develop better credentials for abilities that I have which are neglected by the usual credentials. For example, I’d love to see a polymath credential, for those who can integrate understanding of many fields, and a conversation credential, on one’s ability to get to the bottom of topics via a back & forth interaction.

The narrow range of most credentials compared to the vast possible space also seems to confirm Bryan Caplan’s emphasis on school as emphasizing and screening conformity. Yes, the usual kinds of tests can often be cheaper in many ways, but the lack of much variation even when credentials are very important, and so worth spending a bit more on, suggests that conformity is also an issue. It really does seem that people see non-standard tests as illicit in many ways.

The dominance of the usual credential test can also be seen as a way our society is unfairly dominated by the sort of writing-focused book-smart narrowly-skilled people who happen to be especially good at such tests. These people are in fact usually in charge of designing such tests.


When Is Talk Meddling Okay?

“How dare X meddle in Y’s business on Z?! Yes, X only tried to influence Y people on Z by talking, and said nothing false. But X talked selectively, favoring one position over another!”

Consider some possible triples X,Y,Z:

  • How dare my wife’s friend meddle in my marriage by telling my wife I treat her poorly?
  • How dare John try to tempt my girlfriend away from me by flirting with her?
  • How dare my neighbors tell my kids that they don’t make their kids do as many chores?
  • How dare Sue from another division suggest I ask too much overtime of my employees?
  • How dare V8 try to tempt cola buyers to switch by dissing cola ingredients?
  • How dare economists say that sociologists keep PhD students around too long?
  • How dare New York based media meddle in North Carolina’s transgender bathroom policy?
  • How dare westerners tell North Koreans that their government treats them badly?
  • How dare Russia tell US voters unflattering things about Hillary Clinton?

We do sometimes feel justly indignant at outsiders interfering in our “internal” affairs. In such cases, we prefer equilibria where we each stay out of others’ families, professions, or nations. But in many other contexts we embrace social norms that accept and even encourage criticism from a wide range of sources.

The usual (and good) argument for free speech (or really, free hearing) is that on average listeners can be better informed if they have access to more different info sources. Yes, it would be even better if each source fairly told everything relevant it knew, or at least didn’t select what it said to favor some views. But we usually think it infeasible to enforce norms against selectivity, and so limit ourselves to more enforceable norms against lying. As we can each adjust our response to sources based on our estimates of their selectivity, reasonable people can be better informed via having more sources to hear from, even when those sources are selective.

So why do we sometimes oppose such free hearing? Paternalism seems one possible explanation – we think many of us are unreasonable. But this fits awkwardly, as most expect themselves to be better informed if able to choose from more sources. More plausibly, we often don’t expect that we can limit retaliation against talk to other talk. For example, if you may respond with violence to someone overtly flirting with your girlfriend, we may prefer a norm against such overt flirting. Similarly, if nations may respond with war to other nations weighing in on their internal elections, we may prefer a norm of nations staying out of other nations’ internal affairs.

Of course the US has for many decades been quite involved in the internal affairs of many nations, including via assassination, funding rebel armies, bribery, academic and media lecturing, and selective information revelation. Some say Putin focused on embarrassing Clinton in retaliation for her previously supporting the anti-Putin side in Russian internal affairs. Thus it is hard to believe we really risk more US-Russian war if these two nations overtly talk about the others’ internal affairs.

Yes, we should consider the possibility that retaliation against talk will be more destructive than talk, and be ready to forgo the potentially large info gains from wider talk and criticism to push a norm against meddling in others’ internal affairs. But the international stage at the moment doesn’t seem close to such a situation. We’ve long since tolerated lots of such meddling, and the world is probably better for it. We should allow a global conversation on important issues, where all can be heard even when they speak selectively.


Beware Futurism As Political Allegory

Imagine that you are a junior in high school who expects to attend college. At that point in your life you have opinions related to frequent personal choices, such as whether blue jeans feel comfortable or whether you prefer vanilla to chocolate ice cream. And you have opinions on social norms in your social world, like how much money it is okay to borrow from a friend, how late one should stay at a party, or what are acceptable excuses for breaking up with a boy/girlfriend. And you know you will soon need opinions on imminent major life choices, such as what college to attend, what major to have, and whether to live on campus.

But at that point in life you will have less need of opinions on what classes to take as a college senior, and where to live then. You know you can wait and learn more before making such decisions. And you have even less need of opinions on borrowing money, staying at parties, or breaking up as a college senior. Social norms on those choices will come from future communities, who may not yet have even decided on such things.

In general, you should expect to have more sensible and stable opinions related to choices you actually make often, and less coherent and useful opinions regarding choices you will make in the future, after you learn many new things. You should have less coherent opinions on how your future communities will evaluate the morality and social acceptability of your future choices. And your opinions on collective choices, such as via government, should be even less reliable, as your incentives to get those right are even weaker.

All of this suggests that you be wary of simply asking your intuition for opinions about what you or anyone else should do in strange distant futures. Especially regarding moral and collective choices. Your intuition may dutifully generate such opinions, but they’ll probably depend a lot on how the questions were framed, and the context in which questions were asked. For more reliable opinions, try instead to chip away at such topics.

However, this context-dependence is gold to those who seek to influence others’ opinions. Warriors attack where an enemy is weak. When seeking to convert others to a point of view, you can have only limited influence on topics where they have accepted a particular framing, and have incentives to be careful. But you can more influence how a new topic is framed, and when there are many new topics you can emphasize the few where your preferred framing helps more.

So legal advocates want to control how courts pick cases to review and the new precedents they set. Political advocates want to influence which news stories get popular and how those stories are framed. Political advocates also seek to influence the choices and interpretations of cultural icons like songs and movies, because, being less constrained by facts, such things are more open to framing.

As with the example above of future college choices, distant future choices are less thoughtful or stable, and thus more subject to selection and framing effects. Future moral choices are even less stable, and more related to political positions that advocates want to push. And future moral choices expressed via culture like movies are even more flexible, and thus more useful. So newly-discussed culturally-expressed distant future collective moral choices create a perfect storm of random context-dependent unreliable opinions, and thus are ideal for advocacy influence, at least when you can get people to pay attention to them.

Of course most people are usually reluctant to think much about distant future choices, including moral and collective ones. Which greatly limits the value of such topics to advocates. But a few choices related to distant futures have engaged wider audiences, such as climate change and, recently, AI risk. And political advocates do seem quite eager to influence such topics, which they seem to select from a far larger set of similarly important issues in part for their potency at pushing common political positions. The science-fiction truism really does seem to apply: most talk on the distant future is really indirect talk on our world today.

Of course the future really will happen eventually, and we should want to consider choices today that importantly influence that future, some of those choices will have moral and collective aspects, some of these issues can be expressed via culture like movies, and at some point such issue discussion will be new. But as with big hard problems in general, it is probably better to chip away at such problems.

That is: Anchor your thoughts to reality rather than to fiction. Make sure you have a grip on current and past behavior before looking at related future behavior. Try to stick with analyzing facts for longer before being forced to make value choices. Think about amoral and decentralized choices carefully before considering moral and collective ones. Avoid feeling pressured to jump to strong conclusions on recently popular topics. Prefer robust and reliable methods even when they are less easy and direct. Mostly the distant future doesn’t need action today – decisions will wait a bit for us to think more carefully.


Chip Away At Hard Problems

Catherine: And your own research.
Harold: Such as it is.
C: What’s wrong with it?
H: The big ideas aren’t there.
C: Well, it’s not about big ideas. It’s… It’s work. You got to chip away at a problem.
H: That’s not what your dad did.
C: I think it was, in a way. I mean, he’d attack a problem from the side, you know, from some weird angle. Sneak up on it, grind away at it.
(Lines from the movie Proof; Catherine is a famous mathematician’s daughter.)

In math, plausibility arguments don’t count for much; proofs are required. So math folks have little choice but to chip away at hard problems, seeking weird angles where indirect progress may be possible.

Outside of math, however, we usually have many possible methods of study and analysis. And a key tradeoff in our methods is between ease and directness on the one hand, and robustness and rigor on the other. At one extreme, you can just ask your intuition to quickly form a judgement that’s directly on topic. At the other extreme, you can try to prove math theorems. In between these extremes, informal conversation is more direct, while statistical inference is more rigorous.

When you need to make a decision fast, direct easy methods look great. But when many varied people want to share an analysis process over a longer time period, more robust rigorous methods start to look better. Easy direct methods tend to be more uncertain and context dependent, and so don’t aggregate as well. Distant others find it harder to understand your claims and reasoning, and to judge their reliability. So distant others tend more to redo such analysis themselves rather than building on your analysis.

One of the most common ways that wannabe academics fail is by failing to sufficiently focus on a few topics of interest to academia. Many of them become amateur intellectuals, people who think and write more as a hobby, and less to gain professional rewards via institutions like academia, media, and business. Such amateurs are often just as smart and hard-working as professionals, and they can more directly address the topics that interest them. Professionals, in contrast, must specialize more, have less freedom to pick topics, and must try harder to impress others, which encourages the use of more difficult robust/rigorous methods.

You might think their added freedom would result in amateurs contributing proportionally more to intellectual progress, but in fact they contribute less. Yes, amateurs can and do make more initial progress when new topics arise suddenly far from topics where established expert institutions have specialized. But then over time amateurs blow their lead by focusing less and relying on easier more direct methods. They rely more on informal conversation as an analysis method, they prefer personal connections over open competitions in choosing people, and they rely more on a perceived consensus among a smaller group of fellow enthusiasts. As a result, their contributions just don’t appeal as widely or as long.

I must admit that compared to most academics near me, I’ve leaned more toward amateur styles. That is, I’ve used my own judgement more on topics, and I’ve been willing to use less formal methods. I clearly see the optimum as somewhere between the typical amateur and academic styles. But even so, I’m very conscious of trying to avoid typical amateur errors.

So instead of just trying to directly address what seem the most important topics, I instead look for weird angles to contribute less directly via more reliable/robust methods. I have great patience for revisiting the few biggest questions, not to see who agrees with me, but to search for new angles at which one might chip away.

I want each thing I say to be relatively clear, and so understandable from a wide range of cultural and intellectual contexts, and to be either a pretty obvious no-brainer, or based on a transparent easy to explain argument. This is partly why I try to avoid arguing values. Even so, I expect that the most likely reason I will fail is that I’ve allowed myself to move too far in the amateur direction.


The Good-Near Bad-Far Bias

“Why am I late home from work? Terrible traffic slowed everyone down.”
“Why am I early home from work? I wanted to spend more time with you.”

We try to make ourselves look good. So we try to associate closely with good events, and distance ourselves more from bad events. Specifically, we prefer to explain bad events near us in terms of distant causes over which we had little influence, but explain good events near us in terms of our good long-lasting features, such as our authenticity, loyalty, creativity, or intelligence.

For example, managers are reluctant to adopt prediction markets for project deadlines, because it takes away their favorite excuse for failure: “The thing that delayed this project was a rare disaster that came out of left field; no one could have seen it coming.” Note that distant causes work best as excuses if they are rare and unpredictable. Otherwise there comes the question of why one didn’t do more to prevent or mitigate the distant influence.

As another example, when a class of people is doing poorly and we are reluctant to blame them, we prefer explanations far from their choices. So instead of blaming their self-control, laziness, or intelligence, we prefer to blame capitalism, general malaise, discrimination, foreigners, or automation. Recent over-emphasis on a sudden burst of automation as an unemployment cause comes in part from a perfect storm of not wanting to blame low-skilled workers, and wanting to brag about the technical prowess of groups we feel associated with.

Why don’t we blame close rivals more often, instead of distant causes? We do blame rivals sometimes, but if they retaliate by blaming us we risk ending up associated with a lot of blame. Better to keep the peace and both blame outsiders.


Imagine A Mars Boom

Most who think they like the future really just like where their favorite stories took place. As a result, much future talk focuses on space, even though prospects for much activity beyond Earth in the foreseeable future seem dim. Even so, consider the following hypothetical, with three key assumptions:

Mars boom: An extremely valuable material (anti-matter? glueballs? negative mass?) is found on Mars, justifying huge economic efforts to extract it, process it, and return it to Earth. Many orgs compete strongly against one another in all of these stages to profit from the Martian boom.

A few top workers: As robots just aren’t yet up to the task, a thousand humans must be sent to and housed on Mars. The cost of this is so great that all trips are one-way, at least for a while, and it is worth paying extra to get the very highest quality workers possible. So Martians are very impressive workers, and Mars is “where the action is” in terms of influencing the future. As slavery is rare on Earth, almost all Mars workers must volunteer for the move.

Martians as aliens: Many, perhaps even most, people on Earth see those who live on Mars as aliens, for whom the usual moral rules do not apply – morality is to protect Earthlings only. Such Earth folks are less reluctant to enslave Martians. Martians undergo some changes to their body, and perhaps also to their brain, but when seen in films or tv, or when talked to via (20+min delayed) Skype, Martians act very human.

Okay, now my question for you is: Are most Martians slaves? Are they selected for and trained into being extremely docile and servile?

Slavery might let Martian orgs make Martians work harder, and thereby extract more profit from each worker. But an expectation of being enslaved should make it much harder to attract the very best human workers to volunteer. Many Earth governments may even not allow free Earthlings to volunteer to become enslaved Martians. So my best guess is that in this hypothetical, Martians are free workers, rich and high status celebrities followed and admired by most Earthlings.

I’ve created this Mars scenario as an allegory of my em scenario, because someone I respect recently told me they were persuaded by Bryan Caplan’s claim that ems would be very docile slaves. As with these hypothesized Martians, the em economy would produce enormous wealth and be where the action is, and it would result from competing orgs enticing a thousand or fewer of the most productive humans to volunteer for an expensive one-way trip to become ems. When viewed in virtual reality, or in android bodies, these ems would act very human. While some like Bryan see ems as worth little moral consideration, others disagree.


This AI Boom Will Also Bust

Imagine an innovation in pipes. If this innovation were general, something that made all kinds of pipes cheaper to build and maintain, the total benefits could be large, perhaps even comparable to the total amount we spend on pipes today. (Or even much larger.) And if most of the value of pipe use were in many small uses, then that is where most of these economic gains would be found.

In contrast, consider an innovation that only improved the very largest pipes. This innovation might, for example, cost a lot to use per meter of pipe, and so only make sense for the largest pipes. Such an innovation might make for very dramatic demonstrations, with huge vivid pipes, and so get media coverage. But the total economic gains here will probably be smaller; as most of pipe value is found in small pipes, gains to the few biggest pipes can only do so much.
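The contrast above is easy to make concrete with arithmetic. The numbers below are hypothetical, chosen only to illustrate the logic: if most pipe value sits in many small pipes, even a dramatic improvement confined to the largest pipes yields smaller total gains than a modest improvement that applies everywhere.

```python
# Hypothetical value shares (not from the post): 90% of total pipe
# value is in many small pipes, 10% in the few largest pipes.
small_pipe_value = 0.9
large_pipe_value = 0.1

# A general innovation cuts costs 20% on all pipes, big and small.
general_gain = 0.20 * (small_pipe_value + large_pipe_value)

# A big-pipe-only innovation cuts costs 50%, but only on large pipes.
big_only_gain = 0.50 * large_pipe_value

print(f"general innovation gain: {general_gain:.2f}")
print(f"big-pipe-only gain:      {big_only_gain:.2f}")
```

Under these assumed shares the modest general innovation is worth four times the dramatic big-pipe-only one, which is the point of the analogy.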

Now consider my most viral tweet so far:

This got almost universal agreement from those who see such issues play out behind the scenes. And by analogy with the pipe innovation case, this fact tells us something about the potential near-term economic impact of recent innovations in Machine Learning. Let me explain.

Most firms have piles of data they aren’t doing much with, and far more data that they could collect at a modest cost. Sometimes they use some of this data to predict a few things of interest. Sometimes this creates substantial business value. Most of this value is achieved, as usual, in the simplest applications, where simple prediction methods are applied to simple small datasets. And the total value achieved is only a small fraction of the world economy, at least as measured by income received by workers and firms who specialize in predicting from data.
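As a minimal sketch of the kind of “simple prediction method applied to a simple small dataset” described above, consider ordinary least squares on a tiny, hypothetical business dataset (the numbers are invented for illustration):

```python
import numpy as np

# A tiny hypothetical dataset: ad spend vs. resulting sales.
ad_spend = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sales = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Ordinary least squares: fit sales ~ a * ad_spend + b.
A = np.vstack([ad_spend, np.ones_like(ad_spend)]).T
(a, b), *_ = np.linalg.lstsq(A, sales, rcond=None)

# Forecast sales at an ad spend of 6.0.
predicted = a * 6.0 + b
print(f"slope={a:.2f}, intercept={b:.2f}, forecast={predicted:.2f}")
```

Fitting and applying such a model takes minutes, needs no specialized personnel, and for many small business prediction problems captures most of the available value; that is the baseline against which more expensive methods must justify themselves.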

Many obstacles limit such applications. For example, the value of better predictions for related decisions may be low, data may be in a form poorly suited to informing predictions, making good use of predictions might require larger reorganizations, and organizations that hold parts of the data may not want to lose control of that data. Available personnel may lack sufficient skills to apply the most effective approaches for data cleaning, merging, analysis, and application.

No doubt firms make many errors in choosing when to analyze which data, how much, and by whom. Sometimes they will do too much prediction, and sometimes too little. When tech changes, orgs will sometimes wait too long to try new tech, and sometimes will not wait long enough for tech to mature. But in ordinary times, when the relevant technologies improve at steady known rates, we have no strong reason to expect these choices to be greatly wrong on average.

In the last few years, new “deep machine learning” prediction methods have become “hot.” In some widely publicized demonstrations, they seem to allow substantially more accurate predictions from data. Since they shine most when data is plentiful, and they need more skilled personnel, these methods are most promising for the largest prediction problems. Because of this new fashion, at many firms those who don’t understand these issues well are pushing subordinates to seek local applications of these new methods. Those subordinates comply, at least in appearance, in part to make themselves and their organizations appear more skilled.

One result of this new fashion is that a few big new applications are being explored, in places with enough data and potential prediction value to make them decent candidates. But another result is the one described in my tweet above: fashion-induced overuse of more expensive new methods on smaller problems to which they are poorly matched. We should expect this second result to produce a net loss on average. The size of this loss could be enough to outweigh all the gains from the few big new applications; after all, most value is usually achieved in many small problems.

But I don’t want to draw a conclusion here about the net gain or loss. I instead want to consider the potential for this new prediction tech to have an overwhelming impact on the world economy. Some see this new fashion as just the first swell of a tsunami that will soon swallow the world. For example, in 2013 Frey and Osborne famously estimated:

About 47 percent of total US employment is at risk .. to computerisation .. perhaps over the next decade or two.

If new prediction techs induced a change that big, they would be creating a value that is a substantial fraction of the world economy, and so consume a similar fraction of world income. If so, the prediction industry would in a short time become vastly larger than it is today. If today’s fashion were the start of that vast growth, we should not only see an increase in prediction activity, we should also see an awe-inspiring rate of success within that activity. The application of these new methods should be enabling huge new revenue streams, across a very wide range of possible application areas. (Added: And the prospect of that should be increasing stock values in this area far more than we’ve seen.)

But I instead hear that within the areas where most prediction value lies, most attempts to apply this new tech actually produce less net value than would be achieved with old tech. I hear that prediction analysis tech is usually not the most important part of the process, and that a recent obsession with showing proficiency in this new analysis tech has led to neglect of the more important and basic issues of thinking carefully about what you might want to predict with what data, and then carefully cleaning and merging your data into a more useful form.

Yes, there must be exceptions, and some of those may be big. So a few big applications may enable big value. And self-driving cars seem a plausible candidate, a case where prediction is ready to give large value, high enough to justify using the most advanced prediction tech, and where lots of the right sort of data is available. But even if self-driving vehicles displace most drivers within a few decades, that rate of job automation wouldn’t be out of the range of our historical record of job automation. So it wouldn’t show that “this time is different.” To be clearly out of that range, we’d need another ten jobs that big also displaced in the same period. And even that isn’t enough to automate half of all jobs in two decades.
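The claim that even driver displacement stays within the historical range can be checked with back-of-envelope arithmetic. The sketch below uses rough employment figures that are my assumptions, not numbers from the post: roughly 4 million professional driving jobs out of roughly 150 million US jobs.

```python
# Rough job-displacement arithmetic. The employment figures here are
# assumed ballpark values, not data cited in the post.

us_jobs = 150e6        # assumed total US employment
driving_jobs = 4e6     # assumed professional driving jobs
years = 20             # "a few decades", taken as two

driver_share = driving_jobs / us_jobs     # share of all jobs that are driving
annual_rate = driver_share / years        # displacement rate if all go in 20 yrs

# Rate needed to automate half of all jobs in the same two decades:
needed_rate = 0.5 / years

print(f"driving share of jobs:        {driver_share:.1%}")
print(f"annual displacement (drivers): {annual_rate:.2%} per year")
print(f"rate needed for half of jobs:  {needed_rate:.1%} per year")
```

Under these assumptions, losing every driving job over two decades displaces on the order of a tenth of a percent of jobs per year, while automating half of all jobs in that window requires about 2.5% per year, more than an order of magnitude faster. That is the sense in which even ten more job categories as big as driving would not get us there.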

The bottom line here is that while some see this new prediction tech as like a new pipe tech that could improve all pipes, no matter their size, it is actually more like a tech only useful on very large pipes. Just as it would be a waste to force a pipe tech only useful for big pipes onto all pipes, it can be a waste to push advanced prediction tech onto typical prediction tasks. And the fact that this new tech is mainly only useful on rare big problems suggests that its total impact will be limited. It just isn’t the sort of thing that can remake the world economy in two decades. To the extent that the current boom is based on such grand hopes, this boom must soon bust.
