A Coming Hypocralypse?

Many people have been working hard for a long time to develop tech that helps to read people’s feelings. They are working on ways to read facial expressions, gazes, word choices, tones of voice, sweat, skin conductance, gait, nervous habits, and many other body features and motions. Over the coming years, we should expect this tech to consistently get cheaper and better at reading subtler feelings of more people, in more kinds of contexts, more reliably.

Much of this reading will be involuntary. While your permission and assistance may help such tech to read you better, others will often be able to read you using tech that they control, on their persons and in the buildings around you. They can use tech integrated with other complex systems, which makes it hard to monitor and regulate. Yes, some defenses are possible, such as wearing dark sunglasses or burqas, and electronically modulating your voice. But such options seem rather awkward, and I doubt most people will be willing to use them much in most familiar social situations. And I doubt that regulation will greatly reduce the use of this tech. The overall trend seems clear: our true feelings will become more visible to the people around us.

We are often hypocritical about our feelings. That is, we pretend to some degree to have certain acceptable public feelings, while actually harboring different feelings. Most people know that this happens often, but our book The Elephant in the Brain suggests that we still vastly underestimate typical levels of hypocrisy. We all mask our feelings a lot, quite often from ourselves. (See our book for many more details.)

These two facts, better tech for reading feelings and widespread hypocrisy, seem to me to be on a collision course. As a result, within a few decades, we may see something of a “hypocrisy apocalypse”, or “hypocralypse”, wherein familiar ways to manage hypocrisy become no longer feasible, and collide with common norms, rules, and laws. In this post I want to outline some of the problems we face.

Long ago, I was bullied as a child. And so I know rather well that one of the main defenses that children develop to protect themselves against bullies is to learn to mask their feelings. Bullies tend to see kids who are visibly scared or distraught as openly inviting bullying. Similarly, many adults protect themselves from salespeople and sexual predators by learning to mask their feelings. Masking feelings also helps us avoid conflict with rivals at work and in other social circles. For example, we learn to not visibly insult or disrespect big people in rowdy bars if we don’t want to get beaten up.

Tech that unmasks feelings threatens to weaken the protections that masked feelings provide. That big guy in a rowdy bar may use new tech to see that everyone else there can see that you despise him, and take offense. Your bosses might see your disrespect for them, or your skepticism regarding their new initiatives. Your church could see that you aren’t feeling very religious at church service. Your school and nation might see that your pledge of allegiance was not heartfelt. And so on.

While these seem like serious issues, change will be mostly gradual, and so we may have time to flexibly search the space of possible adaptations. We can try changing whom we meet, how, and for what purposes, and which topics we consider acceptable to discuss where. We can be more selective about whom we make more visible, and how.

I worry more about collisions between better tech for reading feelings and common social norms, rules, and laws. Especially norms and laws that we adopt for more symbolic purposes, instead of to actually manage our interactions. These things tend to be less responsive to changing conditions.

For example, today we often consider it to be unacceptable “sexual harassment” to repeatedly and openly solicit work associates for sex, especially after they’ve clearly rejected the solicitor. We typically disapprove not just of direct requests, but also of less direct but relatively clear invitation reminders, such as visible leers, sexual jokes, and calling attention to your “junk”. And of course such rules make a great deal of sense.

But what happens when tech can make it clearer who is sexually attracted, and how much, to whom? If the behavior that led to these judgements were completely out of each person’s control, it might be hard to blame anyone for it. We might then socially pretend that it doesn’t exist, though we might eagerly check it out privately. Unfortunately, our behavior will probably continue to modulate the processes that produce such judgements.

For example, the systems that judge how attracted you are to someone might focus on the moments when you look directly at that person, when your face is clearly visible to some camera, under good lighting, and without your wearing sunglasses or a burqa. So the longer you spend directly looking at someone under such conditions, the better the tech will be able to see your attraction. As a result, your choice to spend more time looking directly at them under favorable reading conditions might be seen as an intentional act, a choice to send the message that you are sexually attracted to them. And thus your continuing to do so after they have clearly rejected you might be seen as sexual harassment.
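As a toy illustration of that mechanism (a minimal sketch with hypothetical names and weights, not any real system’s method), such a reader might accumulate gaze time weighted by how favorable the reading conditions are:

    # Toy sketch of the scoring described above; the function, weights,
    # and inputs are all hypothetical, not any real system's method.
    def attraction_confidence(gaze_events):
        """gaze_events: a list of (seconds, face_visible, good_lighting)."""
        score = 0.0
        for seconds, face_visible, good_lighting in gaze_events:
            quality = 1.0
            if not face_visible:
                quality *= 0.2  # sunglasses or a burqa degrade the reading
            if not good_lighting:
                quality *= 0.5  # dim lighting degrades it too
            score += seconds * quality
        return score  # rises with time spent looking under good conditions

The point is only that the score grows with looking time under favorable conditions, which is what lets continued looking be read as an intentional signal.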

Yes, a reasonable world might adjust rules on sexual harassment to account for many complex changing conditions. But we may not live in a reasonable world. I’m not making any specific claims about sexual harassment rules, but symbolic purposes influence many of the norms and laws that we adopt. That is, we often support such rules not because of the good consequences of having them, but because we like the way that our personal support for such rules makes us look personally. For example, many support laws against drugs and prostitution even when they believe that such laws do little to discourage such things. They want to be personally seen as publicly taking a stand against such behavior.

Consider rules against expressing racism and sexism. And remember that the usual view is that everyone is at least a bit racist and sexist, in part because they live in a racist and sexist society. What happens when we can collect statistics on each person regarding how their visible evaluations of the people around them correlate with the race and sex of those people? Will we then punish white males for displaying statistically-significantly low opinions of non-whites and non-males via their body language? (That’s like a standard we often apply to firms today.) As with sexual harassment, the fact that people can moderate these readings via their behaviors may make these readings seem to count as intentional acts. Especially since they can be tracking the stats themselves, to see the impression they are giving off. To some degree they choose to visibly treat certain people around them with disrespect. And if we are individually eager to show that we personally disapprove of racism and sexism, we may publicly support strict application of such rules even if that doesn’t actually deal well with real problems of racism and sexism in the world.
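As a minimal sketch of that statistical standard (hypothetical names and data; a real system would also have to handle confounders and multiple testing), one could test whether a person’s readings of those around them differ significantly by group:

    # Sketch of the flagging standard described above; hypothetical, and
    # it ignores confounders, base rates, and multiple-testing issues.
    from scipy.stats import ttest_ind

    def flags_significantly_lower(scores_group_a, scores_group_b, alpha=0.05):
        """Each input: one person's per-interaction evaluation readings of
        people in a given group. Flag if group B is rated
        statistically-significantly lower than group A."""
        t, p = ttest_ind(scores_group_a, scores_group_b, equal_var=False)
        return t > 0 and p < alpha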

Remember that this tech should improve gradually. So for the first cases that set key precedents, the tech will be weak, and thus will flag very few people as clearly harassers or racists or sexists. And those few exceptions are much more likely to be people who actually did intend to harass and express racism or sexism, and who embody extreme versions of such behavior. While they will also probably tend to be people who are weird and non-conformist in other ways, this tech for reading feelings may initially seem to do well at helping us identify and deal with problematic people. For example, we may be glad that tech can identify the priests who most clearly lust after the young boys around them.

But as the tech gets better it will slowly be able to flag more and more people as sending disapproved messages. The rate will drift upward from one person in ten thousand to one in a thousand to one percent and so on. People may then start to change their behavior in bigger ways, to avoid being flagged, but that may be too little too late, especially if large video, etc. libraries of old behaviors are available to process with new methods.

At this point we may reach a “hypocralypse”, where rules that punish hypocrisy collide in a big way with tech that can expose hypocrisy. That is, where tech that can involuntarily show our feelings intersects with norms and laws that punish the expression of common but usually hidden feelings. Especially when such rules are in part symbolically motivated.

What happens then, I don’t know. Do white males start wearing burqas, do we regulate this tech heavily, or do we tone down and relax our many symbolic rules? I’ll hope for the best, but I still fear the worst.

Compulsory Licensing Of Backroom IT?

We now understand one of the main reasons that many leading firms have been winning relative to others, resulting in higher markups, profits, and wage inequality:

The biggest companies in every field are pulling away from their peers faster than ever, sucking up the lion’s share of revenue, profits and productivity gains. Economists have proposed many possible explanations: top managers flocking to top firms, automation creating an imbalance in productivity, merger-and-acquisition mania, lack of antitrust regulation and more. But new data suggests that … IT spending that goes into hiring developers and creating software owned and used exclusively by a firm is the key competitive advantage. It’s different from our standard understanding of R&D in that this software is used solely by the company, and isn’t part of products developed for its customers.

Today’s big winners went all in. …Tech companies such as Google, Facebook, Amazon and Apple—as well as other giants including General Motors and Nissan in the automotive sector, and Pfizer and Roche in pharmaceuticals—built their own software and even their own hardware, inventing and perfecting their own processes instead of aligning their business model with some outside developer’s idea of it. … “IT intensity,” is relevant not just in the U.S. but across 25 other countries as well. …

When new technologies were developed in the past, they would diffuse to other firms fast enough so that productivity rose across entire industries. … But imagine instead of power looms, someone is trying to copy and reproduce Google’s cloud infrastructure itself. … Things have just gotten too complicated. The technologies we rely on now are massive and inextricably linked to the engineers, workers, systems and business models built around them. … While in the past it might have been possible to license, steal or copy someone else’s technology, these days that technology can’t be separated from the systems of which it’s a part. … Walmart built an elaborate logistics system around bar code scanners, which allowed it to beat out smaller retail rivals. Notably, it never sold this technology to any competitors. (more)

A policy paper goes into more detail. First, why is the IT of some firms so much better?

Proprietary IT thus provides a specific mechanism that can help explain the reallocation to more productive firms, rising industry concentration, also growing productivity dispersion between firms within industries, and growing profit margins. … There is a significant literature that identifies IT-related differences in productivity arising from complementary skills, managerial practices, and business models that are themselves unevenly distributed. Skills and managerial knowledge needed to use major new technologies have often been unevenly distributed initially because much must be learned through experience, which tends to differ substantially from firm to firm.

Yes, skills vary, but there are also just big random factors in the success of large IT systems, even for similar skills. What can we do about all this?

While there may be other reasons to question antitrust policies, the general rise in industry concentration does not appear to raise troubling issues for antitrust enforcement at this point by itself. …

Both IP law and antitrust law pay heed to … balancing innovation incentives against the need for disclosure and competition, balancing concerns about market power against considerations of efficiency. … This balance has been lost with regard to information technology … the policy challenge is to offset this trend. … This problem might require some lessening of innovation incentives. … The challenge both today and in the future for both IP and antitrust policy is to facilitate the diffusion of new technical knowledge and right now the trend seems to be in the wrong direction. …

To the extent that rising use of employee noncompete agreements limits the ability of technical employees to take their skills to new firms, diffusion is slowed. Similarly, for extensions of trade secrecy law to cover knowhow or the presumption of inevitable disclosure. Patents are required to disclose the technical information needed to “enable” the invention, but perhaps these requirements are ineffective, especially in IT fields. And if patents are not licensed, they become a barrier to diffusion. Perhaps some forms of compulsory licensing might overcome this problem. Moreover, machine learning technologies portend even greater difficulties encouraging diffusion in the future because use of these technologies requires not only skilled employees, but also access to critical large datasets.

It seems that making good backroom software, to use internally, has become something of a natural monopoly. Creating such IT has large fixed costs and big random factors. So an obvious question is whether we can usefully regulate this natural monopoly. And one standard approach to regulating monopolies is to force them to sell to everyone at regulated prices. Which in this context we call “compulsory licensing”; firms could be forced to lease their backroom IT to other firms in the same industry at regulated prices.

Note that while compulsory licensing of patents is rare in the US, it is common worldwide, and it is one of the reasons that US drug firms get proportionally less of their revenue from outside the US; other nations force them to license their patents at particularly low prices. So worldwide there is a lot of precedent for compulsory licensing.

The article above claimed that backroom IT is:

inextricably linked to the engineers, workers, systems and business models built around them. … While in the past it might have been possible to license, steal or copy someone else’s technology, these days that technology can’t be separated from the systems of which it’s a part.

I’m not yet convinced of this, and so I want to hear independent IT folks weigh in on this key question. I can see that different IT subsystems could be mixed up with each other, but I’m less convinced that the total set of backroom IT of a firm depends that much on its particular products and services. Maybe other firms in an industry would have to take the entire backroom IT bundle of the leading firm, rather than being able to pick and choose among subsystems. But when the leading IT bundle is so much better, I could see this option being attractive to the other firms.

The leading firm might incur some costs in making its IT package modular enough to separate it from its particular products and services. But such modularity is a good design discipline, and a compulsory licensing regime could compensate firms for such costs.

Note that I’m not saying that it is obvious that this is a good solution. I’m just saying that this is a standard obvious policy response to consider, so someone should be looking into it. At the moment I’m not seeing other good options, aside from just accepting the increased IT-induced firm inequality and its many consequences.

Added 12:30: Okay, so far the pretty consistent answer I’ve heard is that it is very hard to take software written for internal use and make it available for outside use. Even if you insist outsiders do things your way.

So assuming we are stuck with industry leaders winning big compared to others due to better IT, one worry for the future is what happens when the leaders of different industries start to coordinate their IT with each other, as phone firms are now doing with car firms. Such firms might merge to encourage their synergies. Then we might have single firms as big winning leaders in larger economic sectors.

Radical Markets

In 1997, I got my Ph.D. in social science from Caltech. The topic that drew me into grad school, and much of what I studied, was mechanism and institution design: how to redesign social practices and institutions. Economists and related scholars know a lot about this, much of which is useful for reforming many areas of life. Alas, the world shows little interest in these reforms, and I’ve offered our book The Elephant in the Brain: Hidden Motives in Everyday Life, as a partial explanation: most reforms are designed to give us more of what we say we want, and at some level we know we really want something else. While social design scholars would do better to work more on satisfying hidden motives, there’s still much useful in what they’ve already learned.

Oddly, most people who say they are interested in radical social change don’t study this literature much, and people in this area don’t much consider radical change. Which seems a shame; these tools are a good foundation for such efforts, and the topic of radical change has long attracted wide interest. I’ve tried to apply these tools to consider big change, such as with my futarchy proposal.

I’m pleased to report that two experts in social design have a new book, Radical Markets: Uprooting Capitalism and Democracy for a Just Society:

The book reveals bold new ways to organize markets for the good of everyone. It shows how the emancipatory force of genuinely open, free, and competitive markets can reawaken the dormant nineteenth-century spirit of liberal reform and lead to greater equality, prosperity, and cooperation. … Only by radically expanding the scope of markets can we reduce inequality, restore robust economic growth, and resolve political conflicts. But to do that, we must replace our most sacred institutions with truly free and open competition—Radical Markets shows how.

While I applaud the ambition of the book, and hope to see more like it, the five big proposals of the book vary widely in quality. They put their best feet forward, and it goes downhill from there.

Prediction Machines

One of my favorite books of the dotcom era was Information Rules, by Shapiro and Varian in 1998. At the time, tech boosters were saying that all the old business rules were obsolete, and anyone who disagreed “just doesn’t get it.” But Shapiro and Varian showed in detail how to understand the new internet economy in terms of standard economic concepts. They were mostly right, and Varian went on to become Google’s chief economist.

Today many tout a brave new AI-driven economic revolution, with some forecasting radical change. For example, a widely cited 2013 paper said:

47% of total US employment is in the high risk category … potentially automatable over … perhaps a decade or two.

Five years later, we haven’t yet seen changes remotely this big. And a new book is now a worthy successor to Information Rules:

In Prediction Machines, three eminent economists recast the rise of AI as a drop in the cost of prediction. With this single, masterful stroke, they lift the curtain on the AI-is-magic hype and show how basic tools from economics provide clarity about the AI revolution and a basis for action by CEOs, managers, policy makers, investors, and entrepreneurs.

As with Information Rules, these authors mostly focus on guessing the qualitative implications of such prediction machines. That is, they don’t say much about likely rates or magnitudes of change, but instead use basic economic analysis to guess likely directions of change. (Many example quotes below.) And I can heartily endorse almost all of these good solid guesses about change directions. A change in the cost of prediction is a fine way to frame recent tech advances, and if you want to figure out what they imply for your line of business, this is the book for you.

However, the book does at times go beyond estimating impact directions. It says “this time is different”, suggests “extraordinary changes over the next few years”, says an AI-induced recession might result from a burst of new tech, and suggests that the eventual impact of this tech will be similar to that of computers in general so far:

Everyone has had or will soon have an AI moment. We are accustomed to a media saturated with stories of new technologies that will change our lives. … Almost all of us are so used to the constant drumbeat of technology news that we numbly recite that the only thing immune to change is change itself. Until we have our AI moment. Then we realize that this technology is different. p.2

In various ways, prediction machines can “use language, form abstractions and concepts, solve the kinds of problems now [as of 1955] reserved for humans, and improve themselves.” We do not speculate on whether this process heralds the arrival of general artificial intelligence, “the Singularity”, or Skynet. However, as you will see, this narrower focus on prediction still suggests extraordinary changes over the next few years. Just as cheap arithmetic enabled by computers proved powerful in ushering in dramatic change in business and personal lives, similar transformations will occur due to cheap prediction. p.39

Once an AI is better than humans at a particular task, job losses will happen quickly. We can be confident that new jobs will arise within a few years and people will have something to do, but that will be little comfort for those looking for work and waiting for those new jobs to appear. An AI-induced recession is not out of the question. p.212

And they offer a motivating example that would require pretty advanced tech:

At some point, as it turns the knob, the AI’s prediction accuracy crosses a threshold, changing Amazon’s business model. The prediction becomes sufficiently accurate that it becomes more profitable for Amazon to ship you the goods that it predicts you will want rather than wait for you to order them. p.16
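A rough way to see where that threshold sits (my own back-of-envelope framing with hypothetical symbols, not the book’s): let p be the predicted probability that you keep a pre-shipped item, m the margin earned if you keep it, and c the cost of shipping and restocking a return. Pre-shipping beats waiting for an order when

    p m - (1 - p) c > 0,   i.e.,   p > c / (m + c).

So as prediction accuracy pushes p past c/(m+c) for enough items, ship-then-shop becomes the more profitable model.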

I can’t endorse any of these suggestions about magnitudes and rates of change. I estimate much smaller and slower change. But the book doesn’t argue for any of these claims, it more assumes them, and so I won’t bother to argue the topic here either. The book only mentions radical scenarios a few more times:

But is this time different? Hawking’s concern, shared by many, is that this time might be unusual because AI may squeeze out the last remaining advantages humans have over machines. How might an economist approach this question? … If you favor free trade between countries, then you … support developing AI, even if it replaces some jobs. Decades of research into the effect of trade show that other jobs will appear, and overall employment will not plummet. p.211

For years, economists have faced criticism that the agents on which we base our theories are hyper-rational and unrealistic models of human behavior. True enough, but when it comes to superintelligence, that means we have been on the right track. … Thus economics provides a powerful way to understand how a society of superintelligent AIs will evolve. p.222

Yes, research is underway to make prediction machines work in broader settings, but the breakthrough that will give rise to general artificial intelligence remains undiscovered. Some believe that AGI is so far out that we should not spend cycles worrying about it. … As with many AI-related issues, the future is highly uncertain. Is this the end of the world as we know it? Not yet, but it is the end of this book. Companies are deploying AIs right now. In applying the simple economics that underpin lower-cost prediction and higher-value complements to prediction, your business can make ROI-optimizing choices and strategic decisions with regard to AI. When we move beyond prediction machines to general artificial intelligence or even superintelligence, whatever that may be, then we will be at a different AI moment. That is something everyone agrees upon. p.223

As you can see, they don’t see radical scenarios as coming soon, nor see much urgency regarding them. A stance I’m happy to endorse. And I also endorse all those insightful qualitative change estimates.

Hazlett’s Political Spectrum

I just read The Political Spectrum by Tom Hazlett, which took me back to my roots. Well over three decades ago, I was inspired by Technologies of Freedom by Ithiel de Sola Pool. He made the case both that great things were possible with tech, and that the FCC has mismanaged the spectrum. In grad school twenty years ago, I worked on FCC auctions, and saw mismanagement behind the scenes.

When I don’t look much at the details of regulation, I can sort of think that some of it goes too far, and some not far enough; what else should you expect from a noisy process? But reading Hazlett, I’m overwhelmed by just how consistently terrible spectrum regulation is. Not only would everything have been much better without FCC regulation, it actually was much better before the FCC! Herbert Hoover, who was head of the US Commerce Department at the time, broke the spectrum in order to then “save” it, a move that probably helped him rise to the presidency:

“Before 1927,” wrote the U.S. Supreme Court, “the allocation of frequencies was left entirely to the private sector . . . and the result was chaos.” The physics of radio frequencies and the dire consequences of interference in early broadcasts made an ordinary marketplace impossible, and radio regulation under central administrative direction was the only feasible path. “Without government control, the medium would be of little use because of the cacaphony [sic] of competing voices.”

This narrative has enabled the state to pervasively manage wireless markets, directing not only technology choices and business decisions but licensees’ speech. Yet it is not just the spelling of cacophony that the Supreme Court got wrong. Each of its assertions about the origins of broadcast regulation is demonstrably false. ..

The chaos and confusion that supposedly made strict regulation necessary were limited to a specific interval—July 9, 1926, to February 23, 1927. They were triggered by Hoover’s own actions and formed a key part of his legislative quest. In effect, he created a problem in order to solve it. ..

Radio broadcasting began its meteoric rise in 1920–1926 under common-law property rules .. defined and enforced by the U.S. Department of Commerce, operating under the Radio Act of 1912. They supported the creation of hundreds of stations, encouraged millions of households to buy (or build) expensive radio receivers. .. The Commerce Department .. designated bands for radio broadcasting. .. In 1923, .. [it] expanded the number of frequencies to seventy, and in 1924, to eighty-nine channels .. [Its] second policy was a priority-in-use rule for license assignments. The Commerce Department gave preference to stations that had been broadcasting the longest. This reflected a well-established principle of common law. ..

Hoover sought to leverage the government’s traffic cop role to obtain political control. .. In July 1926, .. Hoover announced that he would .. abandon Commerce’s powers. .. Commerce issued a well-publicized statement that it could no longer police the airwaves. .. The roughly 550 stations on the air were soon joined by 200 more. Many jumped channels. Conflicts spread, annoying listeners. Meanwhile, Commerce did nothing. ..

Now Congress acted. An emergency measure .. mandated that all wireless operators immediately waive any vested rights in frequencies .. the Radio Act .. provided for allocation of wireless licenses according to “public interest”. .. With the advent of the Federal Radio Commission in 1927, the growth of radio stations—otherwise accommodated by the rush of technology and the wild embrace of a receptive public—was halted. The official determination was that less broadcasting competition was demanded, not more.

That was just the beginning. The book documents so much more that has gone very wrong. Even today, vast valuable spectrum is wasted broadcasting TV signals that almost no one uses, as most everyone gets cable TV. In addition,

The White House estimates that nearly 60 percent of prime spectrum is set aside for federal government use .. [this] substantially understates the amount of spectrum it consumes.

Sometimes people argue that we need an FCC to say who can use which spectrum because some public uses are needed. After all, not all land can be private, as we need public parks. Hazlett says we don’t use a federal agency to tell everyone who gets which land. Instead the public buys general land to create parks. Similarly, if the government needs spectrum, it can buy it just like everyone else. Then we’d know a lot better how much any given government action that uses spectrum is actually costing us.

Is the terrible regulation of spectrum an unusual case, or is most regulation that bad? One plausible theory is that we are more willing to believe that a strange complex tech needs regulating, and so such things tend to be regulated worse. This fits with nuclear power and genetically modified food, as far as I understand them. Social media has so far escaped regulation because it doesn’t seem strange – it seems simple and easy to understand. It has complexities of course, but behind the scenes.

This AI Boom Will Also Bust

Imagine an innovation in pipes. If this innovation were general, something that made all kinds of pipes cheaper to build and maintain, the total benefits could be large, perhaps even comparable to the total amount we spend on pipes today. (Or even much larger.) And if most of the value of pipe use were in many small uses, then that is where most of these economic gains would be found.

In contrast, consider an innovation that only improved the very largest pipes. This innovation might, for example, cost a lot to use per meter of pipe, and so only make sense for the largest pipes. Such an innovation might make for very dramatic demonstrations, with huge vivid pipes, and so get media coverage. But the total economic gains here will probably be smaller; as most of pipe value is found in small pipes, gains to the few biggest pipes can only do so much.

Now consider my most viral tweet so far:

This got almost universal agreement from those who see such issues play out behind the scenes. And by analogy with the pipe innovation case, this fact tells us something about the potential near-term economic impact of recent innovations in Machine Learning. Let me explain.

Most firms have piles of data they aren’t doing much with, and far more data that they could collect at a modest cost. Sometimes they use some of this data to predict a few things of interest. Sometimes this creates substantial business value. Most of this value is achieved, as usual, in the simplest applications, where simple prediction methods are applied to simple small datasets. And the total value achieved is only a small fraction of the world economy, at least as measured by income received by workers and firms who specialize in predicting from data.

Many obstacles limit such applications. For example, the value of better predictions for related decisions may be low, data may be in a form poorly suited to informing predictions, making good use of predictions might require larger reorganizations, and organizations that hold parts of the data may not want to lose control of that data. Available personnel may lack sufficient skills to apply the most effective approaches for data cleaning, merging, analysis, and application.

No doubt many errors are made in choices of when to analyze what data how much and by whom. Sometimes they will do too much prediction, and sometimes too little. When tech changes, orgs will sometimes wait too long to try new tech, and sometimes will not wait long enough for tech to mature. But in ordinary times, when the relevant technologies improve at steady known rates, we have no strong reason to expect these choices to be greatly wrong on average.

In the last few years, new “deep machine learning” prediction methods are “hot.” In some widely publicized demonstrations, they seem to allow substantially more accurate predictions from data. Since they shine more when data is plentiful, and they need more skilled personnel, these methods are most promising for the largest prediction problems. Because of this new fashion, at many firms those who don’t understand these issues well are pushing subordinates to seek local applications of these new methods. Those subordinates comply, at least in appearance, in part to help themselves and their organization appear more skilled.

One result of this new fashion is that a few big new applications are being explored, in places with enough data and potential prediction value to make them decent candidates. But another result is the one described in my tweet above: fashion-induced overuse of more expensive new methods on smaller problems to which they are poorly matched. We should expect this second result to produce a net loss on average. The size of this loss could be enough to outweigh all the gains from the few big new applications; after all, most value is usually achieved in many small problems.

But I don’t want to draw a conclusion here about the net gain or loss. I instead want to consider the potential for this new prediction tech to have an overwhelming impact on the world economy. Some see this new fashion as just the first swell of a tsunami that will soon swallow the world. For example, in 2013 Frey and Osborne famously estimated:

About 47 percent of total US employment is at risk .. to computerisation .. perhaps over the next decade or two.

If new prediction techs induced a change that big, they would be creating a value that is a substantial fraction of the world economy, and so consume a similar fraction of world income. If so, the prediction industry would in a short time become vastly larger than it is today. If today’s fashion were the start of that vast growth, we should not only see an increase in prediction activity, we should also see an awe-inspiring rate of success within that activity. The application of these new methods should be enabling huge new revenue streams, across a very wide range of possible application areas. (Added: And the prospect of that should be increasing stock values in this area far more than we’ve seen.)

But I instead hear that within the areas where most prediction value lies, most attempts to apply this new tech actually produce less net value than would be achieved with old tech. I hear that prediction analysis tech is usually not the most important part of the process, and that a recent obsession with showing proficiency in this new analysis tech has led to neglect of the more important and basic issues of thinking carefully about what you might want to predict with what data, and then carefully cleaning and merging your data into a more useful form.
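One cheap discipline that follows from this (my suggestion, with generic names, not a claim about any particular firm): before paying for the new methods on a modest dataset, check them against a simple baseline under cross-validation:

    # Sketch: compare a simple baseline to a fancier model on your own
    # modest dataset before adopting the new methods.
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    def simple_baseline_suffices(X, y):
        simple = LogisticRegression(max_iter=1000)
        fancy = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000)
        simple_acc = cross_val_score(simple, X, y, cv=5).mean()
        fancy_acc = cross_val_score(fancy, X, y, cv=5).mean()
        return simple_acc >= fancy_acc  # often true on small, simple problems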

Yes, there must be exceptions, and some of those may be big. So a few big applications may enable big value. And self-driving cars seem a plausible candidate, a case where prediction is ready to give large value, high enough to justify using the most advanced prediction tech, and where lots of the right sort of data is available. But even if self-driving vehicles displace most drivers within a few decades, that rate of job automation wouldn’t be out of the range of our historical record of job automation. So it wouldn’t show that “this time is different.” To be clearly out of that range, we’d need another ten jobs that big also displaced in the same period. And even that isn’t enough to automate half of all jobs in two decades.

The bottom line here is that while some see this new prediction tech as like a new pipe tech that could improve all pipes, no matter their size, it is actually more like a tech only useful on very large pipes. Just as it would be a waste to force a pipe tech only useful for big pipes onto all pipes, it can be a waste to push advanced prediction tech onto typical prediction tasks. And the fact that this new tech is mainly useful only on rare big problems suggests that its total impact will be limited. It just isn’t the sort of thing that can remake the world economy in two decades. To the extent that the current boom is based on such grand hopes, this boom must soon bust.

Merkle’s Futarchy

My futarchy paper, Shall We Vote on Values But Bet on Beliefs?, made public in 2000 but officially “published” in 2013, has gotten more attention lately as some folks talk about using it to govern blockchain organizations. In particular, Ralph Merkle (co-inventor of public key cryptography) has a recent paper on using futarchy within “Decentralized Autonomous Organizations.”

I tried to design my proposal carefully to avoid many potential problems. But Merkle seems to have thrown many of my cautions to the wind. So let me explain my concerns with his variations.

First, I had conservatively left existing institutions intact for Vote on Values; we’d elect representatives to oversee the definition and measurement of a value metric. Merkle instead has each citizen each year report a number in [0,1] saying how well their life has gone that year:

Annually, all citizens are asked to rank the year just passed between 0 and 1 (inclusive). .. it is intended to provide information about one person’s state of satisfaction with the year that has just passed. .. Summed over all citizens and divided by the number of citizens, this gives us an annual numerical metric between 0 and 1 inclusive. .. An appropriately weighted sum of annual collective welfares, also extending indefinitely into the future, would then give us a “democratic collective welfare” metric. .. adopting a discount rate seems like at least a plausible heuristic. .. To treat their death: .. ask the person who died .. ask before they die. .. [this] eliminates the need to evaluate issues and candidates. The individual citizen is called upon only to determine whether the year has been good or bad for themselves. .. We’ve solved .. the need to wade through deceptive misinformation.

Yes, it could be easy to decide how your last year has gone, even if it is harder to put that on a scale from worst to best possible. But reporting that number is not your best move here! Your optimal strategy here is almost surely “bang-bang”, i.e., reporting either 0 or 1. And you’ll probably want to usually give the same consistent answer year after year. So this is basically a vote, except on “was this last year a good or a bad year?”, which in practice becomes a vote on “has my life been good or bad over the last decades.” Each voter must pick a threshold where they switch their vote from good to bad, a big binary choice that seems ripe for strong emotional distortions. That might work, but it is pretty far from what voters have done before, so a lot of voter learning is needed.
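To see why (my gloss, using Merkle’s own aggregation rule): the annual collective welfare is just the average of reports,

    W_t = (1/N_t) * Σ_i s_{i,t},   so   ∂W_t/∂s_{i,t} = 1/N_t > 0.

Your report enters the metric linearly, so whichever direction you want the measured welfare (and hence policy) to move, your influence is maximized at an endpoint: report 1 to pull it up, 0 to pull it down. Interior reports just waste influence.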

I’m much more comfortable with futarchy that uses value metrics tied to the reason an organization exists. Such as using the market price of investment to manage an investment, attendance to manage a conference, or people helped (& how much) to manage a charity.

If there are too many bills on the table at any one time for speculators to consider, many bad ones can slip through and have effects before bills to reverse them can be proposed and adopted. So I suggested starting with a high bar for bills, but allowing new bills to lower the bar. Merkle instead starts with a very low bar that could be raised, and I worry about all the crazy bills that might pass before the bar rises:

Initially, anyone can propose a bill. It can be submitted at any time. .. At any time, anyone can propose a new method of adopting a bill. It is evaluated and put into effect using the existing methods. .. Suppose we decided that it would improve the stability of the system if all bills had a mandatory minimum consideration period of three months before they could be adopted. Then we would pass a bill modifying the DAO to include this provision.

I worried that the basic betting process could bias the basic rules, so I set basic voting and process rules off limits from bet changes, and set an independent judiciary to judge if rules are followed. Merkle instead allows this basic bet process to change all the rules, and all the judges, which seems to me to risk self-supporting rule changes:

How the survey is conducted, and what instructions are provided, and the surrounding publicity and environment, will all have a great impact on the answer. .. The integrity of the annual polls would be protected only if, as a consequence, it threatened the lives or the well-being of the citizens. .. The simplest approach would be to appoint, as President, that person the prediction market said had the highest positive impact on the collective welfare if appointed as President. .. Similar methods could be adopted to appoint the members of the Supreme Court.

Finally, I said explicitly that when the value formula changes then all the previous definitions must continue to be calculated to pay off past bets. It isn’t clear to me that Merkle adopts this, or if he allows the bet process to change value definitions, which also seems to me to risk self-supporting changes:

We leave the policy with respect to new members, and to births, to our prediction market. .. difficult to see how we could justify refusing to adopt a policy that accepts some person, or a new born child, as a member, if the prediction market says the collective welfare of existing members will be improved by adopting such a policy. .. Of greater concern are changes to the Democratic Collective Welfare metric. Yet even here, if the conclusion reached by the prediction market is that some modification of the metric will better maximize the original metric, then it is difficult to make a case that such a change should be banned.

I’m happy to see the new interest in futarchy, but I’m also worried that sloppy design may cause failures that are blamed on the overall concept instead of on implementation details. As recently happened to the DAO concept.

Lognormal Jobs

I often meet people who think that because computer tech is improving exponentially, its social impact must also be exponential. So as soon as we see any substantial social impact, watch out, because a tsunami is about to hit. But it is quite plausible to have exponential tech gains translate into only linear social impact. All we need is a lognormal distribution, as in this diagram:

[Figure: jobs distributed lognormally over the computing power required to automate them]

Imagine that each kind of job that humans do requires a particular level of computing power in order for computers to replace humans on that job. And imagine that these job power levels are distributed lognormally.

In this case an exponential growth in computing power will translate into a linear rate at which computers displace humans on jobs. Of course jobs may clump along this log-computing-power axis, giving rise to bursts and lulls in the rate at which computers displace jobs. But over the long run we could see a relatively steady rate of job displacement even with exponential tech gains. Which I’d say is roughly what we do see.
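Here is a minimal simulation of this point (illustrative parameters only, assumed for the sketch):

    # Sketch: lognormal job thresholds plus exponential compute growth
    # yield a roughly steady displacement rate; parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    # log2 of the computing power needed to automate each of 100,000 jobs
    log2_thresholds = rng.normal(loc=20.0, scale=8.0, size=100_000)

    for year in range(0, 81, 10):
        log2_power = 0.5 * year  # assume compute doubles every two years
        displaced = (log2_thresholds <= log2_power).mean()
        print(f"year {year:2d}: {displaced:5.1%} of jobs automatable")

Near the middle of the distribution, each decade displaces a roughly similar share, even though computing power grows exponentially throughout.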

Added 3am: Many things are distributed lognormally.

Investors Not Barking

Detective: “Is there any other point to which you would wish to draw my attention?”

Holmes: “To the curious incident of the dog in the night-time.”

Detective: “The dog did nothing in the night-time.”

Holmes: “That was the curious incident.”

We’ve seen several centuries of continuing economic growth enabled by improving tech (broadly conceived). Some of that tech can be seen as “automation” where machines displace humans on valued tasks.

The economy has consistently found new tasks for humans, to make up for displaced tasks. But while the rate of overall economic growth has been relatively steady, we have seen fluctuations in the degree of automation displacement in any given industry and region. This has often led to local anxiety about whether we are seeing the start of a big trend deviation – are machines about to suddenly take over most human jobs fast?

Of course so far such fears have not yet been realized. But around the year 2000, near the peak of the dotcom tech boom, we arguably did see substantial evidence of investors suspecting a big trend-deviating disruption. During a big burst of computer-assisted task displacement, the tech sector should soon see a big increase in revenue. So anticipating a substantial chance of such a burst justifies bigger stock values for related firms. And this graph of the sector breakdown of the S&P500 over the last few decades shows that investors then put their money where their mouths were regarding such a possible big burst:

[Figure: sector breakdown of the S&P 500 over the last few decades]

In the last few years, we’ve heard another burst of anxiety about an upcoming big burst of automation displacing humans on tasks. It is one of our anxieties du jour. But if you look at the right side of the graph above, you’ll note that we are not now seeing a boom in the relative value of tech sector stocks.

We see the same signal if we look at majors chosen by college graduates. A big burst of automation not only justifies bigger tech stock values, it also justifies more students majoring in tech. And during the dotcom boom we did see a big increase in students choosing to major in computer science. But we have not seen such an increase during the last decade.

So the actions of both stock investors and college students suggest that they do not believe we are at substantial risk of a big burst of automation soon. These dogs are not barking. Even if robots taking jobs is what lots of talking heads are talking about. Because talking heads aren’t putting their money, or their time, where their mouths are.

Old Prof Vices, Virtues

Tyler on “How bad is age discrimination in academia?”:

I believe it is very bad, although I do not have data.

I started my Ph.D. at the age of 34, and Tyler hired me here at GMU at the age of 40. So by my lights Tyler deserves credit for overcoming the age bias. Tyler doesn’t discuss why this bias might exist, but a Stanford history prof explained his theory to me when I was in my early 30s talking to him about a possible PhD. He said that older students are known for working harder and better, but also for being less pliable: they have more of their own ideas about what is interesting and important.

I think that fits with what I’ve heard from others, and have seen for myself, including in myself. People complain that academia builds too little on “real world” experience, and that disciplines are too insular. And older students help with that. But in fact the incentive for each prof in picking students isn’t to solve the wider problems with academia. It is instead to expand an empire by creating intellectual clones of him or herself. And for that selfish goal, older students are worse. My mentors likely feel this way about me, that I worked hard and did interesting stuff, but I was not a good investment for expanding their legacy.

Interestingly this explanation is somewhat the opposite of the usual excuses for age bias in Silicon Valley. There the usual story is that older people won’t take as many risks, and that they aren’t as creative. But the complaint about older Ph.D.s is exactly that they take too many risks, and that they are too creative. If only they would just do what they are told, and copy their mentors, then their hard work and experience could be more valued.

I find it hard to believe that older workers change their nature this much between tech and academia. Something doesn’t add up here. And for what it’s worth, I’ve been personally far more impressed by the tech startups I’ve known that are staffed by older folks.
