Author Archives: Robin Hanson

There’s Always Subtext

Our new book, The Elephant in the Brain, argues that hidden motives drive much of our behavior. If so, then to make fiction seem realistic, those who create it will need to be aware of such hidden motives. For example, back in 2009 I wrote:

Impro, a classic book on theatre improvisation, convincingly shows that people are better actors when they notice how status moves infuse most human interactions. Apparently we are designed to be very good at status moves, but to be unconscious of them.

The classic screenwriting text Story, by Robert McKee, agrees more generally, and explains it beautifully:

Text means the sensory surface of a work of art. In film, it’s the images onscreen and the soundtrack of dialogue, music, and sound effects. What we see. What we hear. What people say. What people do. Subtext is the life under that surface – thoughts and feelings both known and unknown, hidden by behavior.

Nothing is what it seems. This principle calls for the screenwriter’s constant awareness of the duplicity of life, his recognition that everything exists on at least two levels, and that, therefore, he must write a simultaneous duality: First, he must create a verbal description of the sensory surface of life, sight and sound, activity and talk. Second, he must create the inner world of conscious and unconscious desire, action and reaction, impulse and id, genetic and experiential imperatives. As in reality, so in fiction: He must veil the truth with a living mask, the actual thoughts and feelings of characters behind their saying and doing.

An old Hollywood expression goes “If the scene is about what the scene is about, you’re in deep shit.” It means writing “on the nose,” writing dialogue and activity in which a character’s deepest thoughts and feelings are expressed by what the character says and does – writing the subtext directly into the text.

Writing this, for example: Two attractive people sit opposite each other at a candlelit table, the lighting glinting off the crystal wineglasses and the dewy eyes of the lovers. Soft breezes billow the curtains. A Chopin nocturne plays in the background. The lovers reach across the table, touch hands, look longingly into each other’s eyes, say, “I love you, I love you” .. and actually mean it. This is an unactable scene and will die like a rat in the road. ..

An actor forced to do the candlelit scene might attack it like this: “Why have these people gone out of their way to create this movie scene? What’s with the candlelight, soft music, billowing curtains? Why don’t they just take their pasta to the TV set like normal people? What’s wrong with this relationship? Because isn’t that life? When do the candles come out? When everything’s fine? No. When everything’s fine we take our pasta to the TV set like normal people.” So from that insight the actor will create a subtext. Now as we watch, we think: “He says he loves her and maybe he does, but look, he’s scared of losing her. He’s desperate.” Or from another subtext: “He says he loves her, but look, he’s setting her up for bad news. He’s getting ready to walk out.”

The scene is not about what it seems to be about. It’s about something else. And it’s that something else – trying to regain her affection or softening her up for the breakup – that will make the scene work. There’s always a subtext, an inner life that contrasts with or contradicts the text. Given this, the actor will create a multilayered work that allows us to see through the text to the truth that vibrates beyond the eyes, voice, and gestures of life. ..

In truth, it’s virtually impossible for anyone, even the insane, to fully express what’s going on inside. No matter how much we wish to manifest our deepest feelings, they elude us. We never fully express the truth, for in fact we rarely know it. .. Nor does this mean that we can’t write powerful dialogue in which desperate people try to tell the truth. It simply means that the most passionate moments must conceal an even deeper level. ..

Subtext is present even when a character is alone. For if no one else is watching us, we are. We wear masks to hide our true selves from ourselves. Not only do individuals wear masks, but institutions do as well and hire public relations experts to keep them in place. (pp.252-257)

Added 17Sep: More on subtext of sound and images:

The power of an organized return of images is immense, as variety and repetition drive the Image System to the seat of the audience’s unconscious. Yet, and most important, a film’s poetics must be handled with virtual invisibility and go consciously unrecognized. (p.402) ..

Symbolism is powerful, more powerful than most realize, as long as it bypasses the conscious mind and slips into the unconscious. As it does while we dream. The use of symbolism follows the same principle as scoring a film. Sound doesn’t need cognition, and music can deeply affect us when we’re unconscious of it. In the same way, symbols touch us and move us – as long as we don’t recognize them as symbolic. Awareness of a symbol turns it into a neutral, intellectual curiosity, powerless and virtually meaningless. (p.407)


Marching Markups

This new paper by De Loecker and Eeckhout will likely become a classic:

We document the evolution of markups based on firm-level data for the US economy since 1950. Initially, markups are stable, even slightly decreasing. In 1980, average markups start to rise from 18% above marginal cost to 67% now. .. Increase in average market power .. can account for .. slowdown in aggregate output. .. The rise in market power is consistent with seven secular trends in the last three decades.

Yes, US public firms have only 1/3 of US jobs, and an even smaller fraction of the world’s. Even so, this is a remarkably broad result. I’d feel a bit better if I understood why their firm-level simple aggregation of total sales divided by total variable costs (their Figure B.5a) gives only a 26% markup today, but I’ll give them the benefit of the doubt for now. (And that figure was 12% in 1980, so it has also risen a lot.) Though see Tyler’s critique.
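
To see how those two numbers can diverge as a matter of arithmetic alone, note that an average of firm-level markups weights firms very differently than a ratio of totals does. A hypothetical two-firm illustration (my numbers, purely to show the mechanism):

```latex
% Hypothetical firms, for illustration only.
\text{Firm A: sales } 10,\ \text{variable costs } 6
  \;\Rightarrow\; \text{markup} \approx 67\% \\
\text{Firm B: sales } 100,\ \text{variable costs } 80
  \;\Rightarrow\; \text{markup} = 25\% \\
\text{Simple average: } \tfrac{1}{2}(67\% + 25\%) = 46\%, \qquad
\text{Aggregate: } \frac{10+100}{6+80} - 1 \approx 28\%
```

So if high-markup firms are small relative to total sales, an aggregate markup can sit far below an average markup without either number being an error.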

The authors are correct that this can easily account for the apparent US productivity slowdown. Holding real productivity constant, if firms move up their demand curves to sell less at higher prices, then total output, and measured GDP, get smaller. Their numerical estimates suggest that, correcting for this effect, there has been no decline in US productivity growth since 1965. That’s a pretty big deal.
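
To make that concrete with a stylized example (my numbers and functional form, not theirs): suppose constant-elasticity demand q = k·p^(−ε) and constant marginal cost c, so price is the markup factor μ times c. Then quantity, and hence measured real output, scales as μ^(−ε):

```latex
% Illustrative only: q = k p^{-\varepsilon},\ p = \mu c.
\frac{q_{\text{now}}}{q_{1980}}
  = \left(\frac{\mu_{\text{now}}}{\mu_{1980}}\right)^{-\varepsilon}
  = \left(\frac{1.67}{1.18}\right)^{-2} \approx 0.50
  \qquad (\varepsilon = 2)
```

With an elasticity of two, the rise in markups alone would roughly halve output relative to the no-change counterfactual, with no change at all in underlying technology.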

Accepting the main result that markups have been marching upward, the obvious question to ask is: why? But first, let’s review some clues from the paper. While industries with smaller firms tend to have higher markups, within each such industry bigger firms have larger markups, and firms with higher markups pay higher dividends.

There has been little change in output elasticity, i.e., the rate at which variable costs change with the quantity of units produced. (So this isn’t about new scale economies.) There has also been little change in the bottom half of the distribution of markups; the big change has been a big stretching in the upper half. Markups have increased more in larger industries, and the main change has been within industries, rather than a changing mix of industries in the economy. The fractions of income going to labor and to tangible capital have fallen, and firms respond less than they once did to wage changes. Firm accounting profits as a fraction of total income have risen fourfold since 1980.

These results seem roughly consistent with a rise in superstar firms:

If .. changes advantage the most productive firms in each industry, product market concentration will rise as industries become increasingly dominated by superstar firms with high profits and a low share of labor in firm value-added and sales. .. aggregate labor share will tend to fall. .. industry sales will increasingly concentrate in a small number of firms.

Okay, now let’s get back to explaining these marching markups. In theory, there might have been a change in the strategic situation. Perhaps price collusion got easier, or the game became less like price competition and more like quantity competition. But info tech should have both made it easier for law enforcement to monitor collusion, and also made the game more like price competition. Also, anti-trust just can’t have much effect on these small-firm industries. So I’m quite skeptical that strategy changes account for the main effect here. The authors see little overall change in output elasticity, and so I’m also pretty skeptical that there’s been any big overall change in the typical shape of demand or cost curves.

If, like me, you buy the standard “free entry” argument for zero expected economic profits of early entrants, then the only remaining possible explanation is an increase in fixed costs relative to variable costs. Now as the paper notes, the fall in tangible capital spending and the rise in accounting profits suggest that this isn’t so much about short-term tangible fixed costs, like the cost to buy machines. But that still leaves a lot of other possible fixed costs, including real estate, innovation, advertising, firm culture, brand loyalty and prestige, regulatory compliance, and context specific training. These all require long term investments, and most of them aren’t tracked well by standard accounting systems.
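
The free-entry logic here can be put in one line: if entry pushes expected economic profits to zero, revenue must just cover variable costs plus (amortized) fixed costs, so the markup over marginal cost equals the ratio of fixed to variable costs:

```latex
% Zero-profit condition under free entry, constant marginal cost c.
p q = c q + F
\;\;\Longrightarrow\;\;
\frac{p - c}{c} = \frac{F}{c q}
```

Markups can then march upward without anyone earning excess profits, so long as fixed costs F grow relative to variable costs cq.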

I can’t tell which of these fixed costs have risen most, though hopefully folks will collect enough data on them to see which ones correlate most strongly with the industries and firms where markups have risen most. But I will invoke a simple hypothesis that I’ve discussed many times, which predicts a general rise of fixed costs: increasing wealth leading to stronger tastes for product variety. Simple models of product differentiation say that as customers care more about getting products nearer to their ideal point, more products are created and fixed costs become a larger fraction of total costs.
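
Here is a minimal sketch of that variety story, using a standard Salop-style circle model with my own illustrative parameters: a unit mass of consumers sits on a circle, each entrant pays fixed cost F, and a taste parameter t prices the distance between a product and a consumer’s ideal point. Stronger tastes for variety (higher t) raise both the number of products and the equilibrium markup:

```python
# Salop circle model, illustrative parameters (not from the paper).
# Standard results: price p = c + t/N, each of N firms sells 1/N,
# and free entry (profit = t/N^2 - F = 0) gives N = sqrt(t/F).
import math

c, F = 1.0, 0.01  # marginal cost, fixed entry cost (assumed values)

for t in [0.05, 0.2, 0.8]:  # rising taste for variety
    N = math.sqrt(t / F)                  # firm count under free entry
    markup = (t / N) / c                  # (p - c)/c over marginal cost
    fixed_share = N * F / (N * F + c)     # fixed costs / total costs
    print(f"t={t:.2f}  firms={N:5.1f}  markup={markup:5.1%}  "
          f"fixed share={fixed_share:5.1%}")
```

As t rises, both variety N and the markup sqrt(t·F)/c rise, while every firm still earns zero profit; the markup is again just the ratio of fixed to variable costs.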

Note that increasing product variety is consistent with increasing concentration in a smaller number of firms, if each firm offers many more products and services than before.

Added 25Aug: Karl Smith offers a similar, if more specific, explanation.


My TED/TEDx Talks

My TED video on Age of Em is finally out:

As you can see, the TED folks do great at video editing. I’m hoping this will attract more viewers than the 67K of my first TEDx talk on ems 4 years ago, and the 48K of my TEDx on the Great Filter 3 years ago. As I said back in May:

The TED community seems to come about as close as I can realistically expect to my ideal religion.

I also have a great TEDx video on Elephant in the Brain, recorded just 3 weeks later:

Added 25 Aug: 280K views of my TED video in the first day!


How Social Is Reason?

With The Enigma of Reason, out last April, Hugo Mercier and Dan Sperber have written an important book on an important but neglected topic. They argue first that humans, and only humans, have a brain module that handles abstract reasoning:

Reason is indeed [a] specialized [module of inference]; it draws interpretive inferences just about reasons.

Second, they argue for a new theory of reason. Previously, scholars have focused on reason in the context of a sincere attempt to infer truth:

Most of the philosophers and psychologists we talked to endorse some version of the dominant intellectualist view: they see reason as a means to improve individual cognition and arrive on one’s own at better beliefs and decisions. Reason, they take for granted, should be objective and demanding.

In this view, observed defects in human reasoning are to be seen as understandable errors, accommodations to complexity, and minor corrections due to other minor selection pressures. Sincerely inferring truth is the main thing. Mercier and Sperber, however, argue that one social correction isn’t at all minor: reason is better understood in the context of a speaker who is trying to persuade a listener who sincerely seeks to infer truth. Speaker “biases” are just what one should expect from speakers seeking to persuade:

In our interactionist account, reason’s bias and laziness aren’t flaws; they are features that help reason fulfill its function. People are biased to find reasons that support their point of view because this is how they can justify their action and convince others to share their beliefs.

Mercier and Sperber do successfully show that many “defects” in human reasoning can be understood as arising from insincere speaker motives. However, just as we can question speakers’ motives, we can also question listener motives. Couldn’t listeners also be concerned about the social consequences of their inferences? Listeners might want to agree to show submission or favor to a speaker, and ignore or disagree to show disfavor or dominance. And listeners may want to agree with what they expect others to agree with, to sound reasonable and to show loyalty.

Mercier and Sperber seem to be aware of many such listener motives:

Luria used problems that were logically trivial but .. unfamiliar:

In the Far North, where there is snow, all bears are white. Novaya Zemlya is in the Far North. What color are bears there?

When unschooled peasants were interviewed, the vast majority seemed at a loss, providing answers such as “There are many sorts of bears.” .. His experiments were successfully replicated with several unschooled populations. .. In small-scale populations, people are very cautious with their assertions, only stating a position when they have a good reason to. .. Only a fool would dare to make such a statement .. she could not appropriately defend. ..

Because of the intense pressure to maintain social harmony, “the Japanese are not trained to argue and reason.” ..

The overlap between the proper and the actual domain of reasoning remains partial. There are false negatives: people in a dominant position or in the vocal majority might pay little attention to the opinion of subordinates or minorities and fail to detect disagreements. There are also false positives; either clashes of ideas that occur between third parties with whom we are not in a position to interact .. or clashes of ideas within oneself. ..

Throughout the centuries, smart physicians felt justified in making decisions that cost patients their lives. .. If they were eager to maintain their reputation, they were better off bleeding their patients. ..

You might be ill-judged by people who are not aware of this argument, and you might not have the opportunity to explain the reason for your choice.

Mercier and Sperber treat these various effects as minor corrections that don’t call into question their basic theory, even as they complain that the traditional view of reason doesn’t attend enough to certain effects that their theory explains. But it seems to me that in addition to explaining some effects as due to insincere speaker motives, a better theory of reason could also explain other effects as due to insincere listener motives.

In the modern world, while we usually give lip service to the idea that we are open to letting anyone persuade us on anything with a good argument, by the time folks get to be my age they know that such openings are in fact highly constrained. For example, early on in my relationship with my wife she declared that as I was better at arguing, key decisions were just not going to be made on the basis of better arguments.

Even in academia, little value is placed on simple relevant arguments, compared to demonstrating the mastery of difficult tools. And in our larger world, the right to offer what looks like a critical argument is usually limited to the right sort of people who have the right sort of relation in the right sort of contexts. Even then people know to avoid certain kinds of arguments, even if those arguments would in fact persuade if pushed hard enough. And most speakers know they are better off arguing for what listeners want to believe, rather than for unpleasant conclusions.

Mercier and Sperber suggest that arguing used to be different, and better:

When a collective decision has to be made in a modern democracy, people go to the voting booth. Our ancestors sat down and argued – at least if present-day small-scale societies are any guide to the past. In most such societies across the globe, when a grave problem threatens the group, people gather, debate, and work out a solution that most find satisfying. ..

When the overriding concern of people who disagree is to get things right, argumentation should not only make them change their mind, it should make them change their mind for the best.

I’d like to believe that argumentation was all different and better back then, with careful speakers well disciplined by sincere listeners. But I’m skeptical. I expect that the real selection pressures on our abilities to reason have always reflected these complex social considerations, for both speakers and listeners. And we won’t really understand human reasoning until we think through what reasoning behaviors respond well to these incentives.


Why Ethnicity, Class, & Ideology? 

Individual humans can be described via many individual features that are useful in predicting what they do. Such features include gender, age, personality, intelligence, ethnicity, income, education, profession, height, geographic location, and so on. Different features are more useful for predicting different kinds of behavior.

One kind of human behavior is coalition politics; we join together into coalitions within political and other larger institutions. People in the same coalition tend to have features in common, though which exact features varies by time and place. But while in principle the features that describe coalitions could vary arbitrarily by time and place, we in actual fact see more consistent patterns.

Now when forming groups based on shared features, it makes sense to choose features that matter more in individual lives. The more life decisions a feature influences, the more those who share this feature may plausibly share desired policies, policies that their coalition could advocate. So you might expect political coalitions to be mostly based on individual features that are very useful for predicting individual behavior.

You might be right about small scale coalitions, such as cliques, gangs, and clubs. And you might even be right about larger scale political coalitions in the ancient world. But you’d be wrong about our larger scale political coalitions today. While there are often weak correlations with such features, larger scale political coalitions are not mainly based on the main individual features of gender, age, etc. Instead, they are more often based on ethnicity, class, and “political ideology” preferences. While ideology is famously difficult to characterize, and it does vary by time and place, it is also somewhat consistent across time and space.

In this post, I just want to highlight this puzzle, not solve it: why are these the most common individual features on which large scale political coalitions are based? Yes, in some times and places ethnicity and class matter so much that they strongly predict individual behavior. But even when they don’t matter much for policy preferences, they are still often the basis of coalitions. And why is political ideology so attractive a basis for coalitions, when it matters so little in individual lives?

I see two plausible types of theories here. One is a theory of current functionality; somehow these features actually do capture the individual features that best predict member positions on typical issues. Another is a theory of past functionality; perhaps in long-past forager environments, something like these features were the most relevant. I now lean toward this second type of theory.


Ems in Walkaway

Some science fiction (sf) fans have taken offense at my claim that non-fiction analysis of future tech scenarios can be more accurate than sf scenarios, whose authors have other priorities. So I may periodically critique recent sf stories with ems for accuracy. Note that I’m not implying that such stories should have been more accurate; sf writing is damn hard work and its authors juggle many difficult tradeoffs. But many seem unaware of just how often accuracy is sacrificed.

The most recent sf I’ve read that includes ems is Walkaway, by “New York Times bestselling author” Cory Doctorow, published back in April:

Now that anyone can design and print the basic necessities of life—food, clothing, shelter—from a computer, there seems to be little reason to toil within the system. It’s still a dangerous world out there, the empty lands wrecked by climate change, dead cities hollowed out by industrial flight, shadows hiding predators animal and human alike. Still, when the initial pioneer walkaways flourish, more people join them.

The emotional center of Walkaway is elaborating this vision of a decentralized post-scarcity society trying to do without property or hierarchy. Though I’m skeptical, I greatly respect attempts to describe such visions in more detail. Doctorow, however, apparently thinks we economists make up bogus math for the sole purpose of justifying billionaire wealth inequality.


Organic Prestige Doesn’t Scale

Some parts of our world, such as academia, rely heavily on prestige to allocate resources and effort; individuals have a lot of freedom to choose topics, and are mainly rewarded for seeming impressive to others. I’ve talked before about how some hope for a “Star Trek” future where most everything is done that way, and I’m now reading Walkaway, outlining a similar hope. I was skeptical:

In academia, many important and useful research problems are ignored because they are not good places to show off the usual kinds of impressiveness. Trying to manage a huge economy based only on prestige would vastly magnify that inefficiency. Someone is going to clean shit because that is their best route to prestige?! (more)

Here I want to elaborate on this critique, with the help of a simple model. But first let me start with an example. Imagine a simple farming community. People there spend a lot of time farming, but they must also cook and sew. In their free time they play soccer and sing folk songs. As a result of doing all these things, they tend to “organically” form opinions about others based on seeing the results of their efforts at such things. So people in this community try hard to do well at farming, cooking, sewing, soccer, and folk songs.

If one person put a lot of effort into proving math theorems, they wouldn’t get much social credit for it. Others don’t naturally see outcomes from that activity, and not having done much math they don’t know how to judge if this math is any good. This situation discourages doing unusual things, even if no other social conformity pressures are relevant.

Now let’s put that in a simple model. Let there be a community containing people j, and topic areas i where such people can create accomplishments a_ij. Each person j seeks a high personal prestige p_j = Σ_i v_i a_ij, where v_i is the visibility of area i. They also face a budget constraint on accomplishment, Σ_i a_ij² ≤ b_j. This assumes diminishing returns to effort in each area.

In this situation, each person’s best strategy is to choose a_ij proportional to v_i. Assume that people tend to see the areas where they are accomplishing more, so that visibility v_i is proportional to an average of a_ij over individuals j. We then end up with many possible equilibria, with different visibility distributions. In each equilibrium, for all individuals j and areas i,k we have the same area ratios a_ij / a_kj = v_i / v_k.
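
A quick toy simulation of that result (my own code, with made-up numbers): everyone best-responds to current visibility, visibility then tracks average accomplishment, and whatever visibility profile we start from simply reproduces itself.

```python
# Toy version of the model: prestige p_j = sum_i v_i * a_ij,
# budget sum_i a_ij^2 <= b_j. The best response is a_ij proportional
# to v_i, scaled to exhaust the budget: a_ij = sqrt(b_j) * v_i / ||v||.
import numpy as np

rng = np.random.default_rng(0)
n_areas, n_people = 5, 40
b = rng.uniform(0.5, 2.0, n_people)   # individual effort budgets

v = rng.uniform(0.1, 1.0, n_areas)    # arbitrary initial visibility
v /= v.sum()

for _ in range(100):
    a = np.sqrt(b)[None, :] * (v / np.linalg.norm(v))[:, None]
    v = a.mean(axis=1)                # visibility tracks accomplishment
    v /= v.sum()

print("equilibrium visibility:", np.round(v, 3))
# Rerunning with a different initial v yields a different persistent
# distribution: every normalized visibility profile is an equilibrium.
```

Since nothing in the dynamics selects among these equilibria, nothing pushes the community toward rewarding the most useful activities.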

Giving individuals different abilities (such as via a budget constraint Σ_i a_ij²/x_ij ≤ b_j) could make individuals choose somewhat different accomplishments, but the same overall result obtains. Spillovers between activities, in visibility or effort, can have similar effects. Making some activities naturally more visible might push toward those activities, but many possible equilibria could still remain.

This wide range of equilibria isn’t very reassuring about the efficiency of this sort of prestige. But perhaps in a small foraging or farming community, group selection might over the long run push toward an efficient equilibrium where the high-visibility activities are also the most useful activities. However, larger societies need a strong division of labor, and with such a division it just isn’t feasible for everyone to evaluate everyone else’s specific accomplishments. This can be solved either by creating a command and status hierarchy that assigns people to tasks and promotes by merit, or by an open market with prestige going to those who make the most money. People often complain that doing prestige in these ways is “inauthentic”, and they prefer the “organic” feel of personally evaluating others’ accomplishments. But while the organic approach may feel better, it just doesn’t scale.

In academia today, patrons defer to insiders so much regarding evaluations that disciplines become largely autonomous. So economists evaluate other economists based mostly on their work in economics. If someone does work both in economics and also in another area, they are judged mostly just on their work in economics. This penalizes careers working in multiple disciplines. It also raises doubts about whether different disciplines get the right relative support – who exactly can be trusted to make such a choice well?

Interestingly, academic disciplines are already organized “inorganically” internally. Rather than each economist evaluating each other economist personally, they trust journal editors and referees, and then judge people based on their publications. Yes they must coordinate to slowly update shared estimates of which publications count how much, but that seems doable informally.

In principle all of academia could be unified in this way – universities could just hire the candidates with the best overall publication (or citation) record, regardless of in which disciplines they did what work. But academia hasn’t coordinated to do this, nor does it seem much interested in trying. As usual, those who have won by existing evaluation criteria are reluctant to change criteria, after which they would look worse compared to new winners.

This fragmented prestige problem hurts me especially, as my interests don’t fit neatly into existing groups (academic and otherwise). People in each area tend to see me as having done some interesting things in their area, but too little to count me as high status; they mostly aren’t interested in my contributions to other areas. I look good if you count my overall citations, for example, but not if you count only my citations or publications in each specific area.


Compare Institutions To Institutions, Not To Perfection

Mike Thicke of Bard College has just published a paper that concludes:

The promise of prediction markets to solve problems in assessing scientific claims is largely illusory, while they could have significant unintended consequences for the organization of scientific research and the public perception of science. It would be unwise to pursue the adoption of prediction markets on a large scale, and even small-scale markets such as the Foresight Exchange should be regarded with scepticism.

He gives three reasons:

[1.] Prediction markets for science could be uninformative or deceptive because scientific predictions are often long-term, while prediction markets perform best for short-term questions. .. [2.] Prediction markets could produce misleading predictions due to their requirement for determinable predictions. Prediction markets require questions to be operationalized in ways that can subtly distort their meaning and produce misleading results. .. [3.] Prediction markets offering significant profit opportunities could damage existing scientific institutions and funding methods.

Imagine that you want to travel to a certain island. Someone else tells you to row a boat there, but I tell you that a helicopter seems more cost effective for your purposes. So the rowboat advocate replies, “But helicopters aren’t as fast as teleportation, they take longer and cost more to go longer distances, and you need more expert pilots to fly in worse weather.” All of which is true, but not very helpful.

Similarly, I argue that with each of his reasons, Thicke compares prediction markets to some ideal of perfection, instead of to the actual current institutions they are intended to supplement. Let’s go through them one by one. On 1:

Even with rational traders who correctly assess the relevant probabilities, binary prediction markets can be expected to have a bias towards 50% predictions that is proportional to their duration. .. it has been demonstrated both empirically and theoretically .. long-term prediction markets typically have very low trading volume, which makes it unlikely that their prices react correctly to new information. .. [Hanson] envisions Wegener offering contracts ‘to be judged by some official body of geologists in a century’, but this would not have been an effective criterion given the problem of 50%-bias in long-term prediction markets. .. Prediction markets therefore would have been of little use to Wegener.

First, a predictable, known distortion isn’t a problem at all for forecasts; just invert the distortion to get an accurate forecast. Second, this is much less of an issue in combinatorial markets, where all questions are broken into thousands or more tiny questions, all of which have tiny probabilities, and a global constraint ensures they all add up to one. But more fundamentally, all institutions face the same problem that, all else equal, it is easier to give incentives for accurate short term predictions than for long term ones. This doesn’t show that prediction markets are worse in this case than status quo institutions.
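
For instance, if the long-term bias took a simple linear-shrinkage form toward 50% (an assumed form, purely to illustrate the inversion), recovering the underlying forecast is one line of algebra:

```latex
% Assumed bias: market price m shrinks the true forecast q toward 1/2,
% with shrinkage k in [0,1) growing with contract duration.
m = (1 - k)\,q + \tfrac{k}{2}
\;\;\Longrightarrow\;\;
q = \frac{m - k/2}{1 - k}
```

Any known, stable distortion can be inverted this way; only distortions of unknown form or size actually degrade the information in prices. On 2: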

Even if prediction markets correctly predict measured surface temperature, they might not predict actual surface temperature if the measured and actual surface temperatures diverge. .. Globally averaged surface air temperature [might be] a poor proxy for overall global temperature, and consequently prediction market prices based on surface air temperature could diverge from what they purport to predict: global warming. .. If interpreting the results of these markets requires detailed knowledge of the underlying subject, as is needed to distinguish global average surface air temperature from global average temperature, the division of cognitive labour promised by these markets will disappear. Perhaps worse, such predictions could be misinterpreted if people assume they accurately represent what they claim to.

All social institutions of science must deal with the facts that there can be complex connections between abstract theories and specific measurements, and that ignorant outsiders may misinterpret summaries. Yes prediction market summaries might mislead some, but then so can grant and article abstracts, or media commentary. No, prediction markets can’t make all such complexities go away. But this hardly means that prediction markets can’t support a division of labor. For example, in combinatorial prediction markets different people can specialize in the connections between different variables, together managing a large Bayesian network of predictions.
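
As a toy sketch of that division of labor (my own illustration, not a description of any deployed system): represent the combinatorial market as a joint distribution over linked binary questions. One specialist trades only the conditional tying measured to actual temperature, another trades only the marginal on actual warming, and the joint distribution keeps every implied price coherent:

```python
# Toy combinatorial market over two binary questions (illustrative):
#   W = "actual warming exceeds threshold"
#   S = "measured surface temperature exceeds threshold"
# State prices form a joint distribution; each specialist moves only
# the piece of it they know about.
from itertools import product

joint = {(w, s): 0.25 for w, s in product([0, 1], repeat=2)}  # uniform start

def trade_marginal(p_w1):
    """Climate specialist: reset P(W=1), leaving conditionals intact."""
    old = joint[(1, 0)] + joint[(1, 1)]
    for s in (0, 1):
        joint[(1, s)] *= p_w1 / old
        joint[(0, s)] *= (1 - p_w1) / (1 - old)

def trade_conditional(p_s1_given_w1):
    """Measurement specialist: reset P(S=1 | W=1), leaving P(W) intact."""
    p_w1 = joint[(1, 0)] + joint[(1, 1)]
    joint[(1, 1)] = p_w1 * p_s1_given_w1
    joint[(1, 0)] = p_w1 * (1 - p_s1_given_w1)

trade_marginal(0.7)        # one specialist's information
trade_conditional(0.9)     # another's
p_s1 = joint[(0, 1)] + joint[(1, 1)]
print(f"implied P(S=1) = {p_s1:.2f}, probabilities sum to {sum(joint.values()):.2f}")
```

Neither trader needs the other’s expertise, yet the market’s implied prediction for the measured quantity reflects both. On 3: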

If scientists anticipate that trading on prediction markets could generate significant profits, either due to being subsidized .. or due to legal changes allowing significant amounts of money to be invested, they could shift their attention toward research that is amenable to prediction markets. The research most amenable to prediction markets is short-term and quantitative: the kind of research that is already encouraged by industry funding. Therefore, prediction markets could reinforce an already troubling push toward short-term, application-oriented science. Further, scientists hoping to profit from these markets could withhold salient data in anticipation of using that data to make better informed trades than their peers. .. If success in prediction markets is taken as a marker of scientific credibility, then scientists may pursue prediction-oriented research not to make direct profit, but to increase their reputation.

Again, all institutions work better on short term questions. The fact that prediction markets also work better on short term questions does not imply that using them creates more emphasis on short term topics, relative to using some other institution. Also, every institution of science must offer individuals incentives, incentives which distract them from other activities. Such incentives also imply incentives to withhold info until one can use that info to one’s maximal advantage within the system of incentives. Prediction markets shouldn’t be compared to some perfect world where everyone shares all info without qualification; such worlds don’t exist.

Thicke also mentioned:

Although Hanson suggests that prediction market judges may assign non-binary evaluations of predictions, this seems fraught with problems. .. It is difficult to see how such judgements could be made immune from charges of ideological bias or conflict of interest, as they would rely on the judgement of a single individual.

Market judges don’t have to be individuals; there could be panels of judges. And existing institutions are also often open to charges of bias and conflicts of interest.

Unfortunately many responses to reform proposals fit the above pattern: reject the reform because it isn’t as good as perfection, ignoring the fact that the status quo is nothing like perfection.


Hazlett’s Political Spectrum

I just read The Political Spectrum by Tom Hazlett, which took me back to my roots. Well over three decades ago, I was inspired by Technologies of Freedom by Ithiel de Sola Pool. He made the case both that great things were possible with tech, and that the FCC has mismanaged the spectrum. In grad school twenty years ago, I worked on FCC auctions, and saw mismanagement behind the scenes.

When I don’t look much at the details of regulation, I can sort of think that some of it goes too far, and some not far enough; what else should you expect from a noisy process? But reading Hazlett I’m overwhelmed by just how consistently terrible spectrum regulation is. Not only would everything have been much better without FCC regulation, it actually was much better before the FCC! Herbert Hoover, who was head of the US Commerce Department at the time, broke the spectrum in order to then “save” it, a move that probably helped him rise to the presidency:

“Before 1927,” wrote the U.S. Supreme Court, “the allocation of frequencies was left entirely to the private sector . . . and the result was chaos.” The physics of radio frequencies and the dire consequences of interference in early broadcasts made an ordinary marketplace impossible, and radio regulation under central administrative direction was the only feasible path. “Without government control, the medium would be of little use because of the cacaphony [sic] of competing voices.”

This narrative has enabled the state to pervasively manage wireless markets, directing not only technology choices and business decisions but licensees’ speech. Yet it is not just the spelling of cacophony that the Supreme Court got wrong. Each of its assertions about the origins of broadcast regulation is demonstrably false. ..

The chaos and confusion that supposedly made strict regulation necessary were limited to a specific interval—July 9, 1926, to February 23, 1927. They were triggered by Hoover’s own actions and formed a key part of his legislative quest. In effect, he created a problem in order to solve it. ..

Radio broadcasting began its meteoric rise in 1920–1926 under common-law property rules .. defined and enforced by the U.S. Department of Commerce, operating under the Radio Act of 1912. They supported the creation of hundreds of stations, encouraged millions of households to buy (or build) expensive radio receivers. .. The Commerce Department .. designated bands for radio broadcasting. .. In 1923, .. [it] expanded the number of frequencies to seventy, and in 1924, to eighty-nine channels .. [Its] second policy was a priority-in-use rule for license assignments. The Commerce Department gave preference to stations that had been broadcasting the longest. This reflected a well-established principle of common law. ..

Hoover sought to leverage the government’s traffic cop role to obtain political control. .. In July 1926, .. Hoover announced that he would .. abandon Commerce’s powers. .. Commerce issued a well-publicized statement that it could no longer police the airwaves. .. The roughly 550 stations on the air were soon joined by 200 more. Many jumped channels. Conflicts spread, annoying listeners. Meanwhile, Commerce did nothing. ..

Now Congress acted. An emergency measure .. mandated that all wireless operators immediately waive any vested rights in frequencies .. the Radio Act .. provided for allocation of wireless licenses according to “public interest”. .. With the advent of the Federal Radio Commission in 1927, the growth of radio stations—otherwise accommodated by the rush of technology and the wild embrace of a receptive public—was halted. The official determination was that less broadcasting competition was demanded, not more.

That was just the beginning. The book documents so, so much more that has gone very wrong. Even today, vast valuable spectrum is wasted broadcasting TV signals that almost no one uses, as most everyone gets cable TV. In addition,

The White House estimates that nearly 60 percent of prime spectrum is set aside for federal government use .. [this] substantially understates the amount of spectrum it consumes.

Sometimes people argue that we need an FCC to say who can use which spectrum because some public uses are needed. After all, not all land can be private, as we need public parks. Hazlett says we don’t use a federal agency to tell everyone who gets which land. Instead, the public buys ordinary land to create parks. Similarly, if the government needs spectrum, it can buy it just like everyone else. Then we’d know a lot better how much any given government action that uses spectrum is actually costing us.

Is the terrible regulation of spectrum an unusual case, or is most regulation that bad? One plausible theory is that we are more willing to believe that a strange complex tech needs regulating, and so such things tend to be regulated worse. This fits with nuclear power and genetically modified food, as far as I understand them. Social media has so far escaped regulation because it doesn’t seem strange – it seems simple and easy to understand. It has complexities of course, but behind the scenes.


Foom Justifies AI Risk Efforts Now

Years ago I was honored to share this blog with Eliezer Yudkowsky. One of his main topics then was AI Risk; he was one of the few people talking about it back then. We debated this topic here, and while we disagreed I felt we made progress in understanding each other and exploring the issues. I assigned a much lower probability than he to his key “foom” scenario.

Recently AI risk has become something of an industry, with far more going on than I can keep track of. Many call working on it one of the most effectively altruistic things one can possibly do. But I’ve searched a bit, and as far as I can tell that foom scenario is still the main reason for society to be concerned about AI risk now. Yet there is almost no recent discussion evaluating its likelihood, and certainly nothing that goes into as much depth as did Eliezer and I. Even Bostrom’s book-length treatment basically just assumes the scenario. Many seem to think it obvious that if one group lets one AI get out of control, the whole world is at risk. It’s not (obvious).

As I just revisited the topic while revising Age of Em for paperback, let me try to summarize part of my position again here.
