Monthly Archives: April 2011

I Talk Wed. At Harvard

Next Wednesday I’ll talk at Harvard Business School on “Toward Information Accounting”:

What gets measured, gets done. Today, organizations account in great detail for revenue and the costs of materials and time, but have only crude informal accounting of info contributed to key organizational decisions. Because info cost and value are poorly measured, info production is neglected.

Can we use prediction markets to do better? Imagine speculative betting markets on many key organizational questions, and two key changes in business practice. First, let the division responsible for each decision declare lower-bound estimates of the value of more info on each related question. A division might, for example, declare that 1% lower error in estimating 3rd quarter sales of product X is worth at least $5000. There are standard ways to calculate such info value in specialized situations, such as inventory management.

Second, let trader accounts be denominated in a new “color of money.” Instead of doing zero-sum betting, the market for each question would be subsidized at a level matching its declared info value. As a result, the subsidy amounts lost to traders as prices become more accurate would on average correspond to that question’s declared info value. For example, on 3rd quarter sales of product X, its 0.7% lower error might have earned a $3500 subsidy, going to George who gained $2000, Sue who gained $1500, Sam who gained $1000, and Fred who lost $1000.

Given these two new practices, trader account gains could be interpreted as noisy estimates of the info value those accounts transmitted via their trades. Losses could be interpreted as info destruction. Simple statistics applied to the pattern of changes in an account over time could estimate its consistent gains, amid its temporary fluctuations. The total consistent gains for the accounts of a division could be credited to that division in its ordinary cost accounting, while that same amount is debited from the divisions who declared info value on those questions.
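The post leaves the market mechanism unspecified; one natural fit is an automated market maker run under a market scoring rule, since its subsidy is capped in advance and so can be set to a question’s declared info value. Below is a minimal sketch using the logarithmic market scoring rule (LMSR) for a binary question; the class, trader names, and dollar figures are illustrative assumptions, not from the post:

```python
import math

class LMSRMarket:
    """Binary-question market using the logarithmic market scoring
    rule (LMSR).  The sponsor's worst-case loss -- the subsidy that
    ends up with informed traders -- is bounded by b * ln(2)."""

    def __init__(self, b):
        self.b = b
        self.q = [0.0, 0.0]  # outstanding shares: [no, yes]

    def _cost(self, q):
        return self.b * math.log(math.exp(q[0] / self.b) +
                                 math.exp(q[1] / self.b))

    def price(self, outcome):
        """Current market probability of the outcome."""
        e = [math.exp(x / self.b) for x in self.q]
        return e[outcome] / sum(e)

    def buy(self, outcome, shares):
        """Buy shares of outcome (0 = no, 1 = yes); returns cash paid.
        Each share pays $1 if that outcome occurs."""
        new_q = list(self.q)
        new_q[outcome] += shares
        paid = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return paid

# Hypothetical usage: with a declared info value of $5000, pick b so
# the worst-case subsidy is exactly $5000.
market = LMSRMarket(b=5000 / math.log(2))
george_paid = market.buy(1, 8000)  # George bets "yes"
fred_paid = market.buy(0, 2000)    # Fred bets "no"
# If "yes" occurs, George's account gains 8000 - george_paid, Fred's
# gains -fred_paid, and the summed gains equal the sponsor's realized
# subsidy -- the quantity to credit in divisional cost accounting.
```

Because trader gains sum exactly to the sponsor’s realized subsidy, account gains can be credited and debited across divisions without creating money from nowhere.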

When one created an account with an initial cash deposit, and authorized an individual or team to trade that account on specific questions, one would in essence say: “Try to show us that you can consistently add info value here via your trades. We’ve started you out small, but if you can show consistent gains we may give you more to work with. At annual review time we’ll credit your account’s consistent gains (or losses) to you (and your division) as value you transmitted to this organization, to be compared with your time and other costs of participation.”

The SETI Game

When listening for signals from aliens, it isn’t enough to just point an antenna at the sky. One must also choose details like directions, angles, frequencies, bandwidths, pulse widths, and pulse intervals. Apparently most SETI searches assume that for a given signal power density, aliens would pick details to make it as easy as possible for us to detect their signals. So standard SETI searches are optimized for such easily-seen signals. Two excellent papers, published back in July, instead consider what sort of signals would be sent by “beacon” building aliens, who seek to create the maximum possible power density at any given distance away from them.  (One of the authors is SF author Greg Benford.) Such signals are quite different, and most of today’s SETI searches are not very good at seeing them:

Minimizing the cost of producing a desired power density at long range … determines the maximum range of detectability of a transmitted signal. We derive general relations for cost-optimal aperture and power. … Galactic-scale beacons can be built for a few billion dollars with our present technology. Such beacons have narrow “searchlight” beams and short “dwell times” when the beacon would be seen by an alien observer in their sky. … Cost scales only linearly with range R, not as R^2. … They will likely transmit at higher microwave frequencies, 10 GHz. The natural corridor to broadcast is along the galactic radius or along the local spiral galactic arm we are in. …

Cost, spectral lines near 1 GHz, and interstellar scintillation favor radiating frequencies substantially above the classic “water hole.” Therefore, the transmission strategy for a distant, cost-conscious beacon would be a rapid scan of the galactic plane with the intent to cover the angular space. Such pulses would be infrequent events for the receiver. Such beacons built by distant, advanced, wealthy societies would have very different characteristics from what SETI researchers seek. … We will need to wait for recurring events that may arrive in intermittent bursts. …

A concept of frugality, economy. … directly contradicts the Altruistic Alien Argument that the beacon builders will be vastly wealthy and make everything easy for us. An omnidirectional beacon, radiating at the entire galactic plane, for example, would have to be enormously powerful and expensive, and so not be parsimonious. … For transmitting time t, receiver detectability scales as t^(1/2). But at constant power, transmitter cost increases as t, so short pulses are economically smart (cheaper) for the transmitting society. A 1-second pulse sent every 10 minutes to 600 targets would be 1/600 as expensive per target, yet only ~1/25 as detectable. Interstellar scintillation limits the pulse time to >10^-6 s, which is within the range of all existing high-power microwave devices. Such pings would have small information content, which would attract attention to weaker, high-content messages. …

Cost-optimized beacons … can be found by steady searches that watch the galactic plane for times on the scale of years. Of course, SETI literature abounds with consideration of the trade-offs of search strategy (range vs. EIRP vs. pulse vs. continuous (continuous wave, CW) vs. polarization vs. frequency vs. beamwidth vs. integration time vs. modulation types vs. targeted vs. all-sky vs. Milky Way). But, in practice, search dwell times are a few seconds in surveys and 100–200 seconds for targeted searches. Optical searches usually run to minutes. And integration times are long, of order 100 s, so short pulses will be integrated out. …

Behind conventional SETI methods lies the assumption that altruistic beaming societies will send persistent signals. In searches to date, confirmation attempts, when the observer looks back at a target, in practice usually occur days later. Such surveys have little chance of seeing cost-optimized beacons. … Distant, cost-optimized beacons will appear for much less time than as assumed in conventional SETI. Earlier searches have seen pulsed intermittent signals resembling what we (in this paper) think beacons may be like, and may provide useful clues. We should observe the spots in the sky seen in previous work for hints of such activity but over year-long periods. (more)
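The t versus t^(1/2) trade-off the quote relies on is easy to check with a few lines:

```python
import math

# Checking the quoted pulse economics: at fixed power, transmitter cost
# grows linearly with on-air time t, while receiver detectability
# integrates only as t^(1/2).  Splitting a 1 s pulse every 10 minutes
# across 600 targets spends 1/600 of the on-air time on each target.
targets = 600
cost_ratio = 1 / targets                      # 1/600 the cost per target
detectability_ratio = math.sqrt(1 / targets)  # ~1/24.5, roughly 1/25
```

So the transmitter cuts per-target cost 600-fold while detectability falls only about 25-fold, which is why short, repeated pings are the economical strategy.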

Of course the usual assumption that aliens will pay any cost to make a given power density signal easy for us to see, and the new assumption that aliens ignore our costs and merely seek to maximize power density, are both somewhat unsatisfactory. It would be better to model this interaction as a game, where each side has a limited budget and seeks to maximize the probability of at least one successful communication, holding constant the behavior it expects from the other side. Each side would of course also have to integrate over possible locations and budgets for the other side.

I’m very interested in working with (sim, math, or physics) competent folks to more formally model this SETI communication game.
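As a deliberately crude starting point for such modeling, here is a best-response sketch in Python. Every functional form and number (the duty-cycle budget, receiver budget, and candidate grids) is my own illustrative assumption, not taken from the beacon papers:

```python
# Toy best-response model of the transmitter/receiver game:
#  - The transmitter has a duty-cycle budget DUTY: pulses of length tau
#    repeat with period tau / DUTY, so the beam is on a fraction DUTY
#    of the time.  Pulses shorter than the grid minimum are assumed
#    undetectable (scintillation plus per-pulse energy floors).
#  - The receiver has a time budget B_R seconds per target; dwelling
#    d seconds per visit buys it B_R / d visits at random phases.
#  - A random-phase dwell of length d catches a pulse of length tau
#    with probability ~ min(1, (tau + d) / period).
# Both sides want at least one successful detection, so each
# best-responds to the other's current choice until a fixed point.

DUTY = 1 / 600        # e.g. a 1 s pulse every 10 minutes
B_R = 100.0           # receiver time budget per target, seconds
TAUS = [0.1, 1.0, 10.0, 100.0]    # candidate pulse lengths, seconds
DWELLS = [0.1, 1.0, 10.0, 100.0]  # candidate dwell times, seconds

def detect_prob(tau, d):
    """P(at least one detection) for pulse length tau and dwell d."""
    period = tau / DUTY
    p_visit = min(1.0, (tau + d) / period)
    visits = max(1, int(B_R / d))
    return 1.0 - (1.0 - p_visit) ** visits

def best_response_equilibrium():
    tau, d = TAUS[-1], DWELLS[-1]   # start long and patient
    for _ in range(20):
        new_tau = max(TAUS, key=lambda t: detect_prob(t, d))
        new_d = max(DWELLS, key=lambda x: detect_prob(new_tau, x))
        if (new_tau, new_d) == (tau, d):
            break
        tau, d = new_tau, new_d
    return tau, d, detect_prob(tau, d)
```

On this particular grid the iteration settles on the shortest feasible pulse, echoing the papers’ conclusion that short, frequent pings are economical; a serious model would also integrate over unknown alien locations and budgets, as noted above.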

Blame Victims For Lies

Whatsoever a man soweth, that shall he also reap.

Since gullible people tend to believe what they are told, other folks are more tempted to lie to them. So if one chooses to be gullible, one must accept a lot of responsibility for the lies one hears. Case in point: voters are greatly responsible for the lies their leaders tell them. A Post book review:

The leaders most likely to lie are precisely those in Western democracies, those whose traditions of democracy perversely push them to mislead the very public that elected them. In fact … leaders tend to lie to their own citizens more often than they lie to each other. In his disheartening yet fascinating book, “Why Leaders Lie,” Mearsheimer offers a treatise on the biggest of big fat lies, breaking down the deceptions the world’s presidents and generals and strongmen engage in — when, why and how they lie, and how effective those falsehoods can be.

First are “inter-state lies,” deceptions aimed at other countries to gain or retain some advantage over them. … Such state-to-state lies are relatively uncommon … and successful ones are even less so. In a world where each state must fend for itself, leaders are unlikely to take each other’s word on serious stuff. … Also, if you lie too often, no one will trust you, so what’s the point?… “Fearmongering” — when leaders cannot convince the public of the threats they foresee and so deceive the people “for their own good” — is far more prevalent and effective. …

Next is the “strategic cover-up,” in which a leader misleads in order to cover up a policy that has gone badly wrong, or to hide a smart but potentially controversial strategy. … National myths fuel solidarity by putting a country’s history in the best possible light. … Liberal lies … are used to justify odious behavior that conflicts with traditional ideals. For example, Winston Churchill and FDR served up a generous helping of deceit when depicting Stalin as a good guy (friendly ol’ “Uncle Joe”) to justify their cooperation with the Soviet leader during World War II. …

Depending on the situation, lies can be “clever, necessary, and maybe even virtuous.” … [But] widespread lying makes it harder for citizens to make good choices in the voting booth. …. And in fragile democracies, pervasive lying can so alienate the public that they are willing to embrace more authoritarian leadership.

Because voters tend to be gullible, politicians lie more to them. Much of that gullibility seems to me to be by choice; people seem to see themselves as good people if they give their leaders the benefit of the doubt.  Then they express righteous indignation if they discover that their leaders lied. But really, they are themselves mostly to blame.

What Do I Want To Know?

Reading the novel Lolita while listening to Winston’s Summer, thinking of a fond friend’s companionship, and sitting next to my son, all on a plane traveling home, I realized how vulnerable I am to needing such things. I’d like to think that while I enjoy such things, I could take them or leave them. But that’s probably not true. I like to think I’d give them all up if needed to face and speak important truths, but well, that seems unlikely too. If some opinion of mine seriously threatened to deprive me of key things, my subconscious would probably find a way to see the reasonableness of the other side.

So if my interests became strongly at stake, and those interests deviated from honesty, I’d likely not be reliable in estimating truth. Yet as my interests fade to zero, I also suspect my opinions to be dominated by random weak influences, such as signaling pressures, that also have little to do with truth. My reliability seems contingent on my having atypically good incentives to get it right.

So on what topics do I have good incentives? Of course this is also a subject on which I may have poor incentives for accuracy. If things precious to me depended on my believing I had good incentives, well then I’d believe that, even if untrue. What to do?

It seems my safest place to stand for drawing inferences is on my most robust beliefs about good incentives. And for me, that place is prediction markets. Since prediction markets seem to give robustly good incentives on a rather wide range of topics, I should believe what they say, and think I’d have more reliable beliefs if we had more such markets. I might think we don’t need them much on certain safe topics, because we already have good reliable other ways to estimate such topics. But I just can’t trust such judgements that much – they might also be biased.

Of course I can’t know that I or we will be better off by having more truthful estimates on any particular topic. I might think that on certain topics we’d be better off not knowing. But I can’t trust that judgement greatly – it would be better to rely on prediction markets on this meta question, of what we’d be better off not to know.

Someday hopefully we’ll have many prediction markets, and maybe even futarchies, to guide humanity through the many shoals ahead, including on what we’d do better not to know. Of course we might be mistaken about what we value, and so ask futarchies about the wrong consequences, thus inducing mistakes about what we’d rather not know. So it is especially important to consider the values in which we have the most confidence.

You might argue that your best estimate is that we are in fact seriously mistaken on what we value, so mistaken that we would ask futarchies the wrong questions, and then such markets would mislead us on what we’d be better off not to know. You might instead recommend that we follow your suggestions about what we should know, and what to believe in the absence of the prediction markets you advise against. And well, you might be right. But really, what grounds do you have for confidence in that set of judgements? Why should we trust your judgement on the good quality of the incentives for your intuitions?

Kling On School

Arnold Kling:

In a hierarchy, signaling respect for the hierarchy is very important. That is another similarity between academia and government, which I have discussed before. That is, part of the process of getting ahead in academia is showing respect for the academic hierarchy.

I think this offers a potential insight into the signaling role of education. It does not just signal intelligence or conscientiousness, which could be signaled more cheaply in other ways. It signals respect for hierarchy. Thus, large organizations will tend to value educational credentials, while small organizations may not need to do so.

There is no cheap alternative to educational credentials if you want to signal respect for hierarchy. … Any attempt to evade the educational credential system inherently signals a lack of respect for hierarchy!

This sounds to me pretty close to a combination of my emphasis on school as training kids to accept industry-era levels of overt ranking and dominance, and Bryan Caplan’s emphasis on doing the usual things to avoid seeming weird, since folks that are weird in some ways also tend to be weird in other ways. I’m not convinced folks care that much about your overall respect for hierarchy, but they do care that you go along with their local system, and defer to superiors.

What Aren’t They Thinking?

Eliezer Yudkowsky in September:

Have I ever remarked on how completely ridiculous it is to ask high school students to decide what they want to do with the rest of their lives and give them nearly no support in doing so? Support like, say, spending a day apiece watching twenty different jobs and then another week at their top three choices, with salary charts and projections and probabilities of graduating that subject given their test scores? The more so considering this is a central allocation question for the entire economy?

Katja Grace two days ago:

I’ve been meaning to remark how surprised I am that not even the students themselves seem interested in researching such things, or to even think of it. Similarly for their families. It’s not expensive to phone a few people who are doing your dreamed of career. … Even if you are entering college without knowing what you want to do later, it would probably make sense to at least contact some current students doing your proposed degree … I did neither of these things, I’m not sure why.

This sort of observation really gets my attention, much like seeing little correlation between medicine and health. Such things seriously call into question very standard stories on why we do what we do. We say we go to docs to get well, but why, if those who go more aren’t more well? We say we study to get better jobs, but why, if we won’t bother to study what jobs we should want to get?

My mind is still pondering how best to explain this. A few ideas come to mind, but none are that satisfactory yet. But what most strikes me is at the meta level – few of my academic colleagues seem nearly as bothered by such things as I. So many academics, including economists, study medicine and schooling, and yet they hardly even mention such obvious dramatic puzzles, much less devote themselves to resolving them.

Yes, most academics are careerists, and yes current academic fashions are not on such topics, so I don’t expect academics to devote much precious career effort to such things. But even academics have some free time, and some innate curiosity. And most have a better than average view of our best data and theories, a vantage point from which such puzzles come into sharper relief. It is almost as if they actively turn their mind’s eye away from such to-me striking views.

Note: this meta-observation is yet another example, as it also questions basic stories on why academics do what they do.

Rah Arrogance

A month ago I wrote:

Would be innovators must now combine two risky decisions:

  1. What innovative ideas or projects are ripe and promising to pursue now?
  2. Who is best placed or skilled to attempt the realization of each idea?

People who pitch project ideas to venture capitalists often focus on convincing them of #1, idea quality, not realizing that if you convince them of that but not #2, your team quality, they will just steal your idea and give it to another better team. Usually they hear from several teams pitching pretty similar concepts, so they are judging mainly on team quality. Knowing this, sophisticated innovators tend to neglect idea quality, and focus on team quality.

Alas, academics similarly pay much more attention to what teams and projects might achieve prestigious publications on currently fashionable topics, than to which topics should be fashionable, for intellectual progress and social value. Individual academic incentives are to publish well on current fashions, predict future fashions, and perhaps to nudge fashion toward topics where they can more easily publish.

There is little academic prestige in arguing that currently fashionable topics aren’t especially socially useful. Those who now publish in such areas know who they are and will oppose you, while those who would publish more in new areas, if they became fashionable, mostly don’t know who they are.

When other academics visit GMU econ, one of the most consistent and striking differences I notice is how few of them will say much about how their research fits into a bigger picture of what academia or the world needs overall. Even when directly asked. I’m proud that my colleagues usually have much more to say here. Why are we different? Perhaps we are arrogant, thinking highly of our own contributions. If so, I salute such arrogance. With it, at least some academics think of the big picture.

On Berserkers

Adrian Kent is getting a little publicity for posting his ’05 paper on the berserker hypothesis, “that evolution has very significantly suppressed cosmic conspicuity”, i.e., that many aliens are out there, but hiding from each other. He advocates taking the hypothesis seriously, but doesn’t actually argue for the coherence of any particular imagined scenario. Kent’s excuse:

It would be very difficult to produce a model that convincingly predicts the likelihoods and spatial distributions of the various strategies, since the answer surely depends on many unknowns.

He instead just claims:

The hypothesis is certainly not logically inconsistent and it seems to me not entirely implausible.

So what then is Kent’s contribution? Apparently it is a bunch of strategy fragments, i.e., strategy issues that aliens might consider in various related situations. It is not clear that these are much of a contribution, at least relative to the many contained in related science fiction novels. But, well, here they are: Continue reading "On Berserkers" »

Avoiding Death Is Far

Avoiding death is a primary goal of medicine. Avoiding side effects of treatment is a secondary goal. So it makes sense that in far mode doctors emphasize avoiding death, while in near mode avoiding side effects matters more:

The study asked more than 700 primary-care doctors to choose between two treatment options for cancer and the flu — one with a higher risk of death, one with a higher risk of serious, lasting complications. In each of the two scenarios, doctors who said they’d choose the deadlier option for themselves outnumbered those who said they’d choose it for their patients. … Two hypothetical situations were presented: one involved choosing between two types of colon cancer surgery; the less deadly option’s risks included having to wear a colostomy bag and chronic diarrhea. The other situation involved choosing no treatment for the flu, or choosing a made-up treatment less deadly than the disease but which could cause permanent paralysis. (more; HT Tyler)

As other people are far compared to yourself, advice about them is more far. Similar effects are seen elsewhere:

One study asked participants if they would approach an attractive stranger in a bar if they noticed that person was looking at them. Many said no, but they would give a friend the opposite advice. Saying “no” meant avoiding short-term pain — possible rejection by an attractive stranger — but also missing out on possible long-term gain — a relationship with that stranger.

Since fear of being laughed at for doing something weird is also near, far mode seems the best place to get people to favor cryonics. A best case: folks recommending that other people sign up at some future date. How could we best use that to induce concrete action?

Added 11p: Katja offers a plausible alternative theory.

Two-Faced Brains

Although human language allowed egalitarian rules whose uniform enforcement would have greatly reduced the advantages to big-brain conniving, humans had the biggest brains of all to unequally evade such rules. (more)

As with most lying or self-deception, homo hypocritus faces a serious implementation problem: how to keep the lies it tells separate from the “real” beliefs on which it acts. Since brains tend to be liberal with interconnections, there is a real risk of cross-talk between contradictory sets of opinions; lies may infect beliefs, and beliefs may infect lies.

I’ve previously discussed one solution: have the different sets of opinions apply to different topics. For example, hold socially-acceptable opinions on far topics, where the personal consequences of actions tend to be smaller, and keep more realistic opinions on near topics, where such consequences tend to be larger. Yes there’s a risk others may notice that you change opinions without good reason as items move from near to far or far to near, but that may be a relatively small price to pay.

A different solution is to have two distinct processing centers, each highly-connected internally, but with only modest between-center connections. One center would manage a coherent set of lies, while the other managed a coherent set of true beliefs. And in fact real brains have exactly this architecture! Left and right brains are highly connected internally, but only modestly connected to each other. Does the left brain manage a coherent set of overt opinions, while the right brain manages a coherent set of covert opinions? Consider:

  1. In all vertebrates left brains tend to control routine behavior (e.g. feeding) while right brains tend to respond to unusual events (e.g. fight/flight).
  2. Left brains tend to initiate actions, via positive feelings, while right brains tend to inhibit actions, via negative feelings.
  3. Compared to other primates, left vs. right human brains differ a lot more in function.
  4. The left human brain manages language’s literal quotably-overt syntax, vocabulary, and semantics, while the right brain handles language’s less-socially-verifiable tone, accent, metaphor, allegory, and ambiguity.
  5. Split brain patients show that left brains are adept at making up respectable explanations for arbitrary right brain behavior.
  6. Right brains tend to be used more in crafting lies, and they can read subtle emotion clues better.
  7. Left brain damage tends to distort behavior in more obvious and understandable ways.
  8. Left brains emphasize decision-making, fact retrieval, numbers, and careful sequenced acts like throwing, while right brains emphasize art, music, spatial manipulation, and recognizing shapes, patterns, and faces.

It seems that in most animals, left brains tend to manage and initiate actions within the current mode, while right brains watch in the background for patterns and reasons to veto current actions and switch modes. In humans, it seems the current-action-sequencer brain half was recruited to focus more on managing overt rule-following language, decisions, and actions, ready to explain away any apparent rule-violations. The less-introspectively-accessible pattern-recognizing background-watcher brain half, in contrast, was apparently recruited to focus on harder-to-testify-on-and-so-more-easily-covert meaning, opinion, and communication, including art and music.

I’m not saying that overt vs. covert human beliefs map exactly to human left vs. right brains, any more than socially-useful vs. action-practical beliefs map exactly onto far vs. near beliefs. I’m just suggesting that human brain design took pre-existing animal brain structures, such as near vs. far modes and left vs. right brain splits, and recruited them to the uniquely human task of hypocrisy: simultaneously espousing and evading rules. In particular, the left-right brain split became an important tool for minimizing undesirable leakage between the overt rule-following images we present to others, and the covert rule-evading actions and communication which better achieve our real ends.

More quotes:

The left hemisphere is specialized not only for the actual production of speech sounds but also for the imposition of syntactic structure on speech and for much of what is called semantics – comprehension of meaning. The right hemisphere, on the other hand, doesn’t govern spoken words but seems to be concerned with more subtle aspects of language such as nuances of metaphor, allegory and ambiguity. (Ramachandran, quoted in TMHH p56)

No other [vertebrate] species consistently prefers the same hand for certain skilled actions. … The human brain is distinguished from the brains of the great apes by an extraordinary extent of lateralization of function. (more)
