Category Archives: Social Science

Who Cheers The Referee?

Almost no one, that's who.  Oh folks may cheer a ruling favoring their side, but that is hardly the same.  On average referees mostly get complaints from all sides.   Who asks for their autograph, or wants to grow up to be one?

Similarly who cheers the officials who keep elections fair, or the teachers who grade fairly?  Inspiring stories are told of folks who win legal cases or music competitions, but what stories are told of fair neutral judges who make sure the right people win?  After all, competition stories are not nearly as inspiring with arbitrary or corrupt judges.  Oh judges are sometimes celebrated, but for supporting the "good" side, not for making a fair neutral evaluation.

Sure we give lip service to fairness, and we may sincerely believe that we care about it, but that mostly expresses itself as sincere outrage when our side is treated unfairly.  We usually can't be bothered to pay much attention to help settle disputes in which we have little stake.  So if you want to be celebrated and gain social support, take sides.  But if you want to instead do the most good for the world, consider pulling the rope sideways instead of joining the tug-o-war.  Consider being a neutral arbitrator, or better yet consider developing better systems of arbitration and evaluation.

Continue reading "Who Cheers The Referee?" »


Are AIs Homo Economicus?

Eliezer yesterday:

If I had to pinpoint a single thing that strikes me as “disagree-able” about the way Robin frames his analyses, it’s that there are a lot of opaque agents running around, little black boxes assumed to be similar to humans, but there are more of them and they’re less expensive to build/teach/run.  … The core of my argument has to do with what happens when you pry open the black boxes that are your economic agents, and start fiddling with their brain designs, and leave the tiny human dot in mind design space.

Lots of folks complain about economists; believers in peak oil, the gold standard, recycling, electric cars, rent control, minimum wages, tariffs, and bans on all sorts of things complain about contrary economic analyses.  Since, compared to most social scientists, economists use relatively stark mathy models, the usual complaint is that our models neglect relevant factors, and make false assumptions.

But of course we must neglect most everything, and make false assumptions, to have tractable models; the question in each context is what neglected factors and false assumptions would most mislead us.

It is odd to hear complaints that economic models assume too much humanity; the usual complaint is the opposite.  Unless physicists have reasons to assume otherwise, they usually assume masses are at points, structures are rigid, surfaces are frictionless, and densities are uniform.  Similarly, unless economists have reasons to be more realistic in a context, they usually assume people are identical, risk-neutral, live forever, have selfish material stable desires, know everything, make no mental mistakes, and perfectly enforce every deal.  Products usually last one period or forever, are identical or infinitely varied, etc.

Continue reading "Are AIs Homo Economicus?" »


Beware Hockey Stick Plans

Eliezer yesterday:

So really, the whole hard takeoff analysis of “flatline or FOOM” just ends up saying, “the AI will not hit the human timescale keyhole.” From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM. When you look at it that way, it’s not so radical a prediction, is it?

Dotcom business plans used to have infamous "hockey stick" market projections, a slow start that soon "fooms" into the stratosphere.  From "How to Make Your Business Plan the Perfect Pitch":

Keep your market-size projections conservative and defend whatever numbers you provide. If you’re in the very early stages, most likely you can’t calculate an accurate market size anyway. Just admit that. Tossing out ridiculous hockey-stick estimates will only undermine the credibility your plan has generated up to this point.

Imagine a business trying to justify its hockey stick forecast:

We analyzed a great many models of product demand, considering a wide range of possible structures and parameter values (assuming demand never shrinks, and never gets larger than world product).   We found that almost all these models fell into two classes, slow cases where demand grew much slower than the interest rate, and fast cases where it grew much faster than the interest rate.  In the slow class we basically lose most of our million dollar investment, but in the fast class we soon have profits of billions.  So in expected value terms, our venture is a great investment, even if there is only a 0.1% chance the true model falls in this fast class.
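
The imagined pitch turns on a simple expected-value calculation.  Below is a minimal sketch of that arithmetic; the specific payoff figures are illustrative assumptions, since the pitch only mentions a million dollar investment, "profits of billions," and a 0.1% chance of the fast class.

```python
# Minimal sketch of the expected-value arithmetic in the imagined pitch.
# The specific figures below are illustrative assumptions, not numbers from the post.

p_fast = 0.001             # assumed 0.1% chance the true model is in the "fast" class
loss_slow = -1_000_000     # lose (most of) the million dollar investment in the slow class
gain_fast = 2_000_000_000  # assumed "billions" of profit in the fast class

expected_value = p_fast * gain_fast + (1 - p_fast) * loss_slow
print(f"Expected value: ${expected_value:,.0f}")  # about $1,001,000 -- positive
```

With assumed numbers like these, the tiny chance of the fast class dominates the expected value, which is exactly the suspicious move such pitches rely on.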

Continue reading "Beware Hockey Stick Plans" »


Test Near, Apply Far

Companies often ask me if prediction markets can forecast distant future topics.  I tell them yes, but that is not the place to test any doubts about prediction markets. To vet or validate prediction markets, you want topics where there will be many similar forecasts over a short time, with other mechanisms making forecasts that can be compared. 

If you came up with an account of the cognitive processes that allowed Newton or Einstein to make their great leaps of insight, you would want to look for where that or related accounts applied to more common insight situations.  An account that only applied to a few extreme "geniuses" would be much harder to explore, since we know so little about those few extreme cases.

If you wanted to explain the vast voids we seem to see in the distant universe, and you came up with a theory of a new kind of matter that could fill that void, you would want to ask where nearby one might find or be able to create that new kind of matter.  Only after confronting this matter theory with local data would you have much confidence in applying it to distant voids.

It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions.  To see if such things are useful, we need to vet them, and that is easiest "nearby", where we know a lot.  When we want to deal with or understand things "far", where we know little, we have little choice other than to rely on mechanisms, theories, and concepts that have worked well near.  Far is just the wrong place to try new things.

There are a bazillion possible abstractions we could apply to the world.  For each abstraction, the question is not whether one can divide up the world that way, but whether it "carves nature at its joints", giving useful insight not easily gained via other abstractions.  We should be wary of inventing new abstractions just to make sense of things far; we should insist they first show their value nearby. 


The Complexity Critique

Razib at Gene Expression:

I have always been struck by starkness of human hypocrisy and its incongruity in the face of avowed beliefs. … Sin is common, and human weakness in the face of contradiction the norm.  Mens’ hearts are easily divided, and simultaneously sincere in their inclinations. … All this leads to the point that I believe far too many of those of us who wish to comprehend human nature scientifically lack a basic grasp of it intuitively. … Many atheists simply lack a deep understanding of what drives people to be religious, and that our psychological model of those who believe in gods is extremely suspect. The "irrationality" and "contradiction" of human behavior may be rendered far more systematically coherent simply by adding more parameters into the model. … When I engage with these sorts of issues with readers of Overcoming Bias or Singularitarians my suspicions become even stronger because I see in some individuals an even greater lack of fluency in normal cognition than my own. … My point is that understanding human nature is not a matter of fitting humanity to our expectations and wishes, but modeling it as it is, whether one thinks that that nature is irrational or not within one’s normative framework.

This frustrating critique is frustratingly common: "You’re wrong because your model is too simple.  But I’m not going to tell you what your model is missing, at least not in a clear enough way to help you improve your model."  Yes, of course almost all our models are too simple.  We all know that; what we don’t know is exactly what complexities we should be adding to our models.  And for the record, I was a teen cultist and my dad and brother were/are church pastors.

For social scientists I think there is actually an advantage in having a less powerful intuitive understanding of human behavior – it helps us notice things that need explaining.  To want to explain particular human behaviors you first need to see them as puzzling, and people with powerful intuitions can predict behavior so well that they often don’t notice behaviors at odds with our best theories.


New Best Game Theory

The latest American Economic Review says lab experiments have crowned a new best game theory:

Experiments on 12 [completely mixed 2 x 2-]games, 6 constant sum games, and 6 nonconstant sum games were run with 12 independent subject groups for each constant sum game and 6 independent subject groups for each nonconstant sum game. Each independent subject group consisted of four players 1 and four players 2, interacting anonymously over 200 periods with random matching. The comparison of the five theories shows that the order of performance from best to worst is as follows: impulse balance equilibrium, payoff-sampling equilibrium, action-sampling equilibrium, quantal response equilibrium, Nash equilibrium.

So what is this winner, impulse balance equilibrium?
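
For context, the familiar baseline at the bottom of that ranking, Nash equilibrium, is the easy one to compute for a completely mixed 2x2 game: each player mixes so as to make the other player indifferent between its two actions.  Here is a minimal sketch, using made-up payoff matrices rather than any of the games from the experiments.

```python
# Minimal sketch: mixed-strategy Nash equilibrium of a completely mixed 2x2 game.
# The payoff matrices are made-up examples, not games from the experiments.

A = [[5, 0],
     [1, 2]]   # row player's payoffs
B = [[1, 3],
     [2, 0]]   # column player's payoffs

# The column player mixes so the row player is indifferent between its two rows:
q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
# The row player mixes so the column player is indifferent between its two columns:
p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])

print(f"Row player plays its first row with probability {p:.2f}")        # 0.50
print(f"Column player plays its first column with probability {q:.2f}")  # 0.33
```

The experiments compare observed choice frequencies over the 200 periods to predictions like these, and Nash came out last.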

Continue reading "New Best Game Theory" »


Incomplete Analysis

In reading the comments on my variance-induced test bias post, I was reminded of a big bias loophole in social science: judging when an analysis is complete "enough."  We usually have some status quo policies, and some analyses relevant to those policies.  Each analysis tends to favor some possible policies relative to others, but alas most every analysis is incomplete, leaving out relevant considerations. 

Now we do need to assess which analyses are most relevant to any given policy question, but at least here experts can, when analyses are similar enough, usually bring to bear some relatively "objective" criteria.  When we ask if the relevant analyses are good "enough" to justify action, however, we can usually appeal only to much weaker standards of evaluation.

Continue reading "Incomplete Analysis" »


Touching Vs. Understanding

On the plane home last week I talked to a sharp Yale historian, and realized we devote far more resources to preserving historical sites, and to making history available via museums, than we do to funding professional historians to make sense of it all.  That reminded me of complaints that NASA spends far more on sending instruments into space to collect data than it does on funding scientists to analyze that data.  In both cases we collect far more data than ever gets carefully analyzed.

Now part of the explanation must be that the public can more easily see historical sites, museums, and space instruments than historians and data analysts.  But that doesn’t seem to me a sufficient explanation – I suspect we are also just more interested in touching the past, and in touching space, than in understanding either.  We talk about understanding because that is a modern applause light, but really we just like to touch exotic things.  The more we can touch, the further is our reach, and the more important and powerful we must be.  I wonder how much more this explains.

Added: We have related desires to see art and sport events in person, up close, and to meet and touch celebrities in person. 


Britain Was Too Small

Apparently the relevant unit in the last singularity was Western Europe; Britain was too small to support the industrial revolution by itself.  From the May American Economic Review:

This paper sets out to test, with a formal CGE model, the role of trade with the New World, and trade itself, in explaining the growth of productivity and income in Britain in the Industrial Revolution era.  We find, to our surprise, that the New World was only very modestly important, even by the 1850s. Had the Americas not existed, or not been discovered, the effects on productivity and income growth would have been perceptible, but the Industrial Revolution would have looked much as it does to us today. There were ready substitutes for the cotton, sugar, corn and timber of the New World in Eastern Europe, the Near East and South Asia.

However, had all trade barriers been substantial – if, say, a victorious France cut off Britain’s access to European, African and Asian raw materials and markets – then the history of the Industrial Revolution Britain would have been very different. British incomes per person, instead of rising by 45% between the 1760s and 1850s would have risen by a mere 5%. The total factor productivity growth rate, already a modest 0.4% per year, would have fallen to 0.22% per year.

The magnitude, scale and transforming power of the Industrial Revolution lay in its unification of technological advance with the military power that generated easy British access to the markets of Europe, the Americas, the Near East and the Far East.

Added:  In sum, the unit of the industrial revolution seems to have been Western Europe, so Britain, which started it, did not gain much relative to the rest of Western Europe, but Western Europe gained more substantially relative to outsiders.


Lazy Lineup Study

Thursday’s Nature suggests standard police line-ups may not be so bad:

The traditional US procedure is familiar to any fan of television cop shows. Witnesses are presented with a line-up that includes both the suspect and a number of innocent people, or ‘foils’, and are asked to identify the perpetrator. In the early 1990s … then attorney-general, Janet Reno, invited experts to form a working group to address how this method could be improved. … The working group’s most important recommendation was that line-ups should be conducted in a double-blind fashion, so that neither the witness nor the official overseeing the procedure would know who the suspect was. The group also recommended that the suspect and foils be presented sequentially rather than simultaneously, and that the witness be asked to make a decision after each one rather than waiting until the end. …

In 2003, the Illinois State Police commissioned its own study to test line-ups under real-world, field conditions …, with the cooperation of two psychologists and three of the state’s police departments … [they] spent a year conducting some 700 eyewitness identifications. Some of the procedures were non-blind and simultaneous; the rest were double-blind and sequential. Both conditions were a mix of live line-ups and photo arrays. The team found that the double-blind, sequential technique produced higher rates of foil picks – that is, clear errors – and lower rates of suspect picks than the traditional, nonblind line-up. …

Continue reading "Lazy Lineup Study" »
