Monthly Archives: May 2014

Days Of Our Lives

Oedipus famously answered this riddle:

What goes on four feet in the morning, two feet at noon, and three feet in the evening?

The answer: people crawl as babies, walk as adults, and use a cane when old. It seems natural to divide lives into three parts: young, middle, and old. But where exactly should the boundaries fall? One tempting approach comes from two facts: in the US today lifespans average about 29000 days, and people typically marry and have kids at about 10000 days. So maybe we should split life into the first, second, and third 10000 days.

If we instead split life into 5000-day units, we get:

  • 0 days; 0 years – Birth
  • 5000 days; 13.7 years – Mid-puberty
  • 10000 days; 27.4 years – First marriage & kids
  • 15000 days; 41.1 years – Start to notice body decline
  • 20000 days; 54.8 years – Near kids’ first marriage & kids; own peak of relative income and productivity; 90% still alive
  • 25000 days; 68.5 years – Near when most retire, 75% still alive
  • 30000 days; 82.1 years – Typical death age, 42% still alive
  • 35000 days; 95.8 years – Only 4% still alive

Note that 5000 days is near the doubling time of the world economy.
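The day-to-year conversions in the list above follow from simple arithmetic; a quick sketch, using an average of 365.25 days per year to account for leap years:

```python
# Reproduce the day-to-year conversions in the list above.
DAYS_PER_YEAR = 365.25  # average, accounting for leap years

def days_to_years(days):
    """Age in years for an age given in days."""
    return days / DAYS_PER_YEAR

for days in range(0, 40000, 5000):
    print(f"{days:5d} days = {days_to_years(days):4.1f} years")
```

For example, 20000 days works out to 54.8 years, matching the list.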

In my life, I married at 10250, had my first kid at 11500, started grad school again at 12400, started at GMU at 14600, and was tenured at 16540. And today I am 20,000 days old, within a few days of all my kids being employed college graduates. So a lot happened to me in that third 5000 days, and I now enter the last third of a typical lifespan, with expected declining (but hardly zero) relative productivity. Of course if cryonics works I might live lots longer.


Disagreement Is Far

Yet more evidence that it is far mental modes that cause disagreement:

Recruiting a sample of Americans via the internet, they polled participants on a set of contentious US policy issues, such as imposing sanctions on Iran, healthcare and approaches to carbon emissions. One group was asked to give their opinion and then provide reasons for why they held that view. This group got the opportunity to put their side of the issue, in the same way anyone in an argument or debate has a chance to argue their case.

Those in the second group did something subtly different. Rather than provide reasons, they were asked to explain how the policy they were advocating would work. They were asked to trace, step by step, from start to finish, the causal path from the policy to the effects it was supposed to have.

The results were clear. People who provided reasons remained as convinced of their positions as they had been before the experiment. Those who were asked to provide explanations softened their views, and reported a correspondingly larger drop in how they rated their understanding of the issues. (more; paper; HT Elliot Olds)

The question “why” evokes a far mode, while “how” evokes a near mode.


Reparations As Law

There has been a lot of talk lately about race-based reparations, initiated by this Atlantic article. (See also here, here, here.) I’m not a lawyer, but I do teach Graduate Law & Econ, and the discussion I’ve seen on reparations has ignored key legal issues. So let me raise some of those issues here.

The argument for reparations is based on the very solid well-accepted principle that when A harms B, A should compensate B, both to help B and to discourage future A’s from acting similarly. But over the centuries we’ve collected many other legal principles which limit the scope of application of this basic legal principle.

For example, we usually require that a specific person B identify a specific person A, and offer clear evidence of a particular clear harm that B suffered, relative to some other state that B had a right to reasonably expect. We also require a clear causal path between A’s acts and B’s harm, a path that A could have reasonably foreseen. We usually require public notice about legal prohibitions, we forbid double jeopardy and retroactive rules, and we impose statutes of limitations to limit the delay between act and claim.

Each of these limitations no doubt prevents some Bs from getting compensation from some As, and thus fails to discourage related As from causing related harms. But these limitations are usually seen as net gains because they prevent fake-Bs from using the legal system to extract gains from not-actually-As, which would reduce the perceived legitimacy of the whole legal system due to a perception that such fake cases were common.

Now it is actually not obvious to me that all these limitations on law are net gains. I can see the arguments for allowing hearsay evidence, emotional harms, double jeopardy, retroactive rules, no statutes of limitation, and taking compensation from non-A folks that As care about. That is, I can imagine situations where each of these limitation violations might usefully help to discourage As from hurting Bs.

Our limitations on law have so far mostly prevented people from using the legal process to obtain race-based reparations. After all, cash reparations for US slavery would react to a broad varied pattern of centuries-old harm by transferring from folks distantly and varyingly related to As to others distantly and varyingly related to Bs. Such transfers could only very crudely track the actual pattern of cause and harm. So new policies of race-based reparations would in effect embody many new exceptions to our usual limitations on legal suits. And they would create precedents for future exceptions, making it easier to obtain further reparations based on race, gender, and many other factors.

So regarding race-based reparations, what I most want to hear is a general principled discussion about the pluses and minuses of our usual limitations on law. Yes, we may have imposed overly strict limits. And yes, the legitimacy of the legal system can also be reduced when everyone knows of big harms the law didn’t address. But still, we need to identify principles by which we could make exceptions to the usual limitations.

Yes, one simple principle might be to give big compensation whenever the chattering classes nod sagely enough and say loudly enough that yes it is the right thing to do. But it would be nice to hear concrete arguments on why this approach tends to avoid the usual problems that the limitations on law are said to be there to avoid. Might it be better to create a whole new system of reparation courts that operate according to new legal principles?

Of course in signaling terms, one’s willingness to throw out all the usual legal precautions to endorse race-based reparations can signal exceptional devotion to the race cause. But is this really a path we want to go down, competing to outdo each other in our eagerness to toss out our usual legal protections in order to signal our devotion to various causes?


Big Signals

Between $6 and $9 trillion—about 8% of annual world-wide economic production—is currently being spent on projects that individually cost more than $1 billion. These mega-projects (including everything from buildings to transportation systems to digital infrastructure) represent the biggest investment boom in human history, and a lot of that money will be wasted. …

Over the course of the last fifteen years, [Flyvbjerg] has looked at hundreds of mega-projects, and he found that projects costing more than $1 billion almost always face massive cost overruns. Nine out of ten projects face a cost overrun, with costs 50% higher than expected in real terms not unusual. …

In fact, the number of mega-projects completed successfully—on time, on budget, and with the promised benefits—is actually too small for Flyvbjerg to determine why they succeeded with any statistical validity. He estimates that only one in a thousand mega-projects fits those criteria. (more; paper)

You can probably throw most big firm mergers into this big inefficient project pot.

There’s a simple signaling explanation here. We like to do big things, as they make us seem big. We don’t want to be obvious about this motive, so we pretend to have financial calculations to justify them. But we are purposely sloppy about those calculations, so that we can justify the big projects we want.

It would be possible to make prediction markets that accurately told us on average that these financial calculations are systematically wrong. That could enable us to reject big projects that can’t be justified by reasonable calculations. But the people initiating these projects don’t want that, so it would have to be outsiders who set up these whistleblowing prediction markets. But alas, as with most whistleblowing, the supply of such whistleblowers is quite limited.


First Person Em Shooter

Jesse Galef:

It’s The Matrix meets Braid: a first-person shooter video game “where the time moves only when you move.” You can stare at the bullets streaking toward you as long as you like, but moving to dodge them causes the enemies and bullets to move forward in time as well. The game is called SUPERHOT … it struck me: this might be close to the experience of an emulated brain housed in a regular-sized body.

Jesse asked for my reaction. I said:

Even better would be to let the gamer change the rate at which game-time seems to move, to have a limited gamer-time budget to spend, and to give other non-human game characters a similar ability.

Jesse riffed:

It would be more consistent to add a “mental cycle” budget that ran down at a constant rate from the gamer’s external point of view. I don’t know about you, but I would buy that game! (Even if a multi-player mode would be impossible.)

Let’s consider this in more detail. There’d be two plausible scenarios:

Brain-In-Body Shooter – The em brain stays in a body. Here changing brain speeds would be accomplished by running the same processors faster or slower. In this case, assuming reversible computing hardware, the em brain computing cost for each subjective second would be linear in brain speed; the slower the world around you moved, the more you’d pay per gamer second. This would be an energy cost, to come out of the same energy budget you use to move your body, fire weapons, etc. There would also probably be a heat budget – you’d have some constant rate at which cooling fluids flow to remove heat, and the faster your mind ran the faster heat would accumulate to raise your temperature, and there’d be some limit to the temperature your hardware would tolerate. Being hot might make your body more visible to opponents. It would be hard for a video game to model the fact that if your body is destroyed, you don’t remember what happened since your last backup.

Brain-At-Server Shooter – The em brain runs on a server and tele-operates a body. Here switching brain speeds would usually be accomplished by moving the brain to run on more or fewer processors at the server. In this case, em brain computing cost would be directly proportional to subjective seconds, though there may be a switching cost to pay each time you changed mental speeds. This cost would come out of a financial budget of money to pay the server. One might also perhaps allow server processors to temporarily speed up or slow down as with the brain-in-body shooter. There’d be a serious risk of opponents breaking one’s net connection between body and brain, but when your body is destroyed at least you’d remember everything up to that point.

To be able to switch back and forth between these modes, you’d need a very high bandwidth connection and time enough to use it lots, perhaps accomplished at a limited number of “hard line” connection points.
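The two cost structures above can be sketched as a toy model. All numeric parameters (joules per subjective second, cooling rate, server price, switch fee) are made-up illustration values, not claims about real hardware:

```python
# Toy cost model for the two em-shooter scenarios above.  All numeric
# parameters are made-up illustration values, not claims about real
# hardware or prices.

def body_energy_cost(subj_seconds, speedup, joules_per_subj_sec=1.0):
    """Brain-in-body: cost per subjective second is linear in brain
    speed, so total energy scales with subjective time * speedup."""
    return subj_seconds * speedup * joules_per_subj_sec

def body_heat_buildup(subj_seconds, speedup, heat_per_subj_sec=1.0,
                      cooling_per_ext_sec=2.0):
    """Brain-in-body: heat generated scales with subjective time, but
    cooling fluid removes heat at a fixed rate per external second, so
    faster minds accumulate unremoved heat."""
    ext_seconds = subj_seconds / speedup
    generated = subj_seconds * heat_per_subj_sec
    removed = ext_seconds * cooling_per_ext_sec
    return max(0.0, generated - removed)

def server_cost(subj_seconds, price_per_subj_sec=0.01,
                speed_switches=0, switch_fee=0.05):
    """Brain-at-server: cost is directly proportional to subjective
    seconds, plus a fee per change of mental speed."""
    return subj_seconds * price_per_subj_sec + speed_switches * switch_fee
```

With these parameters, 10 subjective seconds at speedup 2 cost 20 energy units and leave no net heat, while the same 10 seconds at speedup 4 leave 5 units of unremoved heat, illustrating why faster play burns both budgets faster.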

Not that I think shooter situations would be common in an em world. But if you want to make a realistic em shooter, this is how you’d do it.


SciCast Pays Out Big!

When I announced SciCast in January, I said we couldn’t pay participants. Alas, many associated folks are skeptical of paying because they’ve heard that “extrinsic” motives just don’t work well relative to “intrinsic” motives. No need to pay folks, they say, since what really matters is whether they feel involved. This view is quite widespread in academia and government.

But, SciCast will finally do a test:

SciCast is running a special! For four weeks, you can win prizes on some days of the week:
• On Wednesdays, win a badge for your profile.
• On Fridays, win a $25 Amazon Gift Card.
• On Tuesdays, win both a badge and a $25 Amazon Gift Card.
On each prize day 60 valid forecasts and comments made that day will be randomly selected to win (limit of $575 per person).
Be sure to use SciCast from May 26 to June 20!

Since we’ve averaged fewer than 60 of these activities per day, rewarding 60 random activities is huge! Either activity levels will stay the same and pretty much every action on those days will get a big reward, or we’ll get lots more activities on those days. Either you or science will win! 🙂
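The arithmetic behind that claim can be sketched. The activity counts below are hypothetical, and applying the $575 per-person cap to the expectation is a rough approximation:

```python
# Back-of-the-envelope for the special above: 60 valid forecasts and
# comments are drawn at random each prize day.  If the day's total
# valid activity is at or below 60, every valid activity wins.
CARD_VALUE = 25    # dollars per winning activity on gift-card days
PERSON_CAP = 575   # stated per-person limit, i.e. 23 gift cards

def expected_wins(my_activities, total_activities, prizes=60):
    """Expected number of my activities drawn (hypergeometric mean)."""
    if total_activities <= prizes:
        return float(my_activities)  # all valid activities win
    return my_activities * prizes / total_activities

def expected_payout(my_activities, total_activities):
    # Capping the expectation is a rough stand-in for the per-person limit.
    return min(expected_wins(my_activities, total_activities) * CARD_VALUE,
               PERSON_CAP)
```

At recent activity levels (under 60 per day), every valid action wins, so 5 activities on a gift-card day would expect 5 × $25 = $125.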

So if you or someone you know might be motivated by a relevant extrinsic or intrinsic reward, tell them about our SciCast special, and have them come be active on matching days of the week. We now have 473 questions on science and technology, and you can make conditional forecasts on most of them. Come!

Added 21May: SciCast is mentioned in this Nature article.


Robot Econ in AER

In the May 2014 American Economic Review, Fernald & Jones mention that having computers and robots replace human labor can dramatically increase growth rates:

Even more speculatively, artificial intelligence and machine learning could allow computers and robots to increasingly replace labor in the production function for goods. Brynjolfsson and McAfee (2012) discuss this possibility. In standard growth models, it is quite easy to show that this can lead to a rising capital share—which we intriguingly already see in many countries since around 1980 (Karabarbounis and Neiman 2013)—and to rising growth rates. In the limit, if capital can replace labor entirely, growth rates could explode, with incomes becoming infinite in finite time.

For example, drawing on Zeira (1998), assume the production function is

Y = A K^α L^(1−α).
Suppose that over time, it becomes possible to replace more and more of the labor tasks with capital. In this case, the capital share will rise, and since the growth rate of income per person is 1/(1 − capital share) × growth rate of A, the long-run growth rate will rise as well.


Of course the idea isn’t new; but apparently it is now more respectable.
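The quoted relation between capital share and growth can be checked numerically; the sample shares and the 2% technology growth rate below are arbitrary illustration values:

```python
# The quoted relation: growth of income per person equals
# 1 / (1 - capital_share) times the growth rate of A.  As the capital
# share approaches 1 (capital replaces labor entirely), the multiplier
# on technology growth blows up.
def income_growth(g_A, capital_share):
    if capital_share >= 1.0:
        raise ValueError("a capital share of 1 gives unbounded growth")
    return g_A / (1.0 - capital_share)

for share in (1/3, 0.5, 0.9, 0.99):
    print(f"capital share {share:.2f} -> "
          f"per-person growth {income_growth(0.02, share):.3f}")
```

With technology growth of 2%, a capital share of 1/3 gives 3% per-person growth, while a share of 0.9 gives 20%, showing how growth accelerates as capital replaces labor.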


Em Econ @ Yale Thursday

The Yale Technology & Ethics study group hosts about one talk a month on various futurist topics. Amazingly, I was their very first speaker when the group started in 2002. And this Thursday I’ll return to talk on the same subject:

The Age of Em: Social Implications of Brain Emulations

4:15-6:15pm, May 22, Yale ISPS, 77 Prospect St (corner of Prospect & Trumbull), Rm A002.

The three most disruptive transitions in history were the introduction of humans, farming, and industry. If another transition lies ahead, a good guess for its source is artificial intelligence in the form of whole brain emulations, or “ems,” sometime in the next century. I attempt a broad synthesis of standard academic consensus, including in business and social science, in order to outline a baseline scenario set modestly far into a post-em-transition world. I consider computer architecture, energy use, cooling infrastructure, mind speeds, body sizes, security strategies, virtual reality conventions, labor market organization, management focus, job training, career paths, wage competition, identity, retirement, life cycles, reproduction, mating, conversation habits, wealth inequality, city sizes, growth rates, coalition politics, governance, law, and war.

My ’02 talk was controversial; Thursday’s talk will likely be as well. All are welcome.

Added 28May: Audio, slides.


Jones, Beckstead, & I

Nick Beckstead talked with Garett Jones and me about the long run consequences of growth. One point is worth emphasizing: if long run growth matters more than today’s suffering, directly helping those suffering today is unlikely to be the best strategy. From Beckstead’s summary:

What are the long-run consequences of helping people in the developing world, e.g. through donating to GiveDirectly?

If the argument for doing this is that it helps with long-run growth, it’s implausible. It seems very unlikely that donations to GiveDirectly are the best way to speed up economic growth. Improvements in the institutions that hold back innovation would seem more plausible.

Programs like GiveDirectly may have some indirect effects on governance, which could in turn have effects on long-run growth. For example, people who are suffering less because they are less poor might vote better. We should not assume, in general, that any way of helping people has [predictable] long-run consequences on growth. … [Also,] sending resources from high-growth nations to low-growth nations would be bad for long-term growth. (more)


Sam Wilson Podcast

Sam Wilson and I did a podcast for his series, on near-far, em econ, and related topics.

One topic that came up briefly deserves emphasis: robustness can be very expensive.

Imagine I told you to pack a bag for a trip, but wouldn’t tell you where. The wider the set of possibilities you needed to handle, the bigger and more expensive your bag would have to be. You might not need a bag at all if you knew you’d stay inside one of the hundred largest airports. But you’d need a big bag if you might go anywhere on the surface of the Earth. You’d need a space-suit if you might go anywhere in the solar system, and if you might go anywhere within the Sun, well, we have no bag for that.

Similarly, it sounds nice to say that because the future can be hard to predict, we should seek strategies that are robust to many different futures. But the wider the space of futures one seeks to be robust against, the more expensive that gets. For example, if you insist on being ready for an invasion by all possible aliens, we just have no bag for that. The situation is almost as bad if you say we need to give explicit up-front-only instructions to a computer that will overnight become a super-God and take over the world.

Of course if those are the actual situations you face, then you must do your best, and pay any price, even if extinction is your most likely outcome. But you should think carefully about whether these are likely enough bag-packing destinations to make it worth being robust toward them. After all, it can be very expensive to pack a spacesuit for a beach vacation.

(There is a related formal result in learning theory: it is hard to learn anything without some expectations about the kind of world you are learning about.)
