Monthly Archives: September 2013

Dark Pain, Dark Joy

We tend to neglect things we cannot see. We focus on visible (baryonic) matter in the universe, but there is about twenty times as much dark matter and dark energy, about which we know almost nothing. We focus on brain activity that engages the surrounding world, but about twenty times as much brain energy is used by brains at “rest,” apparently doing nothing.

Pain is probably like this too. For some kinds of pain we are very aware, and make sure others around us are aware too. But for other kinds of pain, we don’t let others know, and are often in denial to ourselves. There may be lots of dark pain around that we rarely see.

Why do we hide and deny pain? Some pain makes us look bad. We’d look weak to complain of pains that many folks put up with without complaining. And when there are norms about what we should or shouldn’t want, we can reveal norm violations by showing that we deeply want things that we should not, or don’t want things that we should.

Aunt Hilda might really bug you when she visits, but you are supposed to love her. A lack of praise from colleagues might really hurt, but you aren’t supposed to be so self-centered. Some norm-violating pain might not so much make you look bad, as make others feel obligated to visibly disapprove, which would then cause problems.

You might think that dark pain doesn’t matter if we have repressed it from our consciousness, since only conscious pain matters. But consciousness isn’t either-or; it is a matter of degree, and repressed pain can infect our mood and feelings in many indirect ways. You might think folks in much pain would seek therapy, so there can’t be many of them. But people seek therapy mainly when they feel dysfunctional; those who still function with lots of pain may just soldier on.

If most folks have twenty times as much pain as they show, and live lives of quiet desperation, does this make their lives not worth living? Would it be better if they had never existed? Hardly. In addition to dark pain, there may also be dark joy.

Dark joys could be those that make us look bad, or those that violate norms. We can get illicit joy from being acknowledged as high status, or from submitting to those we think worthy of dominating us. We can get joy from the pain and suffering of our rivals. We can enjoy foods that aren’t good for us, or enjoy just being lazy and neglectful of things to which we are supposed to pay attention.

So does dark joy cancel dark pain, adding up to lives about as worthwhile in the dark as they seem in the light? I just don’t know. But it sure seems an important question. As is the question of which lives around us actually have more net joy over pain. To answer such questions, we’ll need to dig deeper into our self-deceptions, and shine light on things usually dark. Seems a noble quest to me. Just don’t expect people to like you for illuminating the things they keep dark.


1/6 of US Deaths From Hospital Errors

I don’t post on medicine much lately, because my attention has been elsewhere. But this looks too important not to mention:

In 1999, the Institute of Medicine published the famous “To Err Is Human” report, … reporting that up to 98,000 people a year die because of mistakes in hospitals. The number was initially disputed, but is now widely accepted by doctors and hospital officials — and quoted ubiquitously in the media. In 2010, the Office of Inspector General for Health and Human Services said that bad hospital care contributed to the deaths of 180,000 patients in Medicare alone in a given year.

Now comes a study in the current issue of the Journal of Patient Safety that says the numbers may be much higher — between 210,000 and 440,000 patients each year who go to the hospital for care suffer some type of preventable harm that contributes to their death, the study says.

That would make medical errors the third-leading cause of death in America, behind heart disease, which is the first, and cancer, which is second. …

James based his estimates on the findings of four recent studies that identified preventable harm suffered by patients – known as “adverse events” in the medical vernacular – using a screening method called the Global Trigger Tool, which guides reviewers through medical records, searching for signs of infection, injury or error. Medical records flagged during the initial screening are reviewed by a doctor, who determines the extent of the harm.

In the four studies, which examined records of more than 4,200 patients hospitalized between 2002 and 2008, researchers found serious adverse events in as many as 21 percent of cases reviewed and rates of lethal adverse events as high as 1.4 percent of cases.

By combining the findings and extrapolating across 34 million hospitalizations in 2007, James concluded that preventable errors contribute to the deaths of 210,000 hospital patients annually.

That is the baseline. The actual number more than doubles, James reasoned, because the trigger tool doesn’t catch errors in which treatment should have been provided but wasn’t, because it’s known that medical records are missing some evidence of harm, and because diagnostic errors aren’t captured.

An estimate of 440,000 deaths from care in hospitals “is roughly one-sixth of all deaths that occur in the United States each year.” (more; source)
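The arithmetic behind these claims is easy to check. In this quick sketch, all numbers except `us_deaths_per_year` come from the quote above; the roughly 2.5 million total annual US deaths is my own assumption, not a figure from the article:

```python
# Sanity-check the extrapolation quoted above.
hospitalizations = 34_000_000    # US hospitalizations in 2007 (from the quote)
baseline_deaths = 210_000        # James's baseline estimate of error-related deaths

implied_lethal_rate = baseline_deaths / hospitalizations
print(f"{implied_lethal_rate:.2%}")  # ~0.62%, below the 1.4% high-end rate in the four studies

adjusted_deaths = 440_000        # after adjusting for errors the trigger tool misses
us_deaths_per_year = 2_500_000   # assumed rough total of annual US deaths
print(f"{adjusted_deaths / us_deaths_per_year:.0%}")  # ~18%, i.e. roughly one-sixth
```

So the baseline estimate implies a lethal preventable-harm rate well inside the range the four studies found, and the adjusted figure does come out near one-sixth of all US deaths under that assumed total.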


The ‘What If Failure?’ Taboo

Last night I heard a group of smart pundits and wonks discuss Tyler Cowen’s new book Average Is Over. This book is a sequel to his last, The Great Stagnation, where he argued that wage inequality has greatly increased in rich nations over the last forty years, and especially in the last fifteen years. In this new book, Tyler says this trend will continue for the next twenty years, and offers practical advice on how to personally navigate this new world.

Now while I’ve criticized Tyler for overemphasizing automation as a cause of this increased wage inequality, I agree that most of the trends he discusses are real, and most of his practical advice is sound. But I can also see reasonable grounds to dispute this, and I expected the pundits/wonks to join me in debating that. So I was surprised to see the discussion focus overwhelmingly on whether this increased inequality was acceptable. Didn’t Tyler understand that losers might be unhappy, and push the political system toward redistribution and instability?

Tyler quite reasonably said yes this change might not be good overall, and yes there might well be more redistribution, but it wouldn’t change the overall inequality much. He pointed out that most losers might be pretty happy with new ways to enjoy more free time, that our last peak of instability was in the 60’s when inequality was at a minimum, that since we have mostly accepted increased inequality for forty years it is reasonable to expect that to continue for another twenty, and that over history inequality has had only a weak correlation with redistribution and instability.

None of which seemed to dent the pundit/wonk mood. They seemed to hold fast to a simple moral principle: when a future change is framed as a problem which we might hope our political system to solve, then the only acceptable reason to talk about the consequences of failing to solve that problem is to scare folks into trying harder to solve it. If you instead assume that politics will fail to solve the problem, and analyze the consequences of that in more detail, not to scare people but to work out how to live in that scenario, you are seen as expressing disloyalty to the system and hostility toward those who will suffer from that failure.

I think we see something similar with other trends framed as negatives, like global warming, bigger orgs, or increased regulation. Once such a trend is framed as an official bad thing which public policy might conceivably reduce, it becomes (mildly) taboo to seem to just accept the change and analyze how to deal with its consequences.

All of which seems bad news for my book, which mostly just accepts the “robots take over, humans lose wages and get sidelined” scenario and analyzes its consequences. No matter how good my reasons for thinking politics will fail to prevent this, many will react as did Nikola Danaylov, with outrage at my hostility toward the poor suffering losers.


Fewer Harder Steps

Somewhere between 1.75 billion and 3.25 billion years from now, Earth will travel out of the solar system’s habitable zone and into the “hot zone,” new research indicates. … In the habitable zone [HZ], a planet (whether in this solar system or an alien one) is just the right distance from its star to have liquid water. Closer to the sun, in the “hot zone,” the Earth’s oceans would evaporate. (more; source)

Fifteen years ago, the best estimates I found were that life appeared on Earth from 0.0 to 0.7 billion years after such life was possible at all, and that simple life would only continue to be possible on Earth for another 1.1 billion years. (Earth is now 4.5 billion years old.) These two numbers seemed close enough to be consistent with a simple model of Earth being very lucky to originate intelligent life.

This simple model says that a planet goes from no life to intelligent life by passing some “hard steps,” like inventing life, sex, multi-cellular bodies, and intelligence. The system has a constant chance per unit time of completing each new step, but these chances can be very different. That is, the steps could have very different difficulties; it might be much easier to invent sex than to invent life.

Even so, I showed fifteen years ago that if all these steps were hard, i.e., if on a random planet each step would usually take longer than the time window for life on the planet, then given that intelligence eventually appears before the window closes, the actual distribution of durations observed between the steps (and the duration between the last step and the end of the life window) would be roughly equal. (To be precise, drawn from the same distribution with a modest variance.)
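This equal-durations prediction is easy to check numerically. Here is a minimal Monte Carlo sketch (the rates, window, and trial count are illustrative assumptions, not numbers from the post): draw each step’s duration from its own exponential distribution, keep only the runs where all steps finish inside the window, and compare the average duration of each step among the kept runs.

```python
import random

def conditional_step_means(rates, window, trials=200_000):
    """Draw each hard-step duration from an exponential with its own rate,
    keep only runs where every step completes inside the window, and
    return the average duration of each step among the kept runs."""
    totals = [0.0] * len(rates)
    kept = 0
    for _ in range(trials):
        durations = [random.expovariate(r) for r in rates]
        if sum(durations) < window:
            for i, d in enumerate(durations):
                totals[i] += d
            kept += 1
    return [t / kept for t in totals], kept

random.seed(0)
# Three steps with expected times of 2, 4, and 10 windows -- all "hard",
# but differing in difficulty by a factor of five.
means, kept = conditional_step_means([0.5, 0.25, 0.1], window=1.0)
print([round(m, 2) for m in means])  # conditional means come out roughly equal
```

Despite a five-fold spread in step difficulty, the successful runs show each step taking roughly the same share of the window, which is the counterintuitive signature of the hard-steps model.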

A standard account of five major evolutionary events by William Schopf roughly fit this model: his durations were 0.0–0.7, 0.5, 0.6, 0.7, 1.1, and 1.7–2.4 billion years. And that longest period is one we know little about, so it might really cover two steps.

However, this new result quoted above, of 1.75 or 3.25 billion years for time remaining on Earth, makes this simple model harder to accept. And it is actually worse than quoted above. Those two numbers are from two different models of how the Sun’s brightness is expected to increase with time. But both numbers assume few clouds on Earth. If we instead assume that the fraction of Earth covered by clouds will later be 50% or 100%, then the time left for life is 5 or 20 billion years.

In contrast, a best estimate now is that life appeared on Earth from 0.0 to 0.6 billion years after it was first possible. So even the best-case ratio for these durations is 1.75/0.6 ≈ 3, and a more believable ratio is 3.25/0.3 ≈ 10. These seem hard to accept as a ratio of typical durations drawn from the same distribution. So how can we change the model to better fit this data?

First, this pushes us to give up the idea that life originated on Earth at all, or that the origin of life was a hard step. If life evolved elsewhere, that could give a lot more time for hard steps to be achieved. After all, the universe is now 13.8 billion years old.

Second, this also pushes us, if a bit more weakly, to give up the idea that the evolution of intelligence was a hard step. Intelligence seems to have appeared only 0.6 billion years after the appearance of multi-cellular animals, and we seem to see a somewhat steady progression in increasing brain size, in contrast to the constant random search and random success of the model.

Third, if there is a hard step associated with our immediate future, it is not of the sort in this simple model, something we keep trying until we succeed. Instead, either something will destroy us soon, or not.

Finally, there seems to be only room for one or two hard steps so far in the history of Earth. And the more that some periods require easy but long steps, the less room there is. For example, it might be that Earth had to wait for its atmosphere to slowly fill up with oxygen before key further developments could be enabled. Or it might be that multi-cellular animals just took a certain slow delay to develop large smart animals.

The fewer hard steps there are, the harder each step must be on average. So this news suggests we should increase our estimate of just how hard each hard step is.

The best candidate for a hard step in the history of life on Earth seems to be the origin of Eukaryotes. Since the oldest eukaryotic fossil is approximately 1.5 billion years old, they appeared reasonably close to the middle of the window for life on Earth.


Wanted: Know-It-Some Critics

Last November I said I wanted to write a book on a complex subject, but found it hard to simultaneously work out what I think on the subject, and to also write so as to engage a wide audience well. I wondered why book authors don’t do this in two steps:

First I’d write a pre-book, which states my main claims and arguments directly and clearly, using expert language, for an expert audience. I’d then circulate that pre-book privately among experts and useful thinkers of various sorts, seeking criticism of my arguments. Then using their feedback, I’d revise my claims and arguments, and write an engaging accessible book that can be circulated widely. (more)

Well even though few ever do this, I decided to try it anyway. And I now have a 62,000 word book draft, on the subject of em econ (see posts, TEDx video), i.e., on the social implications of a world dominated by brain-emulation-based AI. This draft isn’t especially fun or readable, or engaging to a wide audience. But it isn’t terrible, and seems a sufficient basis for eliciting thoughtful criticism.

I’ve asked around within my private social network, gotten some good feedback, and changed my draft lots in response. But I’d feel irresponsible if I didn’t seek more critics. So let me put it out there: who wants to read and comment on my book draft?

Now I don’t want to post the draft publicly; I might want to sell it as a separate book later. So I don’t want to just give it to anyone who asks; I need to set a non-trivial standard. And the standard I’ve picked is: you should know something about something.

My book is on how the world changes if a certain tech gets cheap: computer-based emulations of human brains. And my analysis suggests that this changes many aspects of society. To give you some idea of relevant topics, I’ve included a current book outline below the fold.

So to be a useful critic, you should know something about brains, computers, business, or some other important part of our social world. You don’t need a Ph.D. of course; most knowledge in our world isn’t held by Ph.D.s. Years of experience can work wonders. But on the subjects you understand, you should know lots more than does a typical high school graduate on a typical subject, i.e., almost nothing. (And of course you also need a minimal ability to generalize what you know to new situations, and to express what you know somehow to me.)

If you are interested and think you qualify, email me at: rhanson@gmu.edu. Here is that current outline:

Continue reading "Wanted: Know-It-Some Critics" »


Why Think Of The Children?

When a cause seems good, a variation focused on children seems better. For example, if volunteering at a hospital is good, volunteering at a children’s hospital is better. If helping Africa is good, helping African kids is better. If teaching people to paint is good, teaching children to paint is better. If promoting healthy diets is good, promoting healthy diets in kids is better. If protecting people from war is good, protecting kids from war is better. If comforting lonely people is good, comforting lonely kids is better.

Why do most idealistic causes seem better when directed at kids? One explanation is that kids count a lot more in our moral calculus, just as humans count more than horses. But I think most would deny this. Another explanation is that kids just consistently need more of everything. But this just seems wrong. Kids are at the healthiest ages, for example, and so need health help the least. Even so, children’s health is considered a very noble cause.

For our forager ancestors, child rearing was mostly a communal activity, at least after the first few years. So while helping to raise kids was good for the band overall, each individual might want to shirk on their help, and let others do the work. So forager bands would try to use moral praise and criticism to get each individual to do their kid-raising share. This predicts that doing stuff for kids would seem especially moral for foragers. And maybe we’ve retained such habits.

My favored explanation, however, is that people today typically do good in order to seem kind, in order to attract mates. If potential mates are considering raising kids with you, then they care more about your kindness toward kids than about your kindness toward others. So to show off the sort of kindness that your audience cares about, you put a higher priority on kindness to kids.

Of course if you happened to be one of those exceptions really trying just to make the world a better place, why you’d want to correct for this overemphasis on kids by avoiding them. You’d want to help anyone but kids. And now that you all know this, I’ll wait to hear that massive rumbling from the vast stampede of folks switching their charity away from kids. … All clear, go ahead. … Don’t be shy …


On Accidental Altruists

Gordon Tullock:

All of us like to think that we are better, more altruistic, more charitable, than we actually are. But, although we have this desire, we don’t want to pay for it. We are willing to make a sacrifice of perhaps 5 percent of our real income in charitable aid to others. We would like to think of ourselves, however, as making much larger transfers without actually making them. One of the functions of the politician in our society is to meet this demand. (more)

Me:

As an adolescent I seem to have deeply internalized the idea of great scientists/visionaries as heroes. I long judged my efforts by their standards – what would increase the chance that I would become such a person, or be approved by one. Marching to the beat of this unusual status audience drummer often led me to “non-conform” by doing things that less impressed folks around me. But I very definitely wanted to impress someone. (more)

Let me admit it here and now: I am an accidental altruist, driven more by ambition than empathy. I have sought glory by understanding deep mysteries of quantum physics, disagreement, and human hypocrisy, by foreseeing the next great era after ours, and by inventing and deploying new forms of info aggregation in governance.

I happen to believe that such actions will in fact give an unusually large expected benefit to the world. This is because I believe that in our era a great many things go quite wrong because we do not understand ourselves and our future, and because we aggregate info badly. But, I must admit that I might still pursue similar glories even if they gave little benefit. And perhaps even if they created modest harm.

Now it is not a complete accident that our society offers glory to those who improve our governance or deepen our understanding of the world and ourselves. Or that more glory often goes to those whose contributions seem more likely to benefit us all. This is a somewhat functional way for a society to coordinate to improve itself. But the credit here should probably go to the slow process of cultural selection, whereby cultures with more functional institutions win out in competition with other cultures.

I suspect that many will think less of me if they see my altruism as more accidental than intentional. And this makes sense to the extent that people use altruism as a signal of niceness. That is, if you use how nice someone is toward the world as a whole as a signal of how nice they would treat you as a friend, spouse, colleague, etc., then it makes sense to put less weight on accidental niceness. We accidental altruists probably tend to be less nice.

But if what you wanted was just to encourage more altruism toward the world, I’d think you’d mostly just want to celebrate people more who actually do more good for the world, without caring that much if they are driven more by glory or empathy. Sure, when faced with an option where they might gain glory by hurting the world, such a person might well choose it. But in areas where pursuing glory tends mostly to help the world, my guess is that the world is helped more if we just praise all good done for the world, instead of focusing our praise mainly on those who do good for the purest of reasons. And I think we all pretty much know this.

So why don’t we just celebrate all good done, regardless of motive? I’d guess it is because most of us care less about how to help the world overall, and more about how to use the altruism of others as a signal of their personal inclinations and abilities.


Boss Hypocrisy

In our culture, we are supposed to resent and dislike bosses. Bosses get paid too much, are mad with power, seek profits over people, etc. In fiction, we are mainly willing to see bosses as good when they run a noble work group, like a police, military, medicine, music, or sport group. In such rare cases, it is ok to submit to boss domination to achieve the noble cause. Or a boss can be good if he helps subordinates fight a higher bad boss. Otherwise, a good person resents and resists boss domination. For example:

The [TV trope of the] Benevolent Boss is that rarity in the Work [Sit]Com: a superior who is actually superior, a nice guy who listens to employee problems and really cares about the issues of those beneath him. … A character that is The Captain is likely, but not required, to be a Benevolent Boss.
Contrast with Bad Boss and Stupid Boss. Compare Reasonable Authority Figure. In more fantastic works, this character usually comes in the form of Big Good. On the other hand, an Affably Evil character can be a benevolent boss with his mooks, as well.
In The Army, he is often The Captain, Majorly Awesome, Colonel Badass, The Brigadier, or even the Four Star Badass and may be A Father to His Men.
For some lucky workers, this is Truth in Television. For a lot of other people, this is some sort of malicious fantasy. (more)

But here is a 2010 (& 2011) survey of 1000 workers (30% bosses, half blue collar):

Agree or completely agree with:

  • You respect your boss 91%
  • You think your boss trusts you 91%
  • You think your boss respects you 91%
  • You trust your boss 86%
  • If your job was on the line, your boss would go to bat for you 78%
  • You consider your boss a friend 61%
  • You would not change a thing about your boss 59%
  • Your boss has more education than you 53%
  • You think you are smarter than your boss 37%
  • You aspire to have the boss’s job 30%
  • You work harder than your boss 28%
  • You feel pressure to conform to your boss’s hobbies/interests in order to get ahead 20% (more; more; more)

In reality most people respect and trust their bosses, see them as a friend, and so on. Quite a different picture than the one from fiction.

Foragers had strong norms against domination, and bosses regularly violate such norms. We retain a weak allegiance to forager norms in fiction and when we talk politics. But we also have deeper more ancient mammalian instincts to submit to powers above us. And also, our competitive economy probably tends to make real bosses be functional and useful, and we spend enough time on our jobs to see that.

Many other of our cultural presumptions are probably similar. We give lip service to them in the far modes of fiction and politics, but we quickly reject them in the near mode of concrete decisions that matter to us.


Let Re-Discovery Evade Patents

In this post I’m going to explain why patents can be a good idea, why they often go wrong today, and a way to fix that problem. And I’ll do that all in the context of a situation you should understand well: finding a shorter route to drive from home to work. (This post is ~1600 words, and so longer than usual.)

Imagine that you usually take a particular route from home to work, and some firm offers to find you a better route. You tell them your current route, and they tell you that they have found a different route that will save you thirty seconds a day, which over a year adds up to eight hours. You can inspect their route to verify their claim, but only if you agree that you can’t use that route (or anything close) unless you pay them a mutually agreeable fee. (Assume they can enforce that, by seeing your car’s driving path records. And assume you can verify their claim somehow.) You agree, inspect and verify, and then agree to pay them one hundred dollars, which is well below your value of saving eight hours of driving, and above their cost of finding the route.

This example contains an info property right: once you agree not to use their route unless you pay for it, then they own a right to your use of that route. Since the route is info, what they own is info. The prospect of owning that info right gives the firm an incentive to work to find that route. And because they must find a mutually agreeable price, their incentive to work is neither too much nor too little. An agreeable price must lie between their cost of finding the route and your added value from using it.

Now imagine that you are one of hundreds of drivers who go from the same initial home area to the same final work destination. Now this route-finding firm wants to sell a better route to all of you. But there is a problem. Once this firm sells the route to a few of you, the others may learn of that route from these few buyers, either by being told or by following their cars. In this case the total price the firm could get from all the drivers might be much less than the sum of driver values for using the better route. Thus the firm’s incentive to work to find a better route could be too low. That is, this group of drivers could be better off if they joined together to pay the firm more to find a better route. But joining is too hard, so it doesn’t happen. Continue reading "Let Re-Discovery Evade Patents" »
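The free-rider problem in that last paragraph can be made concrete with some made-up numbers. Every value below is hypothetical, chosen only to illustrate the incentive gap, not taken from the post:

```python
# Hypothetical numbers illustrating the free-rider problem sketched above.
drivers = 300
value_per_driver = 100      # each driver values the better route at $100
find_cost = 5_000           # assumed: firm's cost to find the better route

total_value = drivers * value_per_driver           # what coordination could support
print(total_value)                                 # 30000

# If the route leaks once a few drivers buy it, the firm can only charge
# the handful of early buyers before everyone else copies it for free.
paying_drivers = 10                                # assumed leak point
firm_revenue = paying_drivers * value_per_driver
print(firm_revenue, firm_revenue > find_cost)      # 1000 False: the firm won't bother
```

Even though the route is worth far more to the drivers collectively than it costs to find, leakage caps the firm’s revenue below its cost, so the socially valuable search never happens.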


Value Explosions Are Rare

Bryan Caplan:

I’m surprised that Robin is so willing to grant the plausibility of superintelligence in the first place. Yes, we can imagine someone so smart that he can make himself smarter, which in turn allows him to make himself smarter still, until he becomes so smart we lesser intelligences can’t even understand him anymore. But there are two obvious reasons to yawn. 1. … Even high-IQ people who specifically devote their lives to the study of intelligence don’t seem to get smarter over time. If they can’t do it, who can? 2. In the real world, self-reinforcing processes eventually asymptote. (more)

Bryan expresses a very standard economic intuition, one with which I largely agree. But since many of my readers aren’t economists, perhaps I should elaborate.

Along most dimensions, having more of a good thing leads to less and less more of other good things. In economics we call this “diminishing returns,” and it is a very basic and important principle. Of course it isn’t always true. Sometimes having a bit more of one good thing makes it even easier to get a bit more of other good things. But not only is this rare, it almost always happens within a limited range.

For example, you might hope that if you add one more feature to your product, more customers will buy it, which will give you more money and info to add another feature, and so on in a vast profit explosion. This could make the indirect value of that first new feature much bigger than it might seem. Or you might hope that if you achieve your next personal goal, e.g., winning a race, then you will have more confidence and attract more allies, which will make it easier for you to win more and better contests, leading to a huge explosion of popularity and achievement. This might make it very important to win this next race.

Yes, such things happen, but rarely, and they soon “run out of steam.” So the value of a small gain is only rarely much more than it seems. If someone asks you to pay extra for a product because it will start one of these explosions, you should question them skeptically. Don’t let them do a Pascal’s wager on you, saying even if the chance is tiny, a big enough explosion would justify it. Ask instead for concrete indicators that this particular case is an exception to the usual rule. Don’t invest in a startup just because, hey, their hockey-stick revenue projections could happen.

So what are some notable exceptions to this usual rule? One big class of exceptions is when you get value out of destroying the value of others. Explosions that destroy value are much more common than those that create value. If you break just one little part in a car, then the whole car might crash. Start one little part of a house burning and the whole house may burn down. Say just one bad thing about a person to the right audience and their whole career may be ruined. And so on. Which is why there are a lot of explosions, both literal and metaphorical, in war, both literal and metaphorical.

Another key exception is at the largest scale of aggregation — the net effect of improving, on average, all the little things in the world is usually to make it easier for the world as a whole to improve all those little things. For humans this effect seems to have been remarkably robust. I wish I had a better model to understand these exceptions to the usual rule of rare value explosions.
