Monthly Archives: September 2008

Doable Green

Since I recently said carbon emissions seemed a lost cause, let me emphasize that cap-and-trade fisheries are big, green, and feasible:

Two years ago, a team of researchers took a broad look at the world’s commercial fisheries and predicted that excessive harvesting would cause them all to collapse by 2048. Now, three other scientists have taken an equally broad look at how fisheries are managed and come up with a more hopeful view. …

[Researchers show] that stocks are much less likely to collapse if fishers own rights to fish them, called catch shares. If implemented worldwide, they say, this kind of market-based management could reverse a destructive global trend. Says David Festa of the Environmental Defense Fund in San Francisco, California, "This gives definitive, concrete proof that this tool does end overfishing." …

Worm’s team had analyzed all the large marine ecosystems in the world and found that those with declining biodiversity tended to have more collapsed fisheries, defined as yields less than 10% of historical maximums. … Each fisher was allocated a number of individual transferable quotas (ITQs), which they can use to catch fish or sell to others. The quotas are a percentage of the total allowable catch, which is set by regulators each year …

Australia, New Zealand, and Iceland, among others, claimed success with this approach, but no one had done a comprehensive analysis. Costello, Gaines, and Lynham examined 11,135 fisheries worldwide. Only 14% of the 121 fisheries using ITQs or similar methods had collapsed, compared with 28% of fisheries without ITQs. Had all the world’s fisheries implemented catch-share management in 1970, the researchers found, only 9% would have collapsed by 2003. The findings are conservative, Costello explains, because most ITQ systems have been put into place fairly recently; each year of rights-based management makes a collapse 0.5% less likely.
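
The arithmetic behind these figures is worth making explicit. Reading "0.5% less likely" as half a percentage point per year of rights-based management, a back-of-the-envelope sketch (the linear model and function name here are my own, not the researchers'):

```python
# Hypothetical linear model of the quoted numbers: a 28% baseline collapse
# rate, reduced by ~0.5 percentage points per year of rights-based management.
def collapse_rate(years_of_itq, baseline=0.28, per_year_reduction=0.005):
    """Estimated fraction of fisheries collapsed, floored at zero."""
    return max(0.0, baseline - years_of_itq * per_year_reduction)

print(collapse_rate(0))             # 0.28 -- no catch shares
print(round(collapse_rate(28), 2))  # 0.14 -- roughly the observed ITQ rate
```

On this reading, roughly three decades of catch shares would halve the collapse rate, which is at least consistent with the 14% versus 28% comparison in the study.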

Added 9/26: Global carbon emissions increased 2.9% in ’07!


Give it to Me Straight! I Swear I Won’t be Mad!

I have an American friend (same guy as in this earlier post) who lived for a number of years in Mexico.  He married a Mexican woman, and while he always spoke to his kids in English, their real first language was Spanish.  He recently moved back to the U.S., and he enrolled his oldest daughter in kindergarten.  The school gave her some kind of language evaluation, concluded that she was slightly behind in English, and said they would like to give her some kind of limited special instruction if her parents wanted it.  My friend and his wife were inclined to go along with what the teachers thought, but they wanted the answers to a few common-sense questions: how far behind was the kid really, was what they would do for her during the special instruction time really worth giving up whatever she would miss in the regular class, and so on.  The problem was, they were having a hard time getting any straight answers out of the teachers, and they were pretty sure they knew why: these very nice, well-meaning teachers were so worried about offending them that they couched every answer in a million caveats and weasel words.  My friend said he was dying to say something like: "I hereby unconditionally vow not to sue you, hate you, or speak or think ill of you in any way.  Now will you please just tell me what’s going on with my kid?!?"

Don’t get me wrong: that hyper-sensitivity comes mostly from a good place, and I certainly don’t want to go back 50 years, to when a kid like that would just be thrown in the deep end of the pool.  But come on!


My Naturalistic Awakening

Followup to: Fighting a Rearguard Action Against the Truth

In yesterday’s episode, Eliezer2001 is fighting a rearguard action against the truth: only gradually shifting his beliefs, assigning an increasing probability to a different scenario, but never saying outright, "I was wrong before."  He repairs his strategies as they are challenged, finding new justifications for just the same plan he pursued before.

(Of which it is therefore said:  "Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated.  Surrender to the truth as quickly as you can.  Do this the instant you realize what you are resisting; the instant you can see from which quarter the winds of evidence are blowing against you.")

Memory fades, and I can hardly bear to look back upon those times – no, seriously, I can’t stand reading my old writing.  I’ve already been corrected once in my recollections, by those who were present.  And so, though I remember the important events, I’m not really sure what order they happened in, let alone what year.

But if I had to pick a moment when my folly broke, I would pick the moment when I first comprehended, in full generality, the notion of an optimization process.  That was the point at which I first looked back and said, "I’ve been a fool."

Continue reading "My Naturalistic Awakening" »


Pundits As Moles

In the spy business a "mole" pretends to work for A, but really works for B.  The mole may usually do very little for B, and B may avoid acting visibly on any info the mole passes on.  The idea is to move the mole up the A hierarchy, and to wait for rare high leverage situations.

Unfortunately something similar seems to hold for pundits, columnists, etc.  Before becoming a pundit someone may spend a long career as a trustworthy academic or journalist, giving careful measured evaluations of the small issues before them.  As a pundit they may even usually give thoughtful reasoned commentary on issues of moderate importance. 

But every four years, when a major election is at stake, or when a big crisis appears, styles change.  In their world folks mutter, "pull out all the stops, this is really important."  They may retain the outward appearance of keeping to their previous standards, but in fact they start to say whatever it takes to push "their side." 

Just as moles mean we can rely on our spies least when we need them most, pushy election pundits also imply we can rely on our pundits least when we need them most.  (This key mole insight came from a talk by Robert Axelrod.)


Fighting a Rearguard Action Against the Truth

Followup to: That Tiny Note of Discord, The Importance of Saying "Oops"

When we last left Eliezer2000, he was just beginning to investigate the question of how to inscribe a morality into an AI.  His reasons for doing this don’t matter at all, except insofar as they happen to historically demonstrate the importance of perfectionism.  If you practice something, you may get better at it; if you investigate something, you may find out about it; the only thing that matters is that Eliezer2000 is, in fact, focusing his full-time energies on thinking technically about AI morality; rather than, as previously, finding a justification for not spending his time this way.  In the end, this is all that turns out to matter.

But as our story begins – as the sky lightens to gray and the tip of the sun peeks over the horizon – Eliezer2001 hasn’t yet admitted that Eliezer1997 was mistaken in any important sense.  He’s just making Eliezer1997‘s strategy even better by including a contingency plan for "the unlikely event that life turns out to be meaningless"…

…which means that Eliezer2001 now has a line of retreat away from his mistake.

I don’t just mean that Eliezer2001 can say "Friendly AI is a contingency plan", rather than screaming "OOPS!"

I mean that Eliezer2001 now actually has a contingency plan.  If Eliezer2001 starts to doubt his 1997 metaethics, the Singularity has a fallback strategy, namely Friendly AI.  Eliezer2001 can question his metaethics without it signaling the end of the world.

And his gradient has been smoothed; he can admit a 10% chance of having previously been wrong, then a 20% chance.  He doesn’t have to cough out his whole mistake in one huge lump.

If you think this sounds like Eliezer2001 is too slow, I quite agree.

Continue reading "Fighting a Rearguard Action Against the Truth" »


White Swans Painted Black

Nassim Taleb has an article related to the current financial crisis. While much of what he says is true, he misleads when he implies that the recent collapse of financial companies resulted from a Black Swan. He claims:

use of probabilistic methods for the estimation of risks did just blow up the banking system

Continue reading "White Swans Painted Black" »


Correcting Biases Once You’ve Identified Them

Most of the discussion on this blog seems to focus on figuring out how to identify biases.  We implicitly assume that this is the hard part; that biases can be really sneaky and hard to ferret out, but that once you’ve identified a bias, correcting it is pretty straightforward and mechanical.  If you’ve figured out that you have a bias that causes you to systematically overestimate the probability of a particular kind of event happening by .2, you simply subtract .2 from future estimates (or whatever).  But it seems to me that actually correcting a bias can be pretty hard even once it’s been identified.  For example, I have a tendency to swing a bit too late at a (slow-pitch) softball.  I’m sure this bias could be at least partially corrected with effort, but it is definitely not simply a matter of saying to myself: "swing .5 seconds sooner than you feel like you should swing."  That just can’t be done in real time without screwing up the other mechanics of the swing.
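
The "straightforward and mechanical" picture the post is questioning looks something like this sketch (the 0.2 figure comes from the post; the clipping behavior and function name are my own illustration):

```python
# The naive "mechanical" correction: subtract the known overestimate,
# clipping so the result stays a valid probability in [0, 1].
def debias(raw_estimate, overestimate_by=0.2):
    return min(1.0, max(0.0, raw_estimate - overestimate_by))

print(round(debias(0.7), 2))  # 0.5
print(debias(0.1))            # 0.0 -- naive subtraction would go negative
```

The softball example is precisely a case where no such one-line correction exists: the "subtract .5 seconds" step can't be executed in real time without disturbing everything else the estimate depends on.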

I think this is also a problem for more consequential matters.  In real decision-making situations, where there are elements of the problem that need attention besides the (already identified) bias, it is not going to be a trivial matter to fix the bias without screwing up some other part of the problem even worse.  I’m not sure this is the right way to put it, but it seems like OB engineering is a separate and important discipline, distinct from OB science.


That Tiny Note of Discord

Followup to: The Sheer Folly of Callow Youth

When we last left Eliezer1997, he believed that any superintelligence would automatically do what was "right", and indeed would understand that better than we could; even though, he modestly confessed, he did not understand the ultimate nature of morality.  Or rather, after some debate had passed, Eliezer1997 had evolved an elaborate argument, which he fondly claimed to be "formal", that we could always condition upon the belief that life has meaning; and so cases where superintelligences did not feel compelled to do anything in particular, would fall out of consideration.  (The flaw being the unconsidered and unjustified equation of "universally compelling argument" with "right".)

So far, the young Eliezer is well on the way toward joining the "smart people who are stupid because they’re skilled at defending beliefs they arrived at for unskilled reasons".  All his dedication to "rationality" has not saved him from this mistake, and you might be tempted to conclude that it is useless to strive for rationality.

But while many people dig holes for themselves, not everyone succeeds in clawing their way back out.

And from this I learn my lesson:  That it all began –

– with a small, small question; a single discordant note; one tiny lonely thought…

Continue reading "That Tiny Note of Discord" »


Noble Lies?

A New Scientist book review:

In the face of life’s inconvenient facts – alcoholism, drug addiction, depression and craziness, to name a few – pseudoscientific medical concepts allow us to cast difficult moral problems as simple factual questions, readily soluble in the lab and in the hospital. Gary Greenberg’s The Noble Lie is an impressive and fascinating round-up of such pseudoscientific notions and the ways in which they have come to count as genuine illnesses.

For instance, Greenberg explains how alcoholism’s transition from vice to disease was a welcome one, especially following Prohibition. It was long viewed as an allergy, though the specific allergen persistently failed to appear. Even today, neither its disease-nature nor any possible cures have manifested themselves. Regardless, people are happy to accept the idea that addiction is a medical illness, perhaps, Greenberg suggests, because of our ambivalence towards the role of pleasure and our uncertainties about free will and self-determination. "With the disease model we have an answer," he writes, "one that has the imprimatur of science; addiction isn’t wrong, it’s sick."

Continue reading "Noble Lies?" »


Horrible LHC Inconsistency

Followup to: When (Not) To Use Probabilities, How Many LHC Failures Is Too Many?

While trying to answer my own question on "How Many LHC Failures Is Too Many?" I realized that I’m horrendously inconsistent with respect to my stated beliefs about disaster risks from the Large Hadron Collider.

First, I thought that stating a "one-in-a-million" probability for the Large Hadron Collider destroying the world was too high, in the sense that I would much rather run the Large Hadron Collider than press a button with a known 1/1,000,000 probability of destroying the world.

But if you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.

Unknown pointed out that this turns me into a money pump.  Given a portfolio of a million existential risks to which I had assigned a "less than one in a million probability", I would rather press the button on the fixed-probability device than run a random risk from this portfolio; but would rather take any particular risk in this portfolio than press the button.
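
The structure of that money pump can be made concrete with illustrative numbers (the 0.5-in-a-million figure and the variable names below are mine, chosen only to satisfy "less than one in a million"):

```python
# Illustrative money pump: each portfolio risk is judged below the button's
# known 1-in-a-million risk, yet a random draw from the portfolio was
# (inconsistently) treated as worse than the button.
button_risk = 1e-6
portfolio = [0.5e-6] * 1_000_000  # a million risks, each "under one in a million"

# Every particular risk beats the button...
assert all(r < button_risk for r in portfolio)

# ...and a uniformly random draw is exactly as risky as the average risk,
# so it beats the button too.  Preferring the button over the random draw
# while preferring each particular risk over the button is the inconsistency.
mean_risk = sum(portfolio) / len(portfolio)
print(mean_risk < button_risk)  # True
```

Whichever side of the preference you trade against, the circular ranking lets a bookie extract a payment on every loop, which is what makes it a pump rather than a mere quirk.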

Then, I considered the question of how many mysterious failures at the LHC it would take to make me question whether it might destroy the world/universe somehow, and what this revealed about my prior probability.

If each failure had a known 50% probability of occurring from natural causes, like a quantum coin or some such… then I suspect that if I actually saw that coin come up heads 20 times in a row, I would feel a strong impulse to bet on it coming up heads the next time around.  (And that’s taking into account my uncertainty about whether the anthropic principle really works that way.)

Even having noticed this triple inconsistency, I’m not sure in which direction to resolve it!

(But I still maintain my resolve that the LHC is not worth expending political capital, financial capital, or our time to shut down; compared with using the same capital to worry about superhuman intelligence or nanotechnology.)
