Category Archives: Science

CO2 Warming Looks Real

Many have bent my ear over the last few months about global warming skepticism.   So I’ve just done some moderate digging, and conclude:

  1. In the last half billion years, CO2 has at times been 15 times denser, but not more than 10C warmer.  So that is about as bad as warming could get.
  2. In the last million years, CO2 usually rises after warming; clearly warming often causes CO2 increases.
  3. CO2 is clearly way up (~30%) over 150 years, and rising fast, mainly due to human emissions.  CO2 is denser than it's been for a half million years.
  4. The direct warming effect of CO2 is mild and saturating; the effects of concern are indirect, e.g., water vapor and clouds, but the magnitude and sign of these indirect effects are far from clear.
  5. Climate model builders make indirect effect assumptions, but most observers are skeptical they’ve got them right.
  6. This uncertainty alone justifies substantial CO2 mitigation (emission cuts or geoengineering), if we are risk-averse enough and if mitigation risks are weaker.
  7. Standard warming records show a real and accelerating rise, roughly matching the CO2 rise.
  8. Such warming episodes seem common in recent history.
  9. The match between recent warming and CO2 rise details is surprisingly close, substantially raising confidence that CO2 is the main cause of recent warming.  (See this great analysis by Pablo Verdes.)  This adds support for mitigation.
  10. Among the few bets on global warming, the consensus is for more warming.
  11. Geoengineering looks far more likely to be feasible and acceptable mitigation than emissions cuts.
  12. Some doubt standard warming records, saying they are biased by urban measuring sites and arbitrary satellite record corrections.   Temperature proxies like tree rings diverge from standard records in the last fifty years. I don’t have time to dig into these disputes, so for now I defer to the usual authorities.
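The "mild and saturating" direct effect in point 4 reflects the standard logarithmic approximation for CO2 radiative forcing, ΔF ≈ 5.35 ln(C/C0) W/m²: each doubling of concentration adds about the same forcing increment, so each extra ppm matters less than the last. A minimal sketch:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified direct radiative forcing of CO2 in W/m^2
    (the standard logarithmic approximation)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each doubling adds the same ~3.7 W/m^2 -- the effect saturates:
print(co2_forcing(560))                      # first doubling from 280 ppm
print(co2_forcing(1120) - co2_forcing(560))  # second doubling adds the same amount
```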

It was mostly skeptics bending my ear, and skeptical arguments are easier to find on the web.  But for now, the other side has convinced me.

Added: The Verdes paper is also here.  Here is his key figure:

GD Star Rating
loading...
Tagged as: , ,

Reinventing Idea Futures

From the April Physics World:

A key problem, suggests mathematical physicist Eric Weinstein of the Natron Group, a hedge fund in New York, is that it is too easy for scientists in the “establishment” of any field to cut down new ideas, and to do so without really putting anything at risk, thereby leading to a culture that is systematically biased toward caution. …

Weinstein suggests another idea — that we should borrow some ideas from financial engineering and make scientists back up their criticisms by taking real financial risks. You think that some new theory is utterly worthless and deserving of ridicule? In the world Weinstein envisions, you could not trash the research in an anonymous review, but would buy some sort of option giving you a financial stake in its scientific future, an instrument that would pay off if, as you expect, the work slides noiselessly into obscurity. The money would come from the theory’s proponents, who would similarly benefit if it pans out into the next big thing.

Weinstein’s point is that markets, in theory at least, work efficiently and — putting the current financial meltdown to one side — lead to the accurate valuation of products. They exploit the “wisdom of crowds”, as a popular book of the same title recently put it. Take the famous electronic prediction markets at the University of Iowa, which pool the views of thousands of diverse individuals and consistently seem to give better predictions than any expert. …

“It would be more efficient,” he says, “if the maverick could demand of the critic, if my theory is so obviously wrong, why don’t you quantify that by writing me an options contract based on future citations in the top 20 leading journals secured by your home, furniture, holiday home and pension?”
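To make the quoted proposal concrete, here is a toy cash-settled version of such a citation contract. All the specifics — the 100-citation strike, the matched stakes — are hypothetical numbers invented for illustration, not anything from Weinstein or the article:

```python
def citation_option_payoff(citations, strike=100, stake=10_000):
    """Toy cash-settled option on a theory's future citation count.
    The critic and proponent each post `stake`; if citations end up
    below `strike`, the skeptical critic collects, otherwise the
    proponent does. (All numbers are hypothetical illustrations.)"""
    if citations < strike:
        return stake    # critic wins: the theory slid into obscurity
    return -stake       # proponent wins: the theory caught on

# A theory that gathers only 12 citations pays the skeptical critic:
print(citation_option_payoff(12))
print(citation_option_payoff(350))
```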

This article makes it seem like Eric reinvented idea futures.  Except that Eric and I discussed the concept last May, when we had two phone conversations and exchanged seven emails.

In 1996, a Russ Ray published a paper in Futures Research Quarterly that was basically cut and pasted from my Idea Futures paper.  Imitation is the sincerest form of flattery, right?  Hat tip to Jef Allbright.


Cloud Seeding Works

Folks have been seeding clouds to induce rain for over a century, but weather variability has made it hard to collect clear evidence that seeding increases rainfall.  Because of this, many consider cloud seeding to be a pseudoscience.  But the latest Journal of Applied Meteorology and Climatology presents relatively strong support:

An analysis of cloud seeding activity for the period 1960–2005 over a hydroelectric catchment (target) area located in central Tasmania is presented. The analysis is performed using a double ratio on monthly area averaged rainfall for the months May–October. Results indicate that increases in monthly precipitation are observed within the target area relative to nearby controls during periods of cloud seeding activity. Ten independent tests were performed and all double ratios found are above unity with values that range from 5–14%. Nine out of ten confidence intervals are entirely above unity and overlap in the range of 6–11%. Nine tests obtain levels of significance greater than the 0.05 level. If the Bonferroni adjustment is made to account for multiple comparisons, six tests are found to be significant at the adjusted alpha level. Further field measurements of the cloud microphysics over this region are needed to provide a physical basis for these statistical results.
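The abstract's "double ratio" is simple to state: the ratio of target-to-control rainfall during seeded periods, divided by the same ratio during unseeded periods. A sketch with invented rainfall numbers (the Bonferroni threshold of 0.05/10 = 0.005 corresponds to the ten tests mentioned above):

```python
import statistics

def double_ratio(target_seeded, control_seeded, target_unseeded, control_unseeded):
    """(target/control rainfall while seeding) divided by
    (target/control rainfall while not seeding); values above 1
    suggest seeding raised rainfall in the target relative to controls."""
    seeded = statistics.mean(target_seeded) / statistics.mean(control_seeded)
    unseeded = statistics.mean(target_unseeded) / statistics.mean(control_unseeded)
    return seeded / unseeded

# Hypothetical monthly rainfall totals (mm), invented for illustration:
dr = double_ratio([110, 95, 120], [100, 90, 105],
                  [80, 100, 90], [82, 98, 95])
print(dr)  # above 1 would be consistent with a seeding effect

# With ten independent tests, a Bonferroni-adjusted significance
# threshold is 0.05 / 10 = 0.005.
```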

Absence of evidence is not evidence of absence; sometimes it can just take a long time for clear evidence to accumulate.


The intervention and the checklist: two paradigms for improvement

I’m working on a project involving the evaluation of social service innovations, and the other day one of my colleagues remarked that in many cases we really know what works; the issue is getting it done. This reminded me of a fascinating article by Atul Gawande on the use of checklists for medical treatments, which in turn made me think about two different paradigms for improving a system, whether it be health, education, services, or whatever.

The first paradigm–the one we’re taught in statistics classes–is of progress via “interventions” or “treatments.” The story is that people come up with ideas (perhaps from fundamental science, as we non-biologists imagine is happening in medical research, or maybe from exploratory analysis of existing data, or maybe just from somebody’s brilliant insight), and then these get studied (possibly through randomized clinical trials, but that’s not really my point here; my real focus is on the concept of the discrete “intervention”), and then some ideas are revealed to be successful and some are not (with allowances taken for multiple testing or hierarchical structure in the studies), and the successful ideas get dispersed and used widely. There’s then a secondary phase in which interventions can get tested and modified in the wild.

The second paradigm, alluded to by my colleague above, is that of the checklist. Here the story is that everyone knows what works, but for logistical or other reasons, not all these things always get done. Improvement occurs when people are required (or encouraged or bribed or whatever) to do the 10 or 12 things that, together, are known to improve effectiveness. This “checklist” paradigm seems quite different from the “intervention” approach that is standard in statistics and econometrics.

The two paradigms are not mutually exclusive. For example, the items on a checklist might have had their effectiveness individually demonstrated via earlier clinical trials–in fact, maybe that’s what got them on the checklist in the first place. Conversely, the procedure of “following a checklist” can itself be seen as an intervention and be evaluated as such.

And there are other paradigms out there, such as the self-experimentation paradigm (in which the generation and testing of new ideas go together) and the “marketplace of ideas” paradigm (in which more efficient systems are believed to evolve and survive through competitive pressures).

I just think it’s interesting that the intervention paradigm, which is so central to our thinking in statistics and econometrics (not to mention NIH funding), is not the only way to think about process improvement. A point that is obvious to nonstatisticians, perhaps.


An Especially Elegant Evpsych Experiment

Followup to: Adaptation-Executers not Fitness-Maximizers, The Evolutionary-Cognitive Boundary

"In a 1989 Canadian study, adults were asked to imagine the death of children of various ages and estimate which deaths would create the greatest sense of loss in a parent. The results, plotted on a graph, show grief growing until just before adolescence and then beginning to drop. When this curve was compared with a curve showing changes in reproductive potential over the life cycle (a pattern calculated from Canadian demographic data), the correlation was fairly strong. But much stronger – nearly perfect, in fact – was the correlation between the grief curves of these modern Canadians and the reproductive-potential curve of a hunter-gatherer people, the !Kung of Africa. In other words, the pattern of changing grief was almost exactly what a Darwinian would predict, given demographic realities in the ancestral environment…  The first correlation was .64, the second an extremely high .92."

(Robert Wright, summarizing:  "Human Grief:  Is Its Intensity Related to the Reproductive Value of the Deceased?"  Crawford, C. B., Salter, B. E., and Lang, K.L.  Ethology and Sociobiology 10:297-307.)

Disclaimer:  I haven't read this paper because it (a) isn't online and (b) is not specifically relevant to my actual real job.  But going on the given description, it seems like a reasonably awesome experiment.  [Gated version here, thanks Benja Fallenstein.  Odd, I thought I searched for that.  Reading now… seems to check out on the basics.  Correlations are as described, N=221.]

The most obvious inelegance of this study, as described, is that it was conducted by asking human adults to imagine parental grief, rather than asking real parents with children of particular ages.  (Presumably that would have cost more / allowed fewer subjects.)  However, my understanding is that the results here squared well with the data from closer studies of parental grief that were looking for other correlations (i.e., a raw correlation between parental grief and child age).

That said, consider some of this experiment's elegant aspects:

  • A correlation of .92(!)  This may sound suspiciously high – could evolution really do such exact fine-tuning? – until you realize that this selection pressure was not only great enough to fine-tune parental grief, but, in fact, carve it into existence from scratch in the first place.
  • People who say that evolutionary psychology hasn't made any advance predictions are (ironically) mere victims of "no one knows what science doesn't know" syndrome.  You wouldn't even think of this as an experiment to be performed if not for evolutionary psychology.
  • The experiment illustrates as beautifully and as cleanly as any I have ever seen, the distinction between a conscious or subconscious ulterior motive and an executing adaptation with no realtime sensitivity to the original selection pressure that created it.
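The reported .64 and .92 are ordinary Pearson correlations between a grief curve and a reproductive-value curve across child ages. A self-contained sketch with invented numbers shows the computation:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical curves over child ages 0, 5, 10, 15, 20 (illustration only):
grief       = [4.0, 5.5, 6.5, 6.0, 5.0]   # imagined grief-intensity ratings
repro_value = [3.5, 5.0, 6.8, 6.2, 4.8]   # imagined reproductive value by age
print(pearson_r(grief, repro_value))      # near 1 when the curves share a shape
```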



Our Biggest Surprise

We as a civilization know a lot more today than a random hunter-gatherer from fifty thousand years ago knew.  At a fun dinner with Cosmic Variance's thoughtful Sean Carroll last night, I asked:  What have we learned that is the most surprising?  Sean initially answered "quantum mechanics" but I complained that this bundles together too many different things we've learned; I instead want to know: what single thing we have learned would most surprise our distant ancestors?

Sean then suggested non-determinism, that quantum mechanics appears to suggest that the past does not determine the future.  I suggested what would most surprise our distant ancestors is how big is our universe.  It is big in time and space, in extent and detail, and in the range of things that can fill this extended detailed spacetime. 

So what would you say has been our biggest surprise, weighing not just raw info but also that info's relevance? 

Added:  OK, I see two related surprises, one empirical and one logical. The empirical surprise is that the universe really is big.  The logical surprise is that a big enough universe with a small number of simple essences can reproduce all of the complex local phenomena that one might otherwise explain via design or a large number of essences.  So per Eliezer and Julian, enough inanimate objects can produce animate object behavior, and per Jed enough incremental adjustments can produce bio and social order.

More added:  Sean remembers the conversation a bit differently; he's probably right. He also asks "the complementary question: what is the most surprising thing about the universe that we haven’t yet discovered, but plausibly could?"


Alien Bad Guy Bias

The Bad Guy Bias applies to Earth signals to aliens.  From the NYT:

The makers of the new movie “The Day the Earth Stood Still” have arranged for it to be beamed into space on … the same day the movie opens here on planet Earth. … Dr. Shostak, who was a consultant for the new movie … [says] there are some people, he acknowledges, who might worry that broadcasting “The Day the Earth Stood Still” could be inimical to our interests. He added, “I think that if these people are truly worried about such things, they might best begin by shutting down the radar at the local airport.”

Shostak is right; compared to intentional signals, unintentional signals are a million times larger:

There are three large-dish instruments in the world that are currently employed for doing radar investigations of planets, asteroids and comets: ART (Arecibo Radar Telescope), GSSR (Goldstone Solar System Radar), and EPR (Evpatoria Planetary Radar). Radiating power and directional diagram of these instruments is so outstanding that it also allows us to emit radio messages to outer space, which are practically detectable everywhere in the Milky Way. This dedicated program is called METI (Messaging to Extra-Terrestrial Intelligence) …



Test Near, Apply Far

Companies often ask me if prediction markets can forecast distant future topics.  I tell them yes, but that is not the place to test any doubts about prediction markets. To vet or validate prediction markets, you want topics where there will be many similar forecasts over a short time, with other mechanisms making forecasts that can be compared. 

If you came up with an account of the cognitive processes that allowed Newton or Einstein to make their great leaps of insight, you would want to look for where that or related accounts applied to more common insight situations.  An account that only applied to a few extreme "geniuses" would be much harder to explore, since we know so little about those few extreme cases.

If you wanted to explain the vast voids we seem to see in the distant universe, and you came up with a theory of a new kind of matter that could fill that void, you would want to ask where nearby one might find or be able to create that new kind of matter.  Only after confronting this matter theory with local data would you have much confidence in applying it to distant voids.

It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions.  To see if such things are useful, we need to vet them, and that is easiest "nearby", where we know a lot.  When we want to deal with or understand things "far", where we know little, we have little choice other than to rely on mechanisms, theories, and concepts that have worked well near.  Far is just the wrong place to try new things.

There are a bazillion possible abstractions we could apply to the world.  For each abstraction, the question is not whether one can divide up the world that way, but whether it "carves nature at its joints", giving useful insight not easily gained via other abstractions.  We should be wary of inventing new abstractions just to make sense of things far; we should insist they first show their value nearby. 


Animal experimentation: morally acceptable, or just the way things always have been?

Following the announcement last week that Oxford University’s controversial Biomedical Sciences building is now complete and will be open for business in mid-2009, the ethical issues surrounding the use of animals for scientific experimentation have been revisited in the media—see, for example, here, here, and here.

The number of animals used per year in scientific experiments worldwide has been estimated at 200 million—well in excess of the population of Brazil and over three times that of the United Kingdom. If we take the importance of an ethical issue to depend in part on how many subjects it affects, then, the ethics of animal experimentation at the very least warrants consideration alongside some of the most important issues in this country today, and arguably exceeds them in importance. So, what is being done to address this issue?

In the media, much effort seems to be devoted to discrediting concerns about animal suffering and reassuring people that animals used in science are well cared for, and relatively little effort is spent engaging with the ethical issues. However, it seems likely that no amount of reassurance about primate play areas and germ-controlled environments in Oxford’s new research lab will allay existing concerns about the acceptability of, for example, inducing heart failure in mice or inducing Parkinson’s disease in monkeys—particularly since scientists are not currently required to report exactly how much suffering their experiments cause to animals. Given the suffering involved, are we really sure that experimenting on animals is ethically justifiable?

In attempting to answer this question, it is disturbing to note some inconsistencies in popular views of science. Consider, for example, that by far the most common argument in favour of animal experimentation is that it is an essential part of scientific progress. As Oxford’s oft-quoted Professor Alastair Buchan reminds us, ‘You can’t make a head injury in a dish, you can’t create a stroke in a test tube, you can’t create a heart attack on a chip: it just doesn’t work’. Using animals, we are told, is essential if science is to progress. Since many people are apparently convinced by this argument, they must therefore believe that scientific progress is something worthwhile—that, at the very least, its value outweighs the suffering of experimental animals. And yet, at the same time, we are regularly confronted with the conflicting realisation that, far from viewing science as a highly valuable and worthwhile pursuit, the public is often disillusioned and exasperated with science. Recently, for example, people have expressed bafflement that scientists have spent time and money on seemingly trifling projects—such as working out the best way to swat a fly and discovering why knots form—and on telling us things that we already know: that getting rid of credit cards helps us spend less money, and that listening to very loud music can damage hearing. Why, when the public often seems to despair of science, do so many people appear to be convinced that scientific progress is so important that it justifies the suffering of millions of animals?


Behold Our Ancestors


A community of the bacteria Candidatus Desulforudis audaxviator has been discovered 2.8 kilometres beneath the surface of the Earth in fluid-filled cracks of the Mponeng goldmine in South Africa. Its 60C home is completely isolated from the rest of the world, and devoid of light and oxygen. … 

99.9% of the DNA [there] belonged to one bacterium, a new species. The remaining DNA was contamination from the mine and the laboratory. …  A community of a single species is almost unheard of in the microbial world. … Deep-sea vent communities, for instance … use oxygen … produced by photosynthesising plankton at the surface. ….

