Category Archives: Future

Breeding happier livestock: no futuristic tech required

I talk to a lot of people who are enthusiastic about the possibility that advanced technologies will provide more humane sources of meat. Some have focused on in vitro meat, a technology which investor Peter Thiel has backed. Others worry that in vitro meat would reduce the animal population, and hope to use futuristic genetic engineering to produce animals that feel more pleasure and less pain.

But would it really take radical new technologies to produce happy livestock? I suspect that some of these enthusiasts have been distracted by the shiny, Far-mode sci-fi solution of genetic engineering, to the point of missing a powerful, long-used, mundane agricultural alternative: selective animal breeding.

Modern animal breeding is able to shape almost any quantitative trait with significant heritable variation in a population. One carefully measures the trait in different animals, and selects sperm for the next generation on that basis. So far this has not been done to reduce animals’ capacity for pain, or to increase their capacity for pleasure, but it has been applied to great effect elsewhere.

One could test varied behavioral measures of fear response, and physiological measures like cortisol levels, and select for them. As long as the measurements in aggregate tracked one’s conception of animal welfare closely enough, breeders could easily generate immense increases in livestock welfare, many standard deviations, initially at low marginal cost in other traits.
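The selection machinery described above can be put in concrete terms with the standard breeder's equation, R = h²·S. The sketch below is purely illustrative; the heritability and welfare-index numbers are invented, not drawn from any real livestock program:

```python
# Breeder's equation sketch: R = h^2 * S, where S is the selection
# differential (selected parents' mean minus population mean) and h^2
# is the trait's narrow-sense heritability.  All numbers are invented.

def response_to_selection(h2, pop_mean, selected_mean):
    """Expected per-generation shift in the population mean of a trait."""
    S = selected_mean - pop_mean   # selection differential
    return h2 * S                  # response to selection

# Suppose a composite welfare index (calm behavior, low cortisol) has
# heritability 0.3, a population mean of 100, and breeders use only
# animals whose mean index is 120:
gain = response_to_selection(h2=0.3, pop_mean=100.0, selected_mean=120.0)
print(gain)  # ~6 index points of expected improvement per generation
```

Compounded over generations, even modest per-generation gains of this sort can accumulate into the "many standard deviations" of improvement mentioned above.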

Just how powerful are ordinary animal breeding techniques? Consider cattle:

In 1942, when my father was born, the average dairy cow produced less than 5,000 pounds of milk in its lifetime. Now, the average cow produces over 21,000 pounds of milk. At the same time, the number of dairy cows has decreased from a high of 25 million around the end of World War II to fewer than nine million today. This is an indisputable environmental win as fewer cows create less methane, a potent greenhouse gas, and require less land.

Wired has an impressive chart of turkey weight over time.

CO2 Warming Looks Real

Many have bent my ear over the last few months about global warming skepticism.  So I’ve just done some moderate digging, and conclude:

  1. In the last half billion years, CO2 has at times been 15 times as dense as it is today, yet temperatures were never more than about 10°C warmer.  So that is roughly as bad as warming could get.
  2. In the last million years, CO2 usually rises after warming; clearly warming often causes CO2 increases.
  3. CO2 is clearly way up (~30%) over 150 years, and rising fast, mainly due to human emissions.  CO2 is denser than it’s been for half a million years.
  4. The direct warming effect of CO2 is mild and saturating; the effects of concern are indirect, e.g., water vapor and clouds, but the magnitude and sign of these indirect effects are far from clear.
  5. Climate model builders make indirect effect assumptions, but most observers are skeptical they’ve got them right.
  6. This uncertainty alone justifies substantial CO2 mitigation (emission cuts or geoengineering), if we are risk-averse enough and if mitigation risks are weaker.
  7. Standard warming records show a real and accelerating rise, roughly matching the CO2 rise.
  8. Such warming episodes seem common in recent history.
  9. The match between recent warming and CO2 rise details is surprisingly close, substantially raising confidence that CO2 is the main cause of recent warming.  (See this great analysis by Pablo Verdes.)  This adds support for mitigation.
  10. Among the few bets on global warming, the consensus is for more warming.
  11. Geoengineering looks far more likely to be feasible and acceptable mitigation than emissions cuts.
  12. Some doubt standard warming records, saying they are biased by urban measuring sites and arbitrary satellite record corrections.  Temperature proxies like tree rings diverge from standard records in the last fifty years.  I don’t have time to dig into these disputes, so for now I defer to the usual authorities.
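On point 4, the saturation of the direct effect can be seen in the standard simplified forcing approximation, which is logarithmic in concentration. The sketch below uses the commonly cited 5.35 W/m² coefficient and treats 280 ppm as the preindustrial baseline; both are conventional reference values, not claims from this post:

```python
import math

# Direct CO2 radiative forcing is roughly logarithmic in concentration
# (simplified expression dF = 5.35 * ln(C/C0) W/m^2), so each extra
# increment of CO2 adds less direct warming than the previous one.

def co2_forcing(c_ppm, c_ref_ppm=280.0):
    """Approximate direct radiative forcing relative to a reference level."""
    return 5.35 * math.log(c_ppm / c_ref_ppm)

first_30pct = co2_forcing(280.0 * 1.3)   # the ~30% rise seen so far
doubling = co2_forcing(280.0 * 2.0)      # a full doubling
print(round(first_30pct, 2), round(doubling, 2))  # ~1.4 vs ~3.71 W/m^2
```

The point is only the shape of the curve; as item 4 says, the large uncertainties lie in the indirect feedbacks, not in this direct term.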

It was mostly skeptics bending my ear, and skeptical arguments are easier to find on the web.  But for now, the other side has convinced me.

Added: The Verdes paper is also here.  Here is his key figure: Continue reading "CO2 Warming Looks Real" »


Capitas Vs. Per Capita

From a recent Science:

Agriculture and cities made human life better, right? Wrong, say archaeologists who presented stunning new evidence at the American Association of Physical Anthropologists meeting. They pooled data on standardized indicators of health from skeletal remains, including stature, dental health, degenerative joint disease, anemia, trauma, and the isotopic signatures of what they ate, and gathered data on settlement size, latitude, and socioeconomic and subsistence patterns. They found that the health of many Europeans began to worsen markedly about 3000 years ago, after agriculture became widely adopted in Europe and during the rise of the Greek and Roman civilizations. …

The team presented the first analysis of data on 11,000 individuals who lived from 3000 years ago until 200 years ago throughout Europe and the Mediterranean … The project has taken 8 years and $1.2 million to organize so far.

The longest-term trends we can see clearly forecast growth in the total capacity and power of humanity and its descendants.  But this does not imply growth in the quality of individual lives.  While individual lives may have improved on average over the last two hundred years, over longer timescales we have seen sustained and substantial declines.

Looking to the future, we can have far more confidence in a continued growth in total capacity than in improved quality of individual lives.  If, like me, you count the vast increase in the number of lives worth living as a grand and glorious thing, you'll think the future a better place even if individual lives get somewhat worse.  If, like many others, you care little about creatures who do not yet exist, you can reasonably think the future will be a worse place. 


Future Incompetence

The book Human Enhancement is finally out.  My chapter is second to last, just after a thoughtful one by Daniel Wikler:

It is often observed that mildly or even moderately retarded people do not seem dull to themselves as long as they stay on the farm (rather: certain farms), but become so immediately when they move to the city.  Here the relative difference between a dull or not-very-bright minority and the majority who are just below average, or better, becomes important, and as that majority arranges society to suit themselves, their less-bright peers become incompetent. …

Those who are rendered incompetent in this manner need supervision, and in order to protect them in that now-dangerous environment, their rights are taken away.  Humane regimes strive to protect as much of their range of free choice as possible, consistent with the need to protect them from serious or irremediable harm (and to protect others), but there is no supposition that everyone has a natural, inalienable right to self-determination that would rule out all configurations of the social and physical environment that are disadvantageous to the less-talented. …

What, then, would be the effect of selective enhancement of intellectual capacity – that is, enhancement of some, but not all – for the social and political world that we "normals" would inhabit?  Would it erode the foundations of egalitarianism, undermining the claims of many who now hold title as citizens to that equal status?  Would those made or engineered to be born smart be within their rights to deprive the rest of us of our rights, presumably with a humanitarian intent?  In a word: yes. …

Should we be eternally vigilant and suspicious of people who appoint themselves "guardians", profess humanitarian motives, and then take over our lives?  Or do the shoes just hurt because they would be on our feet?

This is a great test case for paternalists: if you feel that your superior mind justifies ruling the lives of others, would you accept having your life ruled by future folk with greatly enhanced minds?


The Pascal’s Wager Fallacy Fallacy

Today at lunch I was discussing interesting facets of second-order logic, such as the (known) fact that first-order logic cannot, in general, distinguish finite models from infinite models.  The conversation branched out, as such things do, to why you would want a cognitive agent to think about finite numbers that were unboundedly large, as opposed to boundedly large.
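For the curious, the standard proof of that known fact goes through the compactness theorem; here is a sketch (my gloss, not part of the original conversation):

```latex
% Sketch: no first-order sentence is true in exactly the finite models.
\[
  \lambda_n \;\equiv\; \exists x_1 \cdots \exists x_n
     \bigwedge_{1 \le i < j \le n} x_i \neq x_j
  \qquad \text{(``there are at least $n$ elements'')}
\]
% If some sentence $\varphi$ held in exactly the finite structures, then
% every finite subset of $\{\varphi\} \cup \{\lambda_n : n \ge 1\}$ would
% be satisfied by a sufficiently large finite model of $\varphi$.  By
% compactness the whole set has a model, which must be infinite;
% this contradicts the choice of $\varphi$.
```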

So I observed that:

  1. Although the laws of physics as we know them don't allow any agent to survive for infinite subjective time (do an unboundedly long sequence of computations), it's possible that our model of physics is mistaken.  (I go into some detail on this possibility below the cutoff.)
  2. If it is possible for an agent – or, say, the human species – to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence.

And the one said, "Isn't that a form of Pascal's Wager?"

I'm going to call this the Pascal's Wager Fallacy Fallacy.

You see it all the time in discussion of cryonics.  The one says, "If cryonics works, then the payoff could be, say, at least a thousand additional years of life."  And the other one says, "Isn't that a form of Pascal's Wager?"

The original problem with Pascal's Wager is not that the purported payoff is large.  This is not where the flaw in the reasoning comes from.  That is not the problematic step.  The problem with Pascal's original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God).
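The structure of that objection can be made concrete with a toy expected-value calculation; every probability and payoff below is made up purely to show the shape of the argument:

```python
# Toy expected values.  Pascal's original Wager pairs an exponentially
# tiny probability with an equally tiny opposite-sign alternative; a
# merely-large-payoff wager has no such symmetric cancelling term.

def expected_value(outcomes):
    """outcomes: iterable of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

tiny = 1e-50   # stand-in for an "exponentially tiny" prior

# Believe in god A: reward if A exists, symmetric damnation if rival B does.
pascal = expected_value([(tiny, 1e30), (tiny, -1e30), (1 - 2 * tiny, 0.0)])

# A large-payoff wager with an ordinary probability and no mirror image.
plain = expected_value([(0.05, 1000.0), (0.95, 0.0)])

print(pascal, plain)  # the tiny symmetric terms cancel; the second does not
```

The flaw thus sits in the probability structure, not in the size of the payoff, which is exactly the distinction the paragraph above draws.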

Continue reading "The Pascal’s Wager Fallacy Fallacy" »


Break Cryonics Down

The essence of analysis is to "break it down", to take apart vague wholes into clearer parts.  For the same reasons we make point lists to help us make tough job decisions, or ask people who sue for damages to name an amount and break it into components, we should try to break down these important social claims via simple calculations.  And the absence of attempts at this is a sad commentary on something. [Me last July]

Imagine you disagreed with someone about the fastest way to get from your office to Times Square NYC; you said drive, they said fly.  You broke down your time estimates for the two paths into part estimates: times to drive to the airport, wait at the airport, fly, wait for a taxi, ride the taxi, etc.  They refused to offer any component estimates; they just insisted on confidence in their total difference estimate. 

Similarly, imagine someone who disagreed with you about which of two restaurants was better for a certain group, but wouldn't break that down into who would like or dislike what aspects of the two places.  Or imagine someone who claimed their business plan would be profitable, but refused to break this down into how many of what types of units would be sold when, or what various inputs would cost.  Or someone who said US military spending was worth the cost, but refused to break this down into which enemies were how discouraged from what sorts of damage by that last spending increment.
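The office-to-Times-Square example can be put in exactly this component form; every number below is a hypothetical estimate, inserted only to show the method:

```python
# "Break it down": compare two total-time estimates by summing explicit
# component estimates, so a dispute can focus on individual parts.
# All minutes below are hypothetical.

drive_route = {"drive office -> Times Square": 240}
fly_route = {
    "drive to airport": 45,
    "wait at airport": 90,
    "flight": 60,
    "wait for taxi": 20,
    "taxi to Times Square": 40,
}

def total_minutes(components):
    return sum(components.values())

print(total_minutes(drive_route), total_minutes(fly_route))  # 240 vs 255
```

A disputant who refuses to offer any component estimate leaves nothing here to check or correct, which is the complaint of the paragraph above.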

Such silent disputants reject our most powerful tool for resolving disagreements: analysis – breaking vaguer wholes into clearer parts.  Either they have not used this tool to test or refine their estimates, or they are not willing to discuss such parts with you.  I felt Tyler made this analysis-blocking move in our diavlog:

Continue reading "Break Cryonics Down" »


My Cryonics Hour

To encourage people to sign up for cryonics, I've offered to debate influential bloggers on the subject.  Spurred by recent successes, and failures, I'll up the ante:

I hereby offer to talk for one hour on any subject to anyone who can show me they've newly signed up for cryonics.  You can record the conversation, publish it, and can sell your time to someone else. 

Yes, I know, this may not exactly be a huge incentive to most people, but it's what I have to offer.

Added: The Blogging Heads TV folks are interested in a cryonics debate, if that tips any of you influential bloggers over the line.


More Getting Froze

Eliezer and I posted last fall on cryonics, and someone connected with the cryonics firm Alcor recently told us that 7-8 recent customers who signed up, a notable fraction of the total, mentioned Eliezer, me, or these posts!  OB reader Fortune Elkins was apparently also instrumental.

I'm proud to have had some influence, though it is still sad that the numbers are so low that our modest effort could make such a difference.  I'll post more on cryonics soon.


…And Say No More Of It

Followup to: The Thing That I Protect

Anything done with an ulterior motive has to be done with a pure heart.  You cannot serve your ulterior motive, without faithfully prosecuting your overt purpose as a thing in its own right, that has its own integrity.  If, for example, you're writing about rationality with the intention of recruiting people to your utilitarian Cause, then you cannot talk too much about your Cause, or you will fail to successfully write about rationality.

This doesn't mean that you never say anything about your Cause, but there's a balance to be struck.  "A fanatic is someone who can't change his mind and won't change the subject."

In previous months, I've pushed this balance too far toward talking about Singularity-related things.  And this was for (first-order) selfish reasons on my part; I was finally GETTING STUFF SAID that had been building up painfully in my brain for FRICKIN' YEARS.  And so I just kept writing, because it was finally coming out.  For those of you who have not the slightest interest, I'm sorry to have polluted your blog with that.

When Less Wrong starts up, it will, by my own request, impose a two-month moratorium on discussion of "Friendly AI" and other Singularity/intelligence explosion-related topics.

There's a number of reasons for this.  One of them is simply to restore the balance.  Another is to make sure that a forum intended to have a more general audience, doesn't narrow itself down and disappear.

But more importantly – there are certain subjects which tend to drive people crazy, even if there's truth behind them.  Quantum mechanics would be the paradigmatic example; you don't have to go funny in the head but a lot of people do.  Likewise Gödel's Theorem, consciousness, Artificial Intelligence –

The concept of "Friendly AI" can be poisonous in certain ways.  True or false, it carries risks to mental health.  And not just the obvious liabilities of praising a Happy Thing.  Something stranger and subtler that drains enthusiasm.

Continue reading "…And Say No More Of It" »


The Thing That I Protect

Followup to: Something to Protect, Value is Fragile

"Something to Protect" discussed the idea of wielding rationality in the service of something other than "rationality".  Not just that rationalists ought to pick out a Noble Cause as a hobby to keep them busy; but rather, that rationality itself is generated by having something that you care about more than your current ritual of cognition.

So what is it, then, that I protect?

I quite deliberately did not discuss that in "Something to Protect", leaving it only as a hanging implication.  In the unlikely event that we ever run into aliens, I don't expect their version of Bayes's Theorem to be mathematically different from ours, even if they generated it in the course of protecting different and incompatible values.  Among humans, the idiom of having "something to protect" is not bound to any one cause, and therefore, to mention my own cause in that post would have harmed its integrity.  Causes are dangerous things, whatever their true importance; I have written somewhat on this, and will write more about it.

But still – what is it, then, the thing that I protect?

Friendly AI?  No – a thousand times no – a thousand times not anymore.  It's not thinking of the AI that gives me strength to carry on even in the face of inconvenience.

Continue reading "The Thing That I Protect" »
