Breeding happier livestock: no futuristic tech required

I talk to a lot of people who are enthusiastic about the possibility that advanced technologies will provide more humane sources of meat. Some have focused on in vitro meat, a technology which investor Peter Thiel has backed. Others worry that in vitro meat would reduce the animal population, and hope to use futuristic genetic engineering to produce animals that feel more pleasure and less pain.

But would it really take radical new technologies to produce happy livestock? I suspect that some of these enthusiasts have been distracted by the shiny Far sci-fi solution of genetic engineering, to the point of overlooking a powerful, long-used mundane agricultural version: animal breeding.

Modern animal breeding is able to shape almost any quantitative trait with significant heritable variation in a population. One carefully measures the trait in different animals, and selects sperm for the next generation on that basis. So far this has not been done to reduce animals’ capacity for pain, or to increase their capacity for pleasure, but it has been applied to great effect elsewhere.

One could test varied behavioral measures of fear response, and physiological measures like cortisol levels, and select for them. As long as the measurements in aggregate tracked one’s conception of animal welfare closely enough, breeders could easily generate immense increases in livestock welfare, many standard deviations, initially at low marginal cost in other traits.
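To make the mechanics concrete, here is a toy sketch of such index selection. The welfare measures, weights, selected fraction, and heritability below are all invented for illustration; the expected per-generation response uses the standard breeder's equation R = h²S:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-animal welfare measures (all values invented).
n = 1000
cortisol = rng.normal(50, 10, n)    # stress hormone level; lower is better
fear_score = rng.normal(5, 2, n)    # behavioral fear-test score; lower is better

def zscore(x):
    """Standardize a measure to mean 0, SD 1."""
    return (x - x.mean()) / x.std()

# Aggregate welfare index (higher = better); the weights are placeholders
# standing in for whatever combination tracks one's conception of welfare.
welfare_index = -0.6 * zscore(cortisol) - 0.4 * zscore(fear_score)

# Truncation selection: breed only from the top 10% on the index.
threshold = np.quantile(welfare_index, 0.9)
selected = welfare_index >= threshold
S = welfare_index[selected].mean() - welfare_index.mean()  # selection differential

# Breeder's equation: expected response per generation, assuming a
# hypothetical heritability of 0.3 for the index.
h2 = 0.3
R = h2 * S
print(f"selection differential: {S:.2f}, expected response: {R:.2f} per generation")
```

Iterated over generations, responses of this size are what compound into many-standard-deviation shifts, just as with milk yield and turkey weight below.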

Just how powerful are ordinary animal breeding techniques? Consider cattle:

In 1942, when my father was born, the average dairy cow produced less than 5,000 pounds of milk in its lifetime. Now, the average cow produces over 21,000 pounds of milk. At the same time, the number of dairy cows has decreased from a high of 25 million around the end of World War II to fewer than nine million today. This is an indisputable environmental win as fewer cows create less methane, a potent greenhouse gas, and require less land.

 Wired has an impressive chart of turkey weight over time:


Anderson, who has bred the birds for 26 years, said the key technical advance was artificial insemination, which came into widespread use in the 1960s, right around the time that turkey size starts to skyrocket…

This process, compounded over dozens of generations, has yielded turkeys with genes that make them very big. In one study in the journal Poultry Science, turkeys genetically representative of old birds from 1966 and modern turkeys were each fed the exact same old-school diet. The 2003 birds grew to 39 pounds while the legacy birds only made it to 21 pounds. Other researchers have estimated that 90 percent of the changes in turkey size are genetic.

Moreover, breeders are able to improve complex weighted mixtures of diverse traits:

The bull market (heh) can be reduced to one key statistic, lifetime net merit, though there are many nuances that the single number cannot capture. Net merit denotes the likely additive value of a bull’s genetics. The number is actually denominated in dollars because it is an estimate of how much a bull’s genetic material will likely improve the revenue from a given cow. A very complicated equation weights all of the factors that go into dairy breeding and — voila — you come out with this single number. For example, a bull that could help a cow make an extra 1000 pounds of milk over her lifetime only gets an increase of $1 in net merit while a bull who will help that same cow produce a pound more protein will get $3.41 more in net merit. An increase of a single month of predicted productive life yields $35 more.
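Reading the quoted weights as a linear index makes the arithmetic easy to check. The sketch below assumes only the three weights mentioned ($1 per 1,000 pounds of milk, $3.41 per pound of protein, $35 per month of productive life), whereas the real net merit formula includes many more traits:

```python
def net_merit(extra_milk_lb, extra_protein_lb, extra_productive_months):
    """Toy linear net-merit index in dollars, using only the three
    weights quoted above; the actual formula has many more terms."""
    return (extra_milk_lb / 1000.0 * 1.00       # $1 per 1,000 lb of milk
            + extra_protein_lb * 3.41           # $3.41 per lb of protein
            + extra_productive_months * 35.0)   # $35 per month of productive life

# A bull predicted to add 1,000 lb of milk, 10 lb of protein, and
# 2 months of productive life contributes about $105 of net merit:
print(net_merit(1000, 10, 2))  # ~105.1
```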

No futuristic technologies needed: just feed accurate enough measurements of animal welfare into the net merit equation, and similar progress could begin on the new trait. So why do some animal activists focus on radical future genetic engineering while the already-existing mundane version goes unused, in part because of opposition from fellow animal advocates (see Katja’s post)?

Added December 8th:

Gaverick Matheny reports that some breeds have been selected in part for welfare. However, because breeders have not yet finished optimizing farm animals for productivity, and market and other pressures for welfare improvement remain weak, the opportunity cost of forgoing further productivity gains has so far been too high for welfare breeding to take off.

  • Drewfus

    “One carefully measures the trait in different animals, and selects sperm for the next generation on that basis.”

    How does one measure the trait of happiness?

    “Modern animal breeding is able to shape almost any quantitative trait with significant heritable variation in a population.”

    When breeders selected Siberian foxes on the basis of domestic traits like tameness, they did indeed get tameness, but also got the retention of juvenile traits by adult foxes. http://en.wikipedia.org/wiki/Domesticated_silver_fox What other traits would happy cows have – low intelligence perhaps? I doubt the opposite.

    “So why do some animal activists focus on a radical future genetic engineering while the already-existing mundane version goes unused…?”

    Good question. This turning down of simple, elegant, low-tech solutions in favor of the opposite seems common. Going off on a wild tangent: why does NASA spend such huge sums on cutting-edge tech trying to find life on Mars, when a simpler quest would be to attempt to grow stuff on Mars? Take along a few gallons of water and some seeds, inject the seeds into the ground, dump the water on top, then wait to see what happens. If they can’t grow anything on Mars, why suppose there is life to be found there?

    • dmytryl

      You don’t need to take anything to Mars to check if it can grow there, though. Just simulate the environment in a lab.

    • Hedonic Treader

      How does one measure the trait of happiness?
      Maybe stress hormone levels, analysis of behavior, responses to stimuli, vocalizations, possibly even brain scans?

      What other traits would happy cows have – low intelligence perhaps?
      As long as it doesn’t conflict too strongly with the productivity goals, it doesn’t matter. The industry could sell it as low cruelty.

      It’s a great idea, but it probably won’t happen. If possible, the industry doesn’t want to talk about animal suffering at all. The public reacts negatively to manipulation and exploitation symbols (“Oh no! It’s Brave New World for cows!”). And marginal increases on fuzzy animal welfare measures are hard to sell as real improvements, even if, quantitatively speaking, they are.

      Then again, it avoids the cross-species genetic engineering stigma.

    • Muga Sofer

      NASA already knows there are extremophiles that can survive off earth, they’re looking for native life. Y’know, to study it?

      And various ways of measuring “pain” were mentioned in the article.

    • Drewfus

      @dmytryl – That might be possible if the environment is already well known. The atmosphere would be. Sun radiation too. Temperature distribution. The ‘earth’ – to some extent. Not possible for gravity – perhaps irrelevant? So what was or would be the outcome of attempting this simulation? A multi-generational experiment that moved from near Earth-like conditions to approximate Mars-like conditions might be the most fruitful. However, a negative outcome would not bode well for missions to Mars to search for life. Perhaps there is a bit of motivated ignorance in this regard?

      @Hedonic Treader “The industry could sell it as low cruelty.”

      The industry could sell it, or you would like to think it could?
      Low Cruelty Beef – sounds odd, and might only invite a backlash against the industry because of what it implies about the other stuff.

      @Muga Sofer – A prediction market with odds of 500-1 on a billion dollar project to search for life on a minute patch of Mars might put the desire to study it in perspective. This never-ending quest to find life on Mars is starting to look a bit silly. At least the proven ability to grow plants on Mars would indicate some long-term benefit in continuing to explore there, with a view to establishing human settlements.

      “And various ways of measuring ‘pain’ were mentioned in the article.”

      Suggesting a reduced capacity to feel pain as a proxy for happiness is laughable. Any animal with a lessened capacity to feel pain becomes a danger to itself, and in no way would this reduced capacity for pain be a proxy for greater wellbeing or a higher propensity for happiness – indeed the opposite would be true.

      • dmytryl

        Gravity would be irrelevant to the single celled life, and anything really small for that matter.

        I don’t think it really is that interesting to do. Mars is still hot deep down, and there are extremophiles living inside the Earth, literally in the rock, way down, independent of sunlight. So it is pretty much known that Earth life would survive somewhere on Mars: there’s life in extreme Earth environments that also exist on Mars. The interesting question is whether there’s life on Mars, and if there is, does it share the origin, or does it use some entirely different genetic material. If we find other-origin life, this rules out the speculation that emergence of life is incredibly unlikely (note that the probability of emergence of life on earth, for all we know, could range from something obscenely unlikely such as 10^-1000 to nearly 1).

      • Drewfus

        Okay, i see the point, but it does seem a bit odd to me to be still investigating the highly unlikely probability of life on Mars, partly to prove that the emergence of life in the universe is not incredibly unlikely – or otherwise to not find life on Mars and leave the huge range of estimates for life emerging in the universe largely untouched. Gambling big money over pseudo-scientific speculation doesn’t sound worthwhile. Proving there was ‘a second genesis’ doesn’t sound too interesting to an atheist, at least not much more interesting than finding extremophiles on Earth. How could it be?

      • dmytryl

         Why would life on Mars be incredibly unlikely, again?

        WRT the second origin, a different tree of life would almost certainly be of much practical use. It would also be very informative of life elsewhere in the galaxy, and via the Fermi paradox, somewhat informative of our own probable fate. Fermi paradox is so uninformative because the simplest evolvable self replicating life we know is incredibly complex and abiogenesis of such life is incredibly improbable (i.e. would happen only once in a volume much larger than observable universe).

      • Drewfus

        “Why would life on Mars be incredibly unlikely, again?”

        1. Decades after Viking there is no sign of it.
        2. Life has not been sustained in Mars-replicating conditions on Earth.
        3. No one has bothered to create a prediction market for it, suggesting the odds would be so long that no interest would be generated.

        I agree with what you say about a second known tree of life. Unanonymously liking your comment…

        Re your edit – yeh i agree that intelligent systems produce outputs that are not predictable from the inputs, but that doesn’t differentiate that feature of the black box of intelligence from a randomization function.

        What i think is going on with intelligent systems is relatively large amounts of input pre-processing. In fact i would not classify intelligent systems as being synonymous with the input -> processing -> output model at all (serial or parallel processing) – that is the host (system). Intelligence is a value-adding feature of the host. Its purpose is to constrain, limit or compress inputs, without compromising outputs. Humans have huge pre-processing capabilities, but our processing capabilities are meagre – often no greater than other primates and sometimes worse. Regarding the Singularity, the point when computers have processing abilities to rival humans, that has already occurred – circa 1950. An example of limiting inputs in a non-threatening manner is seen in prehistoric cave paintings of animals and outlines of body parts, like hands. What these people were doing is training their minds to recognize objects with greatly reduced inputs. All intelligence follows this pattern.

      • dmytryl

        1: We only looked at the surface, where there can’t be a lot of it.
        2: Mars has a variety of conditions, and some of those are similar to some conditions on Earth (e.g. deep down) where we do find life.
        3: Not even an argument IMO. You could as well argue that something isn’t true because nobody made a commercially successful computer game with that as a concept.

        4: My point was that it is not easy to assign adequate worth to the knowledge you haven’t yet acquired. The simplistic ‘evaluate expected utility’ approach is not effective. Observe a certain “rationalist” community, for example. Reasonable approach when you want to give money away productively: demand a signal such as non-handwavy accomplishments; it’s cheap for those that can do something, not worth it for sociopaths, plus for people with such accomplishments it’s not worth it scamming you. You acquire information to decide. The ‘rationalist’ approach: don’t try to acquire information, sit n ponder the probability, which they can’t do, so you get some big Bell curve of estimates and the tail end donates. Like nerds (not) making smalltalk: never asking anyone, etc.

      • Drewfus

        “My point was that it is not easy to assign adequate worth to the knowledge you haven’t yet acquired. The simplistic ‘evaluate expected utility’ approach is not effective.”

        Yes, i agree.

        The Thesis > Antithesis > Synthesis paradigm is powerful, but does not necessarily support maximizing utility at each stage. Instead it supports the idealization of certain benefits at each stage, while ignoring the costs. The synthesis eventually becomes the new thesis. The French Revolution idealized the overthrow of absolute monarchy, at the cost of many severed heads and general violence. Ultimately the violence was overcome too. Trying for both at once will fail. (Iraq/Afghanistan?)

        A generalization of the T > A > S concept is the idea of a ‘stepping stone’. The best step to take can often only be calculated in hindsight, but the imperative to ‘step’ is unavoidable – remaining at the starting position leads to an unavoidable downfall. The concept of commitment is relevant. Perhaps define a living organism as: an agent encumbered by non-negotiable commitments.

  • http://twitter.com/creedofhubris Frederic Bush

    In a similar fashion, death penalty protesters should alter the focus of their efforts to making sure that prisoners are properly entertained before their executions.

    • Hedonic Treader

      Are all of the people who “liked” this really vegans? This argument would make sense in a society with laws and customs that are already strongly anti-speciesist, or that has a realistic chance of actually getting there.

      There is no such society on the planet now.

    • Carl Shulman

      Note the first paragraphs: this is a response to the views of people who are excited about genetic engineering, but seem unaware of or never mention existing ways to achieve the result in question. One reason they give is that a world of in vitro meat or vegetarianism would have fewer chickens, cattle, etc., in it, and they prefer animal lives worth living to the absence of animals.

      If those statements don’t describe you, then the tension mentioned in the post need not apply to your views.

      • http://twitter.com/creedofhubris Frederic Bush

        Hmm, perhaps I misread the focus of your piece. You are not addressing the people who want non-sentient meat, you are addressing the people in your third sentence, the ones who are worried about fewer animals? 

        That seems like a fairly obscure position to be discussing (I am not familiar with it at all) especially without a citation. 

      • Michael Vassar

        HUH?  One needs a citation in order to ADDRESS a position?!?

      • http://juridicalcoherence.blogspot.com/ srdiamond

        HUH?  One needs a citation in order to ADDRESS a position?!?

        No, the (valid) point is that he needs a citation if he is to survive the challenge that he is arguing a straw man.

    • http://juridicalcoherence.blogspot.com/ srdiamond

      In a similar fashion, death penalty protesters should alter the focus of their efforts to making sure that prisoners are properly entertained before their executions.

      The U.S. Supreme Court in Baze v. Rees (2008) in fact made the painfulness of capital punishment methods the issue under the 8th Amendment. This was widely criticized by both sides, but I’ve argued it is exactly the right focus for review. (See “Moralism: The State Bar, Capital Punishment, Euthanasia, and Suicide” — http://tinyurl.com/yzfpnmw )

  • JonatasM

    That’s a good idea, Carl. However, we should focus on all fronts: genetic alteration, cultured animal products (not just meat, but also milk, eggs, leather…), and meat substitutes such as Beyond Meat. So if any of these don’t work in time, we’ll still have an alternative. Cultured meat seems to be the best in the long-term anyway, because creation of farm animals is wasteful and primitive. Beyond Meat will probably not be able to substitute farm animals, but it may make vegetarianism more palatable in the short-term. It isn’t clear if genetic alteration would be accepted, neither by producers nor by animal activists, so it may not be a sure bet. People are also unreasonably paranoid about genetic modification of food, so who knows how they might react to that… I think it should definitely be tried, though.

  • David Pearce

    A note on language. By referring to other sentient beings as “livestock”, we endorse the property status of nonhuman animals. Use of the term “animals” as distinct from humans is pre-Darwinian: “Human and nonhuman animals” is more cumbersome but less loaded. Even my restrictive use of the term “we” to refer to humans rather than all sentient beings expresses an insidious anthropocentric bias.

    More substantively, I agree with Carl: breeding happier, less pain-ridden nonhuman [and human] animals is feasible now with existing technologies. Nonhuman animals on factory farms are so distressed they have to be prevented physically from mutilating themselves via debeaking, tail docking, (unanaesthetised) castration, etc. Any technology that reduces the burden of suffering in enslaved nonhuman [and human] animals is to be welcomed.

    More fundamentally, in what circumstances is one ethically entitled to harm another sentient being? What ethical weight should we give the plea of “But I like the taste” to the killing, harming or abuse of human and nonhuman animals? Natural science teaches us to aspire to a “view from nowhere”, an impartial God’s-eye-view. Presumably, a notional full-spectrum superintelligence could impartially access and weigh all possible first-person perspectives, e.g. factory-farmed pigs and meat-eating human diners, and act accordingly. Posthuman superintelligence is allegedly imminent; transhumanists sometimes worry about [human-]Friendly AGI. Yet is the notion of distinctively Human-Friendly AGI even intellectually coherent? (compare “Aryan-friendly AGI” or “Cannibal-friendly AGI”). Rather than aiming at humane exploitation, I think we should be aiming for high-tech Jainism, i.e. overcoming anthropocentric bias. In short, should humans devote our efforts to finding more humane ways to exploit other sentient beings, or instead finding ways to help them?

    • Michael Vassar

      What does the math say about what we should do, with plausible values plugged into the relevant variables?  I wish that I could find more people who actually advocated the consequences and implications of their principles.

      • David Pearce

        Bringing mathematical rigour to the discipline of ethics is admirable. Alas I’m not sure we can avoid first doing some philosophical spadework. In default of accurate neuroscanning, let us provisionally assume that any captive human or nonhuman animal that has physically to be prevented from mutilating himself or herself is profoundly distressed. If we are classical utilitarians, then it doesn’t take a four-sigma level IQ to calculate that the fleeting pleasures of the dinner table do not ethically outweigh the horrors that went into its production. Carl’s proposal, if implemented, would mitigate but not eliminate these horrors. No doubt life presents difficult ethical dilemmas that call for fine calculations and fancy maths; shutting down factory farms isn’t one of them.

      • Michael Vassar

        I disagree. Properly implemented, the proposal would eliminate, for instance, the need for physical restraints against self-mutilation. What lives are worth living is a profoundly difficult question, but while we are waiting, we can and should try to make as many as possible of the lives that will occur as pleasant as possible, which could potentially mean very very pleasant.

      • David Pearce

        Michael, as you know I’ve long advocated selective breeding of human and nonhuman animals for extreme happiness. This aspect of Carl’s proposal strikes me as admirable. But what’s lacking is any kind of acknowledgement that sentient beings should not be treated as property. By analogy, if we lived in a slaveholding society, then a white slave-owner might argue for selectively breeding happier slaves rather than for emancipation. But you would (I trust) argue that the time for emancipation is now.

    • http://juridicalcoherence.blogspot.com/ srdiamond

      Natural science teaches us to aspire to a “view from nowhere”, an impartial God’s-eye-view.

      Alas, you must truly believe that morality forms a part of natural science. 

      • David Pearce

        Whether value can be naturalised is a distinct question from the nonmoral instrumental ought cited above, i.e. insofar as we want to understand the properties of the natural world, we ought to abandon preferred or privileged reference frames.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Well, what does abandoning privileged reference frames in understanding the natural world have to do with including the welfare of animals in “morality”? Nothing—unless you think morality is part of natural science.

      • David Pearce

        A unified science must ultimately provide an account of the origin of our normative concepts. Insofar as I know anything at all, I know my agony is bad for me. Natural science suggests that no here-and-nows are ontologically special. The agonies of other subjects of experience of comparable sentience elsewhere seem less significant than mine; but this is an epistemological limitation on my part, not some deep truth about the world.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        So, you admit what you had denied: that you assign moral importance to animals because you believe morality is part of natural science.

        It’s true that any specialness we accord humans isn’t a deep feature of the natural world. But neither is any concern for “sentient” creatures in general, rendering the first point otiose.

        By the way, how do you define “sentient”? Are insects, for example, “sentient”? Just curious.

      • David Pearce

        Srdiamond, forgive me, but you missed my point, namely that the meta-ethical realist and anti-realist alike can use “ought” instrumentally. Meta-ethical antirealists who want to understand the world are not debarred from using normative concepts; it would be hard to do science without them.

        A much stronger claim would be that a God-like superintellect with perfect knowledge of all possible first-person perspectives could not behave unethically. Perhaps imagine a generalisation of mirror touch-synaesthesia:
        http://www.livescience.com/1628-study-people-literally-feel-pain.html
        A convergence hypothesis would run counter to the orthogonality thesis assumed in most discussions of AGI.

        “Sentience”? Just the standard sense of the term found in any dictionary, i.e. the ability to feel, perceive, or be conscious, or to have subjective experiences: http://en.wikipedia.org/wiki/Sentience
        The ganglia of insects may well experience phenomenal pain. But in the absence of a central nervous system, a unitary subject of experience would seem impossible. Thus the head of some species of locust can carry on feeding while the tail segment is being devoured. Antispeciesism is not the doctrine that “all species are equal”, rather that it is irrational to discriminate against beings of comparable sentience merely on grounds of species membership. 

      • http://juridicalcoherence.blogspot.com/ srdiamond

        We’re talking past each other on meta-ethics. But let me move to the other point, which is particularly interesting. Let’s assume we want an ethics like a scientific theory: parsimonious, without ad hoc exception. I strongly disagree this is a fundamental ethical-theory desideratum (that’s what I take our earlier discussion to be about), but let’s grant the point hypothetically. Then your argument is that singling out humans is ad hoc.

        But why is it not ad hoc to single out beings who are a “unitary subject of experience”? Humanity is probably not a natural kind (the best thinking, in my opinion, is that species are really individuals) but surely sentience isn’t a natural kind either, and it lacks even sharp boundaries (or a natural metric of degree, if you hope to speak of “comparable sentience”).

        Antispeciesism is not the doctrine that “all species are equal”, rather that it is irrational to discriminate against beings of comparable sentience merely on grounds of species membership.

        Then the obvious reply against “animal rights” is that there are no species whose sentience is comparable in degree to humans’.

        (And if sentience is really the basis, then what of the rights of idiots?)

      • David Pearce

        There is nothing ad hoc about prioritising unitary subjects of experience because there is no ontological integrity to, say, a dozen ganglia each in its own body segment. Utopian technology may one day deliver the well-being of all ganglia and even individual nerve cells; but this happy day is a long way off.
         
        Sadly, the claim that no species exists whose sentience is comparable in degree to humans is unsupported. Microelectrode studies confirm that the most intense forms of sentience are mediated, not by e.g. the brain structures supporting generative syntax, but by the evolutionarily ancient limbic system mediating our core emotions. What grounds have we for supposing that the larger limbic system of, say, a sperm whale supports less intense consciousness? 
         
        Less controversially, a pig is of comparable sentience – and for what it’s worth, intelligence – to a two-year-old prelinguistic human toddler. Insofar as prelinguistic toddlers deserve love, care and respect, so do pigs. And to the counterargument that only toddlers have the “potential” to become adult humans, we may recall that toddlers with a progressive disorder (and thereby lacking in cognitive potential) are not thereby reckoned less worthy of love, care and respect. Likewise pigs – on pain of arbitrary anthropocentric bias.
         

      • http://juridicalcoherence.blogspot.com/ srdiamond

        There is nothing ad hoc about prioritising unitary subjects of experience because there is no ontological integrity to, say, a dozen ganglia each in its own body segment.

        That’s what I’m trying to ask you: in what consists this ontological integrity? I would think it would need to be some kind of metaphysical or natural kind, but sense of unity is a matter of degree and is without apparent measure.

        What grounds have we for supposing that the larger limbic system of, say, a sperm whale supports less intense consciousness?

        Same question. What is the ontology of this “consciousness” that you apparently equate with “sentience”? How can sentience be “intense”? Are we to understand what this means based on personal subjective experience, or do you have an objective definition?

      • David Pearce

        When one rapidly withdraws one’s hand from a hot stove, the hand withdrawal often precedes the pain. This doesn’t entail that pain didn’t play a causal role in my hand withdrawal: maybe peripheral nerve ganglia experience raw micro-pain. But if so, it’s encapsulated. My CNS does not have direct access to peripheral nociceptors. So there is no “ontological integrity” in play here. Likewise with the locust I alluded to above whose head continues feeding while the tail is being devoured by a predator; there is no unitary subject of experience whose interests deserve consideration, just the experiences of its constituent nerve ganglia. 
         
        How 80 billion odd interconnected but discrete quasi-classical neurons in the CNS solve the phenomenal binding problem (cf. http://lafollejournee02.com/texts/body_and_health/Neurology/Binding.pdf) and generate bound objects and a fleeting synchronic unitary self is a very deep problem that IMO cuts to the heart of Moravec’s paradox, explains why classical digital computers will never be nontrivially conscious, and rules out mind uploading. But this discussion would take us a long way from the morally urgent question of factory farming.
         
        Sentience? Well, the ganglia of a roundworm are minimally sentient; a pig or a toddler intensely so. Within the individual, any seasoned user of psychoactives will report interventions that either dim or amplify the intensity of experience. But the ethically relevant point is that the molecular substrates of our most intense experiences are far removed from e.g. the mutations of the FOXP2 gene that helped give rise to the generative syntax that “makes us human”. Bentham put it well over 200 years ago: “The question is not, Can they reason? nor, Can they talk? but Can they suffer?” 
         
         

      • dmytryl

        David Pearce: could be pain-induced conditioned reflex, on the signals that get to the brain ahead of the pain. For burn pain, the heat has to traverse material of high thermal capacity and low thermal conductivity. You’ll feel warmth first, maybe other tactile sensations from the vapourizing water at the surface, and only when the nerve endings exceed some temperature will you feel pain. When grabbing something hot unknowingly, there’s a lot more delay than when you touch something you know may be hot.

      • Cyan

        David Pearce: Nociceptors don’t suffer — they send signals, just like other neurons. To suffer, one needs a functioning insular cortex; lesions there can cause pain asymbolia, a condition in which pain has no aversive aspect even though those affected* are aware of pain location and intensity.

        * I almost called them “sufferers”, but that’s exactly what they aren’t.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        David Pearce,

        The link you supplied is broken, and the account you provide unilluminating. You provide an example of “ontological integrity,” but that doesn’t help me understand what it is. I could have guessed at your example; but my question is what makes you think this distinction is ontological. The one illuminating and responsive sentence is “Within the individual, any seasoned user of psychoactives will report interventions that either dim or amplify the intensity of experience.” This is the only information you provide about what you mean by intensity of experience. As I suggested, it’s a subjective criterion. Still, it’s not clear what constitutes it.

        But understanding over this wide a philosophical divide is hard. Let me try to reduce it to a simpler, concrete question. What does the experience of taking LSD have to do with the size of the limbic system? That is, why do you think whales have more intense experiences than humans, if “intense” means the same thing it means when an LSD-taker says, “Wow, that was intense!”?

      • Tim Tyler

        > Alas, you must truly believe that morality forms a part of natural science.

        Natural science *studies* morality.

      • David Pearce

        Srdiamond, apologies, I didn’t want to get sidetracked into deep philosophical waters rather than focus on the morally urgent issue underlying Carl’s post. Ontological integrity? What is the difference between a mere structured aggregate of “mind dust” and a unitary subject of experience? What distinguishes a split-brain patient with a severed corpus callosum – or an invertebrate with multiple ganglia in its different segments – from a brain that supports a fleetingly unitary phenomenal self? Dreamless sleep aside, why doesn’t 
        http://en.wikipedia.org/wiki/Mereological_nihilism 
        hold for the 80 billion odd seemingly discrete classical neurons of the vertebrate brain? 
         
        Intensity of experience? Establishing the neural correlates of consciousness is challenging for an agent like LSD which is undetectable three hours after ingestion. So let’s focus on phenomenal pain. Nonsense mutations of the SCN9A gene completely abolish the capacity to experience phenomenal pain. Other alleles are associated with an unusually high or unusually low intensity of pain experience in response to a given noxious stimulus: 
        http://www.ncbi.nlm.nih.gov/pubmed/20212137. On Carl’s proposal we would presumably want to ensure nonhuman factory-farmed animals have ultra-low pain alleles of SCN9A – if not nonsense mutations. A strong ethical case can be made for preselecting benign low-pain alleles of SCN9A via PGD for future humans: 
        http://clinicaltrials.gov/ct2/show/NCT01507493 
         
        How can we appraise the comparative severity of phenomenal pain in sperm whales [and pigs etc] and humans? Although we should be wary of facile “sizism”, presumably the possession of a “pain centre” with 100,000 interconnected neurons might, other things being equal, be expected to yield a maximum intensity of pain greater than the agony ceiling of a pain centre of 25,000 structurally and functionally identical interconnected neurons with the same gene-expression profile, etc. 

        Yes, there are a host of complications here. I was simply leaving the question open – and I very much hope I’m wrong.
         

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Yes, there are a host of complications here. I was simply leaving the question open – and I very much hope I’m wrong.

        Wrong about what? I’m not sure I understand. Wrong about concluding that we should maybe care more about whale pain than human pain?

        Although we should be wary of facile “sizism”, presumably the possession of a “pain centre” with 100,000 interconnected neurons might, other things being equal, be expected to yield a maximum intensity of pain greater than the agony ceiling of a pain centre of 25,000 structurally and functionally identical interconnected neurons with the same gene-expression profile, etc.

        I don’t find the inference from size to phenomenal intensity at all compelling. Do you really think it takes a lot of neurons to generate high intensity? Or, more to the point, that the function of all those neurons is to intensify the phenomenon rather than to distinguish variations?

        I think that by pretty much the only coherent use of language, an ant being killed by burning with a cigarette is in excruciating pain. How does the phenomenology compare to human pain? Is it less pain because the ant doesn’t perceive itself as a unitary being? What makes you think pigs or whales do? Perhaps thinking of oneself as a unitary being is a result of a kind of reflective ability (at the same time riddled with cognitive error) that nonlinguistic animals lack?

        But most to the point, what grounds do you have for thinking there’s such a thing as “phenomena”? (See my essay on the supposedly hard problem of consciousness and the nonexistence of sense data, “Is your dog a conscious being?” http://tinyurl.com/bhbru8l )

      • David Pearce

        srdiamond, apologies for any confusion. I was simply raising the possibility that nonhuman animals with larger pain centres than humans feel pain more intensely. I agree that generating pain of high intensity doesn’t take a lot of neurons. But other things being equal – and depending on how the brain solves the binding problem – 10,000 interconnected pain-processing neurons can presumably generate a greater phenomenal intensity of suffering than 1,000 interconnected pain-processing neurons. More generally, the greater diversity of phenomena that mature humans can feel distressed or miserable about compared to nonhuman animals seems to be a function of the projections of neurons from the limbic system to the neocortex rather than any greater diversity of neuronal cell types in the limbic system itself. 
         
        Ant pain? I certainly favour erring on the side of caution. See e.g. 
        http://www.utilitarian-essays.com/insect-pain.html  
        But whereas phasing out suffering in higher vertebrates is feasible with existing technologies, doing the same for insect pain will require the utopian technology of our successors next century(?) and beyond – and a momentous ethical revolution to match.

      • David Pearce

        srdiamond, I certainly wouldn’t dismiss Moore’s Open Question argument. Indeed, on the face of it, the argument is decisive. Whatever apparently dreadful or wonderful phenomenon exists in the natural world, one can still ask if it’s (dis)valuable. However, what is not an open question, at least for me, is whether my unbearable distress is disvaluable for me. And I’m arguing that it’s only an epistemological limitation on my part, not some deep ontological truth about the world, that leads to any failure in my recognition that (it is objectively the case that) your unbearable distress is disvaluable too. The badness of your agony is not an open question to a mirror-touch synaesthete – or a God-like superintelligence who could apprehend all possible first-person perspectives.

        A counterargument might be that the existence of (dis)value in the world is inconsistent with the naturalistic third-person ontology of physical science. If eliminativist materialism were the case, this would be so. However, we’re not zombies: first-person facts, not least the existence of (dis)valuable experiences, don’t possess some sort of second-rate ontological status. If we assume Strawsonian physicalism, they are as much a part of the natural world as the rest mass of the electron. I can’t define the normative aspect of disvaluable experience in terms of anything more semantically primitive. But if, for example, you try and hold your hand in ice-cold water for as long as you can, the experience is not motivationally inert. What property of the experience causes you to withdraw your hand?

        Anyhow, this philosophising takes us a long way from the morally urgent question of nonhuman animal suffering. The worst source of severe and readily avoidable misery that exists in the world, today, i.e. factory-farming, is wholly manmade. Unless one is a complete moral nihilist, we have an obligation to stop it.

    • gwern0

      > Nonhuman animals on factory farms are so distressed they have to be prevented physically from mutilating themselves via debeaking, tail docking, (unanaesthetised) castration, etc.

      But this raises a question. Carl says that we can do this good thing “initially at low marginal cost in other traits.” Your quote, which is true as far as I know, suggests that farm animals already are causing substantial economic losses due to their suffering/pain: someone or something has to do the debeaking and tail docking and castration, which is going to cost serious money over the billions of animals, and probably these procedures are merely reducing the losses from suffering, not eliminating them entirely.

      Given that this completely non-ethical motivation already exists and has existed for decades (none of that being new, again as far as I know), and is particularly salient the more mature the meat industries become and the thinner their margins become, *why haven’t they done this already for purely selfish reasons*?

      Has it simply not occurred to them, ‘hey, maybe we can breed for docility and lack of pain’? That seems a bit implausible given how well and systematic they seem to be in optimizing all the other factors.

      Is it not doable? But I agree with Carl, this sort of breeding seems like it should be perfectly doable.

      Or… does it come with very expensive side-effects, and the debeaking etc represent the optimal tradeoff for them? Is this another case of  Brand’s _Pain: The Gift Nobody Wants_? If so, this is a strong suggestion that this breeding will be very expensive in its consequences since they haven’t done it despite what seem like substantial PR benefits from being more humane, fewer distressed animals, fewer bloody procedures that activists can film (and also remember the savings from not having to do them in the first place), etc…
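      Whether such breeding is "perfectly doable" can be sanity-checked against the standard breeder's equation from quantitative genetics, R = h² × S: the per-generation response in a trait equals its narrow-sense heritability times the selection differential. The following sketch uses purely illustrative assumed values (h² = 0.3 for a hypothetical welfare index, truncation selection keeping roughly the top 20% of animals, which corresponds to a selection intensity of about 1.4), not measured data from any real breeding program:

```python
# Sanity check of cumulative response to truncation selection, using the
# breeder's equation R = h^2 * S, with selection differential S = i * sigma_p.
# All parameter values below are illustrative assumptions, not measured data.

h2 = 0.3        # narrow-sense heritability of a hypothetical welfare index
i = 1.4         # selection intensity for keeping roughly the top 20%
sigma_p = 1.0   # phenotypic standard deviation (trait measured in SD units)

def cumulative_response(generations: int) -> float:
    """Total shift of the population mean, in phenotypic SDs.

    Assumes heritability and variance stay constant across generations;
    in practice heritable variation erodes under sustained selection,
    so this overstates long-run gains.
    """
    per_generation = h2 * i * sigma_p  # R = h^2 * S, with S = i * sigma_p
    return per_generation * generations

for g in (5, 10, 20):
    print(f"after {g} generations: {cumulative_response(g):.1f} SD")
```

On these assumptions the mean shifts roughly 0.4 SD per generation, so a trait could move several standard deviations within a decade of pig or chicken generations, consistent with the "many standard deviations" claim in the post; the open question gwern raises is whether correlated responses in commercially important traits would eat those gains.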

      • VV

        It seems quite obvious that animal breeders have already selected for docility as much as possible. That’s pretty much what domestication is about. Try to keep a wild buffalo or a wild boar in a pen and see what happens.

      • David Pearce

        Talk of the case for moral clarity can make one sound like a religious conservative – and probably reads jarringly out of place on a blog like overcomingbias. However… 
         
        If we were discussing human child abuse, we’d all give short shrift to a self-professed utilitarian who urged breeding happy toddlers who didn’t mind being abused, and who in the meantime would carry on as before because he likes the taste, etc. A convergence of different indices suggests that a pig is at least as sentient as a human toddler. Yet humans do things to pigs that would get us locked up for life if our victims were human. Insofar as we aren’t complete moral nihilists, there are situations where one simply says: this is morally indefensible and should stop. The greatest source of severe, chronic and readily avoidable suffering in the world today is factory farming. Is it really ethically acceptable to wait several decades until selective breeding, in vitro meat or whatever brings our systematic abuse of sentient beings to a close?

      • gwern0

         I don’t see how that is responsive to my point. Are you conceding that this breeding will indeed be very expensive, but this doesn’t matter because of the moral desirability?

    • David Pearce

      Cyan, I wasn’t making the claim that peripheral nerve ganglia suffer [“David Pearce: Nociceptors don’t suffer — they send signals”], merely leaving open the possibility that they – and the multiple ganglia of segmented organisms without a central nervous system – undergo micropains. Noxious stimuli trigger the same opioid/dopamine neurotransmitter responses in roundworms and in people. And recall too the writhing tail a lizard sheds to distract a predator. This point arose in the context of srdiamond’s question of whether it is arbitrary to prioritise unitary subjects of experience.

      The insular cortex and suffering? Alas, suffering is probably more evolutionarily ancient. Although the anterior insular cortex seems to be necessary for empathetic pain perception (cf. http://www.ncbi.nlm.nih.gov/pubmed/22961548 ), intensely unpleasant experiences are possible in the absence of any functioning insular cortex at all (cf. http://www.jneurosci.org/content/29/9/2684.full ).

    • Drewfus

      “More fundamentally, in what circumstances is one ethically entitled to harm another sentient being? What ethical weight should we give the plea of “But I like the taste” to the killing, harming or abuse of human and nonhuman animals? Natural science teaches us to aspire to a “view from nowhere”, an impartial God’s-eye-view.”

      Natural science can get us to unconscious cows or in-vitro meat products – then the ethical conflict is made null and void rather than answered. Science and engineering give us the means to side-step ethical issues, whereas ethics is what we do in lieu of technology. The path from sacred cows, to supermarkets full of cheap, quality meat from cows, to ultra-cheap in-vitro meat is as much about overcoming ethical ideas as it is about discovering and following them.

      • David Pearce

        Drewfus, are you arguing that we ought not to use value-judgements? This would at best be paradoxical. If, on the other hand, you are arguing that science offers technical fixes for moral dilemmas, and technology can enable us to lead ethical lifestyles without making any personal sacrifices, then I’d broadly agree with you. But one striking exception, for now at least, is eating meat and animal products. Commercialised gourmet in vitro meat is still 15-20 years(?) away. So too, even on incredibly optimistic timescales, is a world of genetically engineered happy pigs. Rationalisations aside, if one eats meat today, then one is paying for death and suffering on an industrial scale.
         
        I’m not entirely clear what you have in mind by “overcoming ethical ideas”. What would be the point of developing and commercialising cruelty-free alternatives to the horrors of factory-farming if we didn’t recognise, dimly or otherwise, that harming sentient beings for frivolous purposes is morally indefensible?

      • Drewfus

        “Drewfus, are you arguing that we ought not to use value-judgements? This would at best be paradoxical.”

        Yes, possibly not. Why the paradox? What is the source of our presumably improving value-judgements over time, if not the luxury afforded by improved economic circumstances? Or take the case of Alan Turing: why was Turing prosecuted for homosexuality (more or less) in the 1950s, whereas today we leave that alone? We have superior ethics now, right? So what was the source – or did our superior ethics on this matter and others just develop ex nihilo? If you say something about the genetics of homosexuality, I’ll say something about value-judgements being redundant.

        Here is my ‘Ethics in a Nutshell’:

        1. Ethical behavior is the difference between what we do as individuals when completely visible publicly, as compared to when completely anonymous. We restrain our own behavior from acts that harm others directly, to promote our social reputations, and prevent retribution, punishment, rejection or abandonment by others. Ethical behavior is then just our self-interest at work, regarding anything involving social interaction. It is a self-regulating system. ‘Ethics’ is a redundant overlay concept, although it does form the basis of morality-justified hierarchies, most notably organized religion – ethics is about power. By codifying restraints on ‘selfish’ behavior that are already within the bounds of self-interest, ethics justifies itself, but from another perspective it is a type of cream-skimming.

        2. Ethics is a code word for the anonymous philosophy of Restrictionism. Books, alcohol, marijuana, revealing clothing/swimwear, contraceptives, free trade, stem cell research, free speech, ‘hate speech’, blasphemy, GM crops, anything deemed ‘unnatural’ – and so on and on, is a target for this philosophy. The favorite phrase of the moralist is “[fill in blank] raises ethical issues”. It means another opportunity for banning something has been spotted. Unmet demand can then be exploited politically. Ethics claims to be about the search for ‘the good’ – I don’t accept this face-value definition, and the clash between Restrictionism and Capitalism is clear enough.

        3. Ethics is about signalling wealth – even if you don’t have any. Hands up all the poor people who want to ban the supply of existing meat and other animal products. Now hands up all the rich people for the same question. See what I mean? But of course some poor people did put their hands up – it is not a costly signal, but the perception of adherence to ethical principle makes it seem plausibly genuine – to some.

        4. Ethics is about discovering that moral standards once deemed critical to society are unnecessary overhead, and then changing tune to stay relevant. Alan Turing and the later change of heart on homosexuality is the perfect example, and ethics is more pragmatism than principle.

        5. Ethics is impotent. The attitudes of powerful men of the Enlightenment era, like Governor Arthur Phillip, meant that the colony of Australia never allowed slavery – making Australia even more a country of the Enlightenment than the United States, in one regard. It is nicely depicted, with its aftermath, in http://www.youtube.com/watch?v=aIHxebgM-5I (in range 35:00-43:05). So what does such a civilized policy tell us about the consequences for the welfare of the Aboriginal Australian since that time, or a contemporary comparison with African Americans, whose predecessors had suffered as slaves? Probably nothing. In the long-run at least, it appears to have made no difference.

      • David Pearce

        Drewfus, the paradox – to put it euphemistically – in arguing that “we ought not to use value judgements” is that this plea itself expresses a value-judgement. We can’t have it both ways. Your account of ethics deserves a fuller treatment than I can offer here, so for now just a couple of observations.

        First, advocates of “ahimsa”, and a cruelty-free vegan lifestyle, aren’t predominantly the rich. Rather they are mainly some of the poorest people on the planet. Sophisticated Westerners write dismissively of “sacred cows”; but many Indians are less prone to arbitrary anthropocentric bias than Western species supremacists.

        Secondly, most if not all of the practices we now find morally offensive (e.g. human sacrifice, genocide, persecution of homosexuals as an abomination in the eyes of the Lord, etc.) presuppose beliefs – and indeed entire conceptual frameworks – that are false. Much more controversially, I’d argue that we may cheat Hume’s Guillotine with the following argument. In a nutshell, my agony has a primitive, irreducible normative aspect. This irreducible normative aspect might seem to have no bearing on the equivalent agony of other sentient beings: why should I care? But any such indifference on my part is a mere delusion of perspective, akin to the genetic fitness-enhancing perception that I’m the centre of the universe. Science teaches us that there are no ontologically privileged here-and-nows. So inasmuch as my agony is bad for me, then equivalent agony is bad for anyone, anywhere. However, a lot more needs to be said here to make any kind of value-realism work… 

      • http://juridicalcoherence.blogspot.com/ srdiamond

        I’d argue that we may cheat Hume’s Guillotine with the following argument. In a nutshell, my agony has a primitive, irreducible normative aspect.

        That’s exactly Moore’s answer to his (often wrongly derided) “open question.” (See my “Habit theory of morality: moral judgments are always false” — http://tinyurl.com/7dcbt7y )

        The question is, What would prompt anyone to believe that there’s such a thing as an irreducibly normative aspect?–except as an ad hoc move to save belief in objective morality/moral realism?

        The whole direction of science is, as you say, toward an Archimedean perspective, but it is also (or is it part of the same tendency?) away from infusing nature with norms.

      • Drewfus

        “the paradox – to put it euphemistically – in arguing that “we ought not to use value judgements” is that this plea itself expresses a value-judgement. We can’t have it both ways.”

        Is it for practical reasons we can’t have it both ways, or because it is not acceptable to Western intellectual standards? Why can’t I accept the paradox and live with it, if I’m otherwise happy with the rule of not making value-judgements? I’m interested both regardless of and in the context of your reference to the “arbitrary anthropocentric bias [of] Western species supremacists”. I guess a cynical comment regarding a non-Western ethic must end my credibility, no matter how critical I am of Western ethics in general. I can’t win and you can’t lose. Presumably you’re not missing the irony of how Western your political correctness is.

        One thing interesting to me (and I am more interested in the interesting than in judgement) is the freedom of the ethicist to bind his/her evaluation of the good to prescriptions or instructions for ethical behavior, as opposed to outcomes of behavior. Ethics seems to me to be the conceptual opposite of the feedback mechanisms of a plant – we make judgements in advance. Ethics is feedforward, and most likely highly integrated with our conceptual ability to see into the future, to predict the future, and the ability to delay present gratification for the benefit of our future selves. Are cultures with a more precise concept of time also more ethical as a result? Is our capacity for waiting a strong proxy for our ability for prediction, which in turn is built on our capacity for visualization? If yes, then what improves our visualization skills? Stories (by improving imagination)? Clocks (by improving our sense of time)? Mirrors (by helping us build our self-image)?

        I see the development of ethics as closely tracking our ability to predict the social outcomes of shared or partially shared behaviours, albeit hidden behind elaborate theories required to make the whole endeavour acceptable to the Western mind (and possibly other types). If this prescriptive view of ethics is incorrect, then what justifies the seemingly arbitrary binding of the good with behavioral advice and pre-judgement, rather than a trial-and-error determination of the good based on measuring behavioral outcomes? Do the domains of human activity that ethics seeks to instruct enjoy a similar freedom, or does this hint at the very point, namely that its power over us is mostly due to the lack of constraints we place on it – a possibility not available in other domains where some sense of separation of powers is in effect (voters–government, consumers–producers)?

        Culture possibly uses this rebinding technique more generally. Reports from the carers of wild or feral children (those raised in part by non-human species) or children raised in extreme isolation (e.g. Genie Wiley) indicate an unusual response to conditions almost all of us would perceive as very cold, such as frolicking naked in the snow, or getting into a cold-water tap bath on a cold day. The sensation of cold is apparently absent in these children in these cases, indicating that our sense of cold is partially culturally determined. Perhaps our default biological response is to feel cold when our core body temp falls significantly below normal, but affluence results in a shift from body temp to rate of heat loss, or from present temp to predicted temp, justified by the internalized knowledge that rebinding our concept of cold from a sensory to a more abstract basis results in the preferred outcome – we put on more clothes or increase the room temperature. A big stretch, to be sure, but I see parallels between this and the ‘technology of ethics’.

      • Drewfus

        “Perhaps our default biological response is to feel cold when our core body temp falls significantly below normal, but affluence results in a shift from body temp to rate of heat loss, or from present temp to predicted temp…”

        An equivalent mechanism for hunger would entail a shift in our internal (functional) concept from actual hunger (low nutrient levels, especially blood sugar) to predicted hunger, which might compare current energy consumption to unprocessed food levels. The shift works because calories are easy enough to obtain, so we feel hungry in advance of a calorie deficit, not when it actually occurs.

        So widespread overeating might be due to a culture-wide physiological shift from hunger-as-actual to hunger-as-predictive. I’ve actually experienced this shift myself – albeit from predictive to actual, not actual to predictive. Drawing inspiration from one of Ray Kurzweil’s books, I semi-starved myself for a few days until sugar cravings had been conquered. My weight then gradually declined until I was quite thin (maybe too thin). What I noticed in this period was that my perception of hunger changed from a craving or any sort of ravenousness to a slight light-headedness and wobbly knees. This meant I really was hungry, in the “old-fashioned” sense, and that I should eat for safety and cognition reasons more than as a way of eliminating the hunger sensation.

  • Muga Sofer

    Funnily enough, “meat substitutes” already work.

  • Muga Sofer

    I’m pretty sure that, while coherent, the notion of human-friendly AI is not particularly distinct from animal-friendly AI, since our morality values the happiness of all minds AFAICT.

    • Pablo

      Muga, it is clear that human morality, even if coherently extrapolated, does not value the happiness of all minds equally.  Yudkowsky himself has stated explicitly that he does not care about frog pain.

      More importantly, a metaethical theory that constructs values as what an arbitrary class of beings (“we”) value seems to rest on a hopeless foundation.

      • JonatasM

        Agreed. However, one might foresee that, under that theory’s provision of extrapolating values with limitless intelligence, wrong values would be discarded; the whole point of the theory then vanishes, as it would be reduced to standard utilitarianism or my formulation thereof. Despite being an exercise in futility, Coherent Extrapolated Volition may serve to appease criticisms from ignorance, but it also seems quite useless, as it is not an ethical theory but a recipe for finding out what an ethical theory is.

      • VV

         Is Coherent Extrapolated Volition even well-defined?

        AFAIK, it is supposed to be a meta-ethical theory for aggregating the moral values of different moral agents, but the last time I asked how it actually does that, they (Sotala) answered, “That’s an open question.”

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Muga, it is clear that human morality, even if coherently extrapolated, does not value the happiness of all minds equally.  Yudkowsky himself has stated explicitly that he does not care about frog pain

        What makes Yudkowsky an authority on the coherent extrapolation of human morality (whatever that’s supposed to mean)?

        I gather that some supposedly “advanced thinkers” actually believe there’s some definite universal human morality that merely needs to be discovered (and “extrapolated”).

        Our “moral” principles (or principles of integrity, as I call them) are tools, not truths. ( http://tinyurl.com/7dcbt7y ) Now, it’s not possible to say in advance with confidence which principles of integrity will help a given person maintain a sense of integrity, and it’s possible to imagine that some persons will come to include frogs in their domain. I could imagine, perhaps, that a veterinarian would find such an all-encompassing welfare principle useful. But in most cases, it is not credibly useful for purposes of self-control. Hence, it is likely to serve hypocritical signaling functions alone.

      • Pablo

        I mentioned Yudkowsky’s dismissive attitude towards the pain of certain sentient beings as a way of illustrating my point that human morality does not value the happiness of all minds, contrary to what Muga claimed.  The counterexample was particularly appropriate since it was provided in the context of a discussion of Yudkowsky’s “friendly AI” and, by implication, coherent extrapolated volition.  I didn’t mention Yudkowsky as an “authority on the coherent extrapolation of human morality,” as you misleadingly imply I did.

        You seem to believe that drawing a distinction between moral principles as truths and moral principles as “tools” is important for the problem I raised.  This is not so.  Even if you regard moral principles as “tools”, you need to explain why you define the function of such tools in terms of the interests, or “integrity”, of human beings, rather than the larger community of sentient beings.  I couldn’t find any such explanation in your comment or in the blog post that you cited.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Even if you regard moral principles as “tools”, you need to explain why you define the function of such tools in terms of the interests, or “integrity”, of human beings, rather than the larger community of sentient beings.  I couldn’t find any such explanation in your comment or in the blog post that you cited.

        “Function” is biological and results from biological evolution. The function of morality isn’t the interests of the community of sentient beings, because that community lacks a biological interest. There’s no species selection, certainly no “sentient being” selection. I discuss this further in “What’s morality for?—Integrity versus conformity” — http://tinyurl.com/b4y2a8p

        But if I can guess, what you want is a justification for using morality as a tool for particular ends. There is none. That’s why there’s really no morality: only principles of integrity. That’s the psychological purpose I theorize they serve; it’s in no way a metaphysical demonstration. I say this is what principles of integrity really help with. Other uses will harm your self interest. (See “Morality and free will are hazardous to your mental health” http://tinyurl.com/a9jpgqk ) But I don’t accord this “advice” inherent normativity.

      • Muga Sofer

        Why on earth should we try to follow evolution’s “functions” for things?!

        Relevant link: http://lesswrong.com/lw/l0/adaptationexecuters_not_fitnessmaximizers/

      • Stephen Diamond

        If you understand the biological function, it often can help you choose whether to respect it. If a trait serves a vital biological function that can’t be otherwise served, knowing the biological function is decisive.

        My claim is that “moral” principles of integrity serve a vital function for personal adaptation—which moral realism subverts— http://tinyurl.com/7dcbt7y

      • Muga Sofer

        Remember, morality IS OUR GOALS. If extrapolated morality is maladaptive – and it probably is, to an extent – that doesn’t mean it serves OUR goals to become more adaptive. Indeed, it almost by definition does not.

        Sorry about the all-caps, I don’t know how to do formatting here.

      • Muga Sofer

        That assumes that the utility we would gain from better evolutionary adaptiveness outweighs the disutility of ignoring how our desires extrapolate, which seems highly unlikely.

      • Muga Sofer

        I’m going to go ahead and say Yudkowsky is wrong there. People care less about flies having their legs pulled off than about foxes getting tortured than about chimps than about humans, but they do care AFAICT.

      • sclamons

        Not enough to stop them from slaughtering and eating them. Farmers have been raising livestock and killing them for food for millennia — that doesn’t smell like evidence of people caring about animal welfare.

      • Muga Sofer

        And people kept slaves for millennia – proof that people don’t care about humans?

  • Oscar Horta

    I’m not very optimistic with regard to in-vitro meat. And in fact, even if successfully developed, it may avoid much suffering and many deaths, but it wouldn’t do much to question speciesism.
    But the alternative of breeding happier animals seems to me implausible and, even if successful, quite unethical. I claim this for the following reasons.

    First, even if feasible, given the way the meat industry works it’s just unbelievable that they would try to achieve this (while there is, in contrast, some research being done on in-vitro meat, and much effort being put into making more people vegan — with success, since the number of vegans grows every year).

    Second, the conditions in factory farms, transport, etc. are such that the claim that an animal could ever be happy in them seems far more implausible not only than the chance that in-vitro meat is developed, but also than the chance that a huge part of the population turns vegan before that happens. I believe the latter is quite implausible in the foreseeable future, which shows how implausible I think the proposal of making factory-farmed animals happy is. Of course, one way animals might not suffer is if they were unconscious, but no matter how much you select nonhuman animals, it’s impossible to select for unconscious animals you can breed and kill on farms (maybe in labs you could, but then maybe we should go directly for in-vitro).

    Third, if you make animals in factory farms happy, then you harm them when you kill them. This is why it’s unethical to eat animals who have been happy, such as some free-range ones (together with the fact that they are killed painfully in slaughterhouses).

  • Douglas Knight

    This is tangential, but the units in the quote about milk production are wrong. Those are yields per year, not per lifetime. This is clear in the second citation, or by considering scale. I cannot find my source, but somewhere I obtained the belief that lifetime milk production has not changed much in the past century.

    • Carl Shulman

      “the belief that lifetime milk production has not changed much in the past century.”

      Meaning the cows are slaughtered much younger now? One could check that by looking at slaughter ages then and now.

  • dmytryl

    I did discuss this on some other forum a month or so ago. My proposal was to simply breed for small brain size; the cows can probably survive with almost no brain (especially the ones that spend all their lives in a pen), an ethical equivalent of a control computer for the meat in a vat. It may also raise efficiency.

  • guest

    The article assumes it is possible to quantitatively state an animal’s quality of life FROM THE VIEWPOINT OF THE ANIMAL, but this can only go as far as the viewpoint of humans regarding animals as a foodstuff.

    Animals haven’t specifically evolved as foodstuff, so being bred to fulfil that role for humans is very likely not in their interest – no matter how much ‘pleasure-drug’, artificial or otherwise, we may inject into them. I’m reminded of the animal bred as foodstuff in ‘The Restaurant at the End of the Universe’ by Douglas Adams: the only way to get the animal’s point of view is to get it to clearly articulate it, but the by-product of that would be intelligence. So this article’s breeding approach would ultimately appear to be a conundrum.

    Also a factor (ignored by the article?) is epigenetics, which, although currently quite mysterious, is nevertheless proving significant with regard to traits exhibited by offspring even generations beyond the subject animal. And the relatively hit-and-miss process of breeding almost guarantees that intermediate forms towards any aim suffer all manner of unbalanced effects; even our current livestock suffer many ill effects from human intervention in their breeding, and it seems naive to suggest animal welfare will be prioritised above profit.

    Going back to pleasure: if it manifests via reward mechanisms and chemicals in the animal’s brain because the animals are contrived to engage in behaviour that promotes passing on their genes (or a delusion of same), drugs are still involved, and adaptation to a drug erodes its potency until a threshold is reached where the dose required produces overdose and ill effect: another dead end.

    • http://entitledtoanopinion.wordpress.com TGGP

      The people who began breeding animals didn’t know anything about epigenetics, yet they were phenomenally successful with regard to their purposes (as Carl discussed in his post). I guess the term, like “mirror neurons”, has unfortunately spread far enough in popular culture that lots of people use it without really adding anything to the discussion.

  • real deal organic

    I’ll stick with hunting. It’s the real organic. The rest of you can have your petri dish “meat.” 1 weekend, 1 shot, 6 months worth of real meat. Pretty efficient.
     
    What is more humane than placing the bullet straight through the heart? Never saw it coming. Painless. The deer lived a nice life in the woods, not a CAFO. Oh yea, hunting is low status. Not something academics find sexy. At least the meat is real.

    • Muga Sofer

      >What is more humane than placing the bullet straight through the heart? Never saw it coming. Painless. The deer lived a nice life in the woods, not a CAFO

      Yup, hunting is totally humane. Same reason we don’t prosecute serial killers, amirite?

  • TruePath

    I’ve always felt we should drug livestock to improve their lives. We can easily and cheaply keep them woozy on morphine derivatives: carfentanil and the like can be diluted enormously and still reach a normal dose, promising low cost, and the short lifespan of livestock suggests we could keep them high pretty much the whole time.

  • jhertzli

    I’m reminded of the Dish of the Day in the Hitchhiker’s Guide to the Galaxy series, an animal that wanted to be eaten and would say so clearly. When it slaughtered itself, it was careful to be very humane.

  • VV

    Wireheading for animals!

    A few points:

    1) All domestic animals have already been selected for docility. Of course, that was done to optimize ease of handling, not for the happiness of the animals, but the two things are correlated. Surely you don’t want to deal with angry cows, do you?

    2) Selection for multiple traits doesn’t generally come at a small marginal cost. The more traits you select for, the slower and less predictable the selection process becomes, even exponentially so. Some traits that have positive value for you may be linked with negative traits, forcing tradeoffs.

    3) The main motivation for in vitro meat is not animal welfare, it is economic sustainability. Livestock are an extremely inefficient way of converting vegetable nutrients into edible animal products. Obviously, the most energy-efficient solution would be for everyone to become vegan, but since there is a demand for meat, such products could have a market, and could possibly even become cheaper than real meat.

    • gaverick

      I agree with VV’s point 2. About 10 years ago, I had a series of interviews with two of the largest broiler breeder companies, Hubbard and Aviagen, about why they don’t emphasize welfare-related traits in their breeding programs. Their response was that the opportunity cost was too high — they can see gains in only a few traits per generation (breeding intensity), and if they select on welfare, they can’t select as strongly on growth rates and conversion efficiency. Even traits that industry acknowledges are costly, such as ascites and tibial dyschondroplasia, are accepted because the cost of breeding them out is too high.

      There are alternative breeds, such as ISA 657 used for Label Rouge, that are selected in part for welfare-related traits — but they grow more slowly than conventional breeds, are less efficient, are more expensive, and are popular only in niche markets.

      • Carl Shulman

        Thanks for this comment, Gaverick. I agree that opportunity costs explain why the welfare “low-hanging fruit” haven’t been plucked by industry.

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Which rebuts a key claim of yours: that the marginal cost is low. It only seemed low to you because you failed to consider opportunity costs; your essay didn’t even mention the problems with selecting for multiple traits, did it?

      • Carl Shulman

        Srdiamond,

        Note the discussion of altering “net merit” equations. Net merit is measured in dollars, and putting welfare weight into a net merit equation entails accepting reductions in the dollar value of the resulting offspring in exchange for welfare. The opportunity cost is reflected in the reduced weight for selection of other things. I said that diverting selective power from the economically valuable traits could boost welfare; and because there hasn’t been intensive breeding for well-being as such, you get more change for a given adjustment to the merit formula than for a more intensely selected trait, i.e. a relatively low marginal cost in weighting per unit of trait change.
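        Carl’s weighting argument can be sketched with a toy selection-index calculation. This is a minimal illustration under assumptions that are mine, not Carl’s: the traits are taken to be genetically uncorrelated, and all variance numbers are invented. Standard index theory then gives each trait’s per-generation response as intensity × weight × variance ÷ index standard deviation.

```python
import math

def index_responses(weights, genetic_vars, intensity=2.0):
    """Per-generation response of each trait under truncation selection
    on a linear index I = sum(w_j * trait_j), assuming the traits are
    genetically uncorrelated (a strong simplification):
        response_j = intensity * w_j * V_j / sigma_I,
        sigma_I    = sqrt(sum(w_j**2 * V_j)).
    """
    sigma_i = math.sqrt(sum(w * w * v for w, v in zip(weights, genetic_vars)))
    return [intensity * w * v / sigma_i for w, v in zip(weights, genetic_vars)]

# Hypothetical traits: [milk yield, welfare score], in arbitrary units.
# Welfare is given larger remaining additive variance because it has
# never been selected on -- these numbers are illustrative, not real data.
V = [1.0, 4.0]

all_milk = index_responses([1.0, 0.0], V)  # all index weight on milk
mixed = index_responses([1.0, 0.2], V)     # small welfare weight added
```

        With these made-up numbers, moving a little index weight onto welfare cuts the milk response only from about 2.00 to 1.86 units while producing a welfare response of about 1.49 units: a large welfare gain at small marginal cost, precisely because welfare starts out unselected and variance-rich.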

      • http://juridicalcoherence.blogspot.com/ srdiamond

        Carl Shulman,

        Point taken.

    • Carl Shulman

      “Selection for multiple traits doesn’t generally come at a small marginal cost. The more traits you select for, the slower and less predictable the selection process becomes, even exponentially so.”

      Right, there are opportunity costs as we approach the target of selection; I was talking about the marginal direct costs of the changes.

      “Some traits that have positive value for you may be linked with negative traits, forcing tradeoffs.”

      Yes. However, since there hasn’t been much selection for welfare, there will be relatively more “low-hanging fruit” of welfare-boosting changes without significant negative side-effects.

      “in vitro meat”

      I agree that in vitro meat is great.

    • sclamons

      I’d say the “main motivation for in vitro meat” depends a lot on whether or not you’re already vegetarian.

  • Drewfus

    Rather than breeding cows for happiness per se, a better goal might be to breed for unconsciousness.

    I recently had a brief but pleasant experience with unconsciousness in a medical surgery. Irrelevant to my point, but the minor op was to remove a small but irritating growth that had started protruding from my right hand little finger (don’t remember the scientific name). After the local anesthetic, and feeling a bit hot and sweaty, I could hear the tool cutting into this thing on my finger (from the pitch change), but not feeling anything was reassuring. That was the last thing I can remember before passing out for perhaps a minute.

    This was definitely fainting, because according to the nurse my face was exactly the same color as the pale white walls of the surgery. Unconscious, yes, but I did start having a very lucid dream, and more or less a very pleasant one. I was quite happy in my dream world, but the nurse apparently decided that being passed out was not desirable, so she shook and yelled a little to wake me. Being ‘woken’ from unconscious dreaming is not fun. All of the context that fairly rapidly returns to consciousness when awakening from sleep was missing. I had no sense of where I was, how I got to be where I was, or who the people above me were. This induced a brief but real panic – I think I was actually held down for a few seconds. There was a fleeting sense of having been abducted. Combine this with unstable visual perception (the room and its contents appeared to swirl around), and I was not grateful to be conscious again.

    Thinking about this later, it seemed to me that my brain had in some sense made an executive decision to turn off consciousness for a period. This was appropriate given the context. There was nothing to be gained by staying conscious, and nothing to be lost by disengaging it. On the contrary, the welfare of the brain was going to be much improved in an unconscious state.

    In the wild, animals need to be conscious when obtaining food and water, when dealing with territorial and natural threats, when engaging in reproductive activities, and for rearing offspring. Other than consuming what we put in front of them, none of this is relevant for livestock – then neither is consciousness. Domesticated animals that are raised for human consumption have little or no need to be conscious, but much to lose in perceiving the unpleasantness of their surroundings and situation. Permanent unconsciousness might be better for both them and us.

  • Pingback: Saturday Linkage » Duck of Minerva

  • David Pearce

    srdiamond, I certainly wouldn’t dismiss Moore’s Open Question argument. Indeed, on the face of it, the argument is decisive. Whatever apparently dreadful or wonderful phenomenon exists in the natural world, one can still ask if it’s (dis)valuable. However, what is not an open question, at least for me, is whether my unbearable distress is disvaluable for me. And I’m arguing that it’s only an epistemological limitation on my part, not some deep ontological truth about the world, that leads to any failure in my recognition that (it is objectively the case that) your unbearable distress is disvaluable too. The badness of your agony is not an open question to a mirror-touch synaesthete – or a God-like superintelligence who could apprehend all possible first-person perspectives.

    A counterargument might be that the existence of (dis)value in the world is inconsistent with the naturalistic third-person ontology of physical science. If eliminativist materialism were the case, this would be so. However, we’re not zombies: first-person facts, not least the existence of (dis)valuable experiences, don’t possess some sort of second-rate ontological status. If we assume Strawsonian physicalism, they are as much a part of the natural world as the rest mass of the electron. I can’t define the normative aspect of disvaluable experience in terms of anything more semantically primitive. But if, for example, you try and hold your hand in ice-cold water for as long as you can, the experience is not motivationally inert. What property of the experience causes you to withdraw your hand?

    Anyhow, this philosophising takes us a long way from the morally urgent question of nonhuman animal suffering. The worst source of severe and readily avoidable misery that exists in the world, today, i.e. factory-farming, is wholly manmade. Unless one is a complete moral nihilist, we have an obligation to stop it.

  • Pingback: Incentivizing happier livestock by fixing farm market failures | ForeXiv

  • Pingback: Happier livestock through genetic bundling | foreXiv