
Testing Moral Progress

Mike Huemer just published his version of the familiar argument that changing moral views is evidence for moral realism. Here is the progress datum he seeks to explain:

Mainstream illiberal views of earlier centuries are shocking and absurd to modern readers. The trend is consistent across many issues: war, murder, slavery, democracy, women’s suffrage, racial segregation, torture, execution, colonization. It is difficult to think of any issue on which attitudes have moved in the other direction. This trend has been ongoing for millennia, accelerating in the last two centuries, and even the last 50 years, and it affects virtually every country on Earth. … All the changes are consistent with a certain coherent ethical standpoint. Furthermore, the change has been proceeding in the same direction for centuries, and the changes have affected nearly all societies across the globe. This is not a random walk.

Huemer’s favored explanation:

If there are objective ethical truths to which human beings have some epistemic access, then we should expect moral beliefs across societies to converge over time, if only very slowly.

But note three other implications of this moral-learning process, at least if we assume the usual (e.g., Bayesian) rational belief framework:

  1. The rate at which moral beliefs have been changing should track the rate at which we get relevant info, such as via life experience or careful thought. If we’ve seen a lot more change recently than thousands of years ago, we need a reason to think we’ve had a lot more thinking or experience lately.
  2. If people are at least crudely aware of the moral beliefs of others in the world, then they should be learning from each other much more than from their personal thoughts and experience. Thus moral learning should be a worldwide phenomenon; it might explain average world moral beliefs, but it can’t explain much of the differences in belief across societies at any one time.
  3. Rational learning of any expected value via a stream of info should produce a random walk in those expectations, not a steady trend (see the sketch below). But as Huemer notes, what we mostly see lately are steady trends.
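Here is a minimal sketch of that last point, assuming a standard Bayesian learner updating a credence on noisy signals (all numbers invented): at every step the expected value of the next posterior equals the current posterior, so the path of rational beliefs looks like a random walk, not a steady trend.

```python
import random

random.seed(0)
truth = random.random() < 0.5    # whether the moral claim is in fact true
p = 0.5                          # prior credence that the claim is true

for step in range(10):
    # each signal points the right way with probability 0.6
    signal_says_true = truth if random.random() < 0.6 else not truth
    like_true = 0.6 if signal_says_true else 0.4
    like_false = 0.4 if signal_says_true else 0.6
    # Bayes update; the expected posterior equals the prior (a martingale)
    p = p * like_true / (p * like_true + (1 - p) * like_false)
    print(round(p, 3))           # wanders up and down; no built-in drift
```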

For Age of Em, I read a lot about cultural value variation, and the related factor analyses. One of the two main factors by which national values vary correlates strongly with average national wealth. At each point in time, richer nations have more of this factor; over time, nations get more of it as they get richer; and when a nation has an unusual jump in wealth, it gets an unusual jump in this factor. And this factor explains an awful lot of the value choices Huemer seeks to explain. All this even though people within a nation who hold these values more strongly are not richer on average.

The usual view in this field is that the direction of causation here runs mostly from wealth to this value factor. This makes sense because it is the usual situation for variables that correlate with wealth. For example, if length of roads or number of TVs correlates with wealth, that is much more because wealth causes roads and TVs than because roads and TVs cause wealth. Since wealth is the main “power” factor of a society, this main factor tends to cause other, smaller things more than they cause it.

This is as close as Huemer gets to addressing this usual view:

Perhaps there is a gene that inclines one toward illiberal beliefs if one’s society as a whole is primitive and poor, but inclines one toward liberal beliefs if one’s society is advanced and prosperous. Again, it is unclear why such a gene would be especially advantageous, as compared with a gene that causes one to be liberal in all conditions, or illiberal in all conditions. Even if such a gene would be advantageous, there has not been sufficient opportunity for it to be selected, since for almost all of the history of the species, human beings have lived in poor, primitive societies.

Well, if you insist on explaining things in terms of genes, everything is “unclear”; we just don’t have good full explanations to take us all the way from genes to how values vary with cultural context. I’ve suggested that we industry-era folks are reverting to forager values in many ways with increasing wealth, because wealth cuts the fear that made foragers into farmers. But you don’t have to buy my story to find it plausible that humans are just built so that their values vary as their society gets rich. (This change need not at all be adaptive in today’s environment.)

Note that we already see many variables that change between rich and poor societies, but which don’t change the same way between rich and poor people within a society. For example, rich people in a society save more, but rich societies don’t save more. Richer societies spend a larger fraction of income on medicine, but richer people spend a smaller fraction. And rich societies have much lower fertility, even though rich people within a society have about the same fertility as poor people.

Also note that “convergence” is about variance of opinion; it isn’t obvious to me that variance is lower now than it was thousands of years ago. What we’ve seen is change, not convergence.

Bottom line: the usual social science story, that increasing wealth causes certain predictable value changes, fits the value variation data a lot better than the theory that the world is slowly learning moral truth. Even if we accepted moral learning as explaining some of the variation, we’d still need wealth-causes-values to explain a lot of the rest of the variation. So why not let it explain all of it? Maybe someone can come up with variations on the moral learning theory that fit the data better. But at the moment, the choice isn’t even close.


Philosophy Between The Lines

Seven years ago I raved about a Journal of Politics article by Arthur Melzer that persuaded me that ancient thinkers often wrote “esoterically,” e.g., praising their local religions and rulers on the surface, while expressing their true atheism, rebellion, etc. between the lines. Melzer has just come out with a very well written and persuasive book Philosophy Between The Lines, that greatly elaborates this thesis.

Melzer’s book emphasizes the puzzle that while ancient thinkers were quite open about esotericism, modern thinkers have mostly forgotten it ever existed, and are typically indignantly dismissive when the idea is suggested. Below the fold I give an extended quote on a fascinating transition period in the late 1700s when European intellectuals openly debated how esoteric to be.

While Melzer’s last chapter is on implications of esotericism, he really only talks about how it can somewhat undercut cultural relativism, if we can see intellectuals from different times and places as actually agreeing more on God, politics, etc. Yet he doesn’t mention the most obvious implication, at least to an economist: since esotericism raises the price of reading the ancients, we will likely want to buy less of this product, and pay less attention to what the ancients said. Melzer also doesn’t mention the implications that the rise of direct speech might be an important enabler of the industrial revolution, or that seeing more past esotericism should lead us to expect to find more of it around us today, even if we now officially disapprove of it.

Melzer says that the main point of his book is just to convince us that esotericism actually happened, not that it was good or bad, nor any particular claim about what any particular ancient really meant. But this stance is undermined by the fact that the bulk of the book focuses on elaborating four good reasons why the ancients might have been esoteric. In contrast, when Melzer talks about why we moderns dislike esotericism, and why esotericism is the usual practice in non-Western cultures around the world today, he mentions many illicit reasons why writers might be esoteric. For example, Melzer quotes An Anthropology of Indirect Communication giving these reasons for such talk:

To avoid giving offence, or, on the contrary, to give offence but with relative impunity; to mitigate embarrassment and save face; to entertain through the manipulation of disguise; for aesthetic pleasure; to maintain harmonious social relations; to establish relative social status; to exclude from a discourse those not familiar with the conventions of its usage and thereby to strengthen the solidarity of those who are.

But when Melzer talks about why the famous long-revered ancient thinkers might have been esoteric, he gives only reasons that such ancients would have seen as noble: protecting thinkers from society, protecting society from thinkers, teaching students, and promoting social reform.

Now whether the ancients were esoteric for good or bad reasons isn’t very relevant to the empirical claim that they were in fact esoteric, which Melzer says is his main focus. So why does Melzer focus on whether the ancients were esoteric for good reasons? One possible answer is that Melzer actually wants us to like and respect esotericism, not just believe that it existed. Another possible answer is that Melzer sees his readers as biased to see ancient thinkers as good people. If many folks have invested so much in identifying with famous ancient thinkers that they will not accept a claim that suggests those ancients were bad people, then to convince such folks of his claim Melzer needs to show that it is quite compatible with those ancients being good people.

Either way, however, Melzer does quite successfully show that the ancients were often and openly esoteric. That promised quote on late 1700s European intellectuals:

Continue reading "Philosophy Between The Lines" »


Morality as though it really mattered

A large share of the public, and even an outright majority of professional philosophers, claim to be ‘moral realists’. Presumably, if this means anything, it means that there are objective rules out there that any being ought to follow, and that doing the ‘right thing’ is about more than just doing what you want.

Whatever surveys say, my impression is that almost nobody acts as though they were actually a realist. If you really believed that there were objective rules we should all follow, it would be crucial to work out what those rules actually were. If you failed to pick the right rules, you could spend your life doing things that were worthless, or maybe even evil. And if those are the rules that everyone necessarily ought to be following, nothing could be worse than failing to follow them. If most acts or consequences are not the best, as seems likely, then the chances of stumbling on the right ones by luck are very low.

Does this imply that you should spend your entire life studying morality? Not exactly. If you became sufficiently confident about what was good, it would then be more valuable to go out and do that thing, rather than continue studying. On the other hand, it does imply a lot more effort than most people put into this question today. The number of ethicists with a public profile could be counted on one hand. Research on ethics, let alone meta-ethics, is largely ignored by the public and considered of ‘academic interest’, if that. To a realist, nothing could be further from the truth. It is impossible to go about forming other life plans confidently until you have worked out what is morally right!

Simple probing, using questions well known to philosophers, usually reveals a great deal of apparent inconsistency in people’s positions on moral issues. This has been known for thousands of years, but we are scarcely more consistent now than in the past. If we assume that the rules we ought to follow will be consistent with one another, this is a disaster, and calls for us to down tools until right and wrong can be clarified. In other cases, popular intuitive positions simply do not make sense.

A moral realist should also be trying to spread their bets to account for ‘moral uncertainty’. Even if you think you have the right moral code, there is always the possibility that you are mistaken and a different set of rules is in fact correct. Unless you are extremely confident in the rules you consider most likely, this ought to affect your behaviour. This is easily explained through an example which occurred to me recently, concerning the debate over the ‘person-affecting view’ of morality. According to this view, it would only be good to prevent a catastrophe that caused the extinction of humanity because such a catastrophe would affect people alive now, not because it ensures countless future generations never get to live. People who could exist in the future but don’t are not well-defined, and so do not qualify for moral consideration.

The case for putting enormous resources into ensuring humanity does not collapse is weaker if future people do not count. But how much weaker? Let’s say the number of (post-)humans we expect to live in the future, in the absence of any collapse, is a modest 1 trillion. The real number is probably much larger. If you thought there were just a 10% chance that people who aren’t alive now do in fact deserve moral consideration, a collapse would still prevent the existence of 100 billion future people in ‘expected value’ terms. This still dwarfs the importance of the 7 billion people alive today, and makes the case for focussing on such threats many times more compelling than otherwise. Note that incorporating moral uncertainty is unlikely to make someone stop focussing on collapse risk, because the consequences of being wrong in the other direction aren’t so bad.
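For concreteness, here is the same calculation as a minimal Python sketch; the population figures and the 10% credence are the illustrative assumptions above, not real estimates:

```python
credence_future_people_count = 0.10    # doubt about the person-affecting view
future_people = 1_000_000_000_000      # a "modest" 1 trillion future people
present_people = 7_000_000_000

# expected number of future existences a collapse would prevent
expected_future_loss = credence_future_people_count * future_people

print(f"{expected_future_loss:.0f}")          # 100 billion
print(expected_future_loss / present_people)  # ~14x today's population
```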

This demonstrates that a moral realist with some doubt that they have picked the right rules will want to (a) hedge their bets, and (b) focus disproportionate attention on plausible rules under which their choices have a bigger potential impact on the desirability of outcomes. This is just the same as uncertainty about matters of fact: we take precautions in case our model of how the world works is wrong, especially against those errors under which our preferred choice could lead to a relative disaster. Despite this being a natural and important consideration for all moral realists, moral uncertainty is only talked about by a handful of moral philosophers.

Uncertainty about moral issues is scarcely a fringe concern, because the quality of available evidence is so poor. Most moral reasoning, when we dig down, relies on nothing more than the competing intuitions of different people. The vast majority of people I know think the moral intuitions of the billions of people who lived in the past, on matters such as racism, gender, sex, torture, slavery, the divine right of monarchs, animal cruelty and so on, were totally wrong. Furthermore, intuitive disagreement on moral questions remains vast today. Without a compelling reason to think our intuitions are better than those of others – and I don’t see one – the chances that we have all the right intuitions are frighteningly low.

I would go further and say there is no obvious reason for our moral intuitions to be tethered to what is really right and wrong full stop. It is almost certain that humans came about through the process of evolution. Evolution will give us the ability to sense the physical world in order to be able to respond to it, survive and reproduce. It will also give us good intuitions about mathematics, insofar as that helps us make predictions about the world around us, survive and reproduce. But why should natural selection provide us with instinctive knowledge of objective moral rules? There is no necessary reason for such knowledge to help a creature survive – indeed, most popular moral theories are likely to do the opposite. For this reason our intuitions, even where they agree, are probably uninformative.

I think this shows that most people who profess moral realism are in fact not realists. This is yet another obvious example of human hypocrisy. Professing objective morality is instrumentally useful for individuals and societies, and our minds can be easily shielded from what this implies. For anyone who actually does want to follow through on a realist position, I can see two options,

  • Hit the books and put more work into doing the right thing.
  • Concede that you have almost no chance of working out what is right and wrong, and could not gain much by trying. Moral skepticism would get you off the hook.

Personally, I would like to think I take doing the right thing seriously, so I am willing to offer a monetary prize of £300 for anyone who can change my mind on (a) whether I ought to place a significant probability on moral realism being correct, or (b) whether I seriously misunderstand what I subjectively value. Such insights would be a bargain!


Future Filter Fatalism

One of the more colorful vignettes in philosophy is Gibbard and Harper’s “Death in Damascus” case:

Consider the story of the man who met Death in Damascus. Death looked surprised, but then recovered his ghastly composure and said, ‘I am coming for you tomorrow’. The terrified man that night bought a camel and rode to Aleppo. The next day, Death knocked on the door of the room where he was hiding, and said, ‘I have come for you’.

‘But I thought you would be looking for me in Damascus’, said the man.

‘Not at all’, said Death, ‘that is why I was surprised to see you yesterday. I knew that today I was to find you in Aleppo’.

That is, Death’s foresight takes into account any reactions to Death’s activities.

Now suppose you think that a large portion of the Great Filter lies ahead, so that almost all civilizations like ours fail to colonize the stars. This implies that civilizations almost never adopt strategies that effectively avert doom and allow colonization. Thus the mere fact that we adopt any purported Filter-avoiding strategy S is strong evidence that S won’t work, just as the fact that you adopt any particular plan to escape Death indicates that it will fail.

To expect S to work, we would have to be very confident that we were highly unusual in adopting S (or any strategy as good as S), in addition to thinking S very good on the merits. This burden might be met if it was only through some bizarre fluke that S became possible, and a strategy might improve our chances even while leaving us almost certain to fail. But common features, such as awareness of the Great Filter, would not suffice to avoid future filters.
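Here is a minimal Bayesian sketch of that point, with every number invented for illustration; the moral is only that adopting a widely adopted strategy cannot move us far from the grim base rate:

```python
p_colonize = 1e-6               # prior: almost all civilizations fail
p_adopt_given_colonize = 1.0    # suppose every successful civilization used S

# If S is a common strategy (e.g. it follows from noticing the Filter)...
p_adopt_common = 0.5
print(p_adopt_given_colonize * p_colonize / p_adopt_common)  # 2e-06: barely helps

# ...but if S only became possible through a bizarre fluke, it can matter.
p_adopt_fluke = 1e-4
print(p_adopt_given_colonize * p_colonize / p_adopt_fluke)   # 0.01
```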


Paternalism can be kind, just not to present-you

You may want to file this under ‘incredibly obvious’, but I haven’t seen it noted elsewhere.

Liberals and libertarians have an instinctive aversion to paternalism. Their key objection is: how can anyone else be expected to know what is good for you, better than you do?

This is usually true, but it neglects a coherent justification for many paternalistic policies that doesn’t require that anyone know more than you. The paternalist could be fine with their policy being bad for ‘present-you’ if it benefits ‘future-you’ even more. But don’t you care about your future self’s welfare too? Sure, but maybe not as much as they do, relative to your current welfare!

Confusion about the intent of the paternalistic policy is generated by the fact that it is natural to say “this policy exists to help you”, without noting which instance of ‘you’ it is meant to help – you now, you tomorrow, you in ten years’ time, and so on.

While this justification would apply especially often where people engage in ‘hyperbolic discounting’ and as a result are ‘time inconsistent’, it does not rely on that. All it requires is that,

  • there are things you could do now that would benefit your future self at the expense of your present self; and
  • the paternalist’s ‘altruistic’ discount rate for the target’s welfare is lower than the discount rate the target has for their own welfare.

The first is certainly true, while the latter is often true in my experience.
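A minimal numerical sketch, with invented figures: a policy that costs the target one unit of welfare now but returns 1.5 units in ten years looks bad at the target’s own discount rate, yet good at the paternalist’s lower rate.

```python
cost_now, benefit_later, years = 1.0, 1.5, 10

def net_present_value(rate):
    # value of the policy, discounted back to the present at the given rate
    return -cost_now + benefit_later / (1 + rate) ** years

print(net_present_value(0.08))  # ~ -0.31: the impatient target rejects it
print(net_present_value(0.00))  # +0.50: the zero-discount paternalist approves
```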

In the near-far construal theory often used on this blog, us-now and immediate gratification are both ‘near’, while ourselves in the future, other people, and other people in the future are all ‘far’. In far mode we will want to encourage other folks to act toward their future selves in ways our far view thinks they ought to – usually patiently.

More intuitively: it’s easier to stick to a commitment to help a friend stay on their diet than it is to stick to our own diet. We don’t enjoy seeing our friends go without ice cream, but we like to see them reach their (and our) idealised goals even more. As La Rochefoucauld observed, “We all have strength enough to bear the misfortunes of others.” You could add that we all have strength enough to bear the delayed gratification of others.

If a paternalist really does have a lower discount rate in this way, they could justify all kinds of interventions that benefit someone’s future self: preventing suicide, reducing smoking, encouraging exercise, requiring people to save for emergencies and retirement, and so on. I often find these policies distasteful, but as I support a moral discount rate of zero (on valuable experiences), and almost all people are impatient in their own lives, I can’t justify a blanket opposition. We don’t give people an unrestricted freedom to harm their children, or strangers, just because they don’t care much about them. Why then should we give a young woman unrestricted freedom to hurt her far-off 60-year-old self, just because they happen to pass through the same body at different points in time? I care about the 60-year-old too, perhaps even more than that young woman does, relative to herself.


If elections aren’t a Pascal’s mugging, existential risk shouldn’t be either

A response I often hear to the idea of dedicating one’s life to reducing existential risk, or increasing the likelihood of a friendly artificial general intelligence, is that it represents a form of ‘Pascal’s mugging’, a problem memorably described in a dialogue by Nick Bostrom. Because of the absurd conclusion of the Pascal’s mugging case, some people have decided not to trust expected value calculations when thinking about extremely small likelihoods of enormous payoffs.

While there are legitimate question marks over whether existential risk reduction really does offer a very high expected value, and we should correct for ‘regression to the mean’, cognitive biases and so on, I don’t think we have any reason to discard these calculations altogether. The impulse to do so seems mostly driven by a desire to avoid the weirdness of the conclusion, rather than by a sound reason to doubt it.

A similar activity which nobody objects to on such theoretical grounds is voting, or political campaigning. Considering the difference in vote totals and the number of active campaigners, the probability that someone volunteering for a US presidential campaign will swing the outcome seems somewhere between 1 in 100,000 and 1 in 10,000,000. The US political system throws up significantly different candidates for a position with a great deal of power over global problems. If a campaigner does swing the outcome, they can therefore have a very large and positive impact on the world, at least in subjective expected value terms.

While people may doubt the expected value of joining such a campaign on the grounds that the difference between the candidates isn’t big enough, or the probability of changing the outcome too small, I have never heard anyone say that the ‘low probability, high payoff’ combination means that we must dismiss it out of hand.

What is the probability that a talented individual could avert a major global catastrophic risk if they dedicated their life to it? My guess is that it’s only an order of magnitude or two lower than the probability of a campaigner swinging an election outcome. You may think this is wrong, but if so, imagine that it’s reasonable for the sake of keeping this blog post short. How large is the payoff? I would guess many, many orders of magnitude larger than swinging any election. For that reason it’s a more valuable project in total expected benefit, though also one with a higher variance.
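To see how this plays out numerically, here is a back-of-envelope sketch using the illustrative figures above; every number is an assumption, not an estimate:

```python
p_swing_election = 1e-6       # between 1 in 100,000 and 1 in 10,000,000
value_election = 1.0          # normalize the value of swinging an election

p_avert_risk = p_swing_election / 100   # "an order of magnitude or two lower"
value_avert = 1e6 * value_election      # "many orders of magnitude larger"

print(p_swing_election * value_election)  # 1e-06: campaigner's expected value
print(p_avert_risk * value_avert)         # 0.01: ~10,000x larger, higher variance
```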

To be sure, the probability and payoff are now very small and very large numbers respectively, as far as ordinary human experience goes, but they remain far away from the limits of zero and infinity. At what point between the voting example, and the existential risk reduction example, should we stop trusting expected value? I don’t see one.

Building in some arbitrary ‘low probability, high payoff’ mugging-prevention threshold would lead to the peculiar possibility that, for any given project, an individual with probability x of a giant payout could be advised to avoid it, while a group of 100 people contemplating the same project, facing a probability ~100*x of achieving the same payoff, could be advised to go for it. Now that seems weird to me. We need a better solution to Pascal’s mugging than that.


Significance and motivation

Over at philosophical disquisitions, John Danaher is discussing Aaron Smuts’ response to Bernard Williams’ argument that immortality would be tedious. Smuts’ thesis, in Danaher’s words, is a familiar one:

Immortality would lead to a general motivational collapse because it would sap all our decisions of significance.

This is interestingly at odds with my observations, which suggest that people are much more motivated to do things that seem unimportant, and have to constantly press themselves to do important things once in a while. Most people have arbitrary energy for reading unimportant online articles, playing computer games, and talking aimlessly. Important articles, serious decisions, and momentous conversations get put off.

Unsurprisingly then, people also seem to take more joy from events that appear insignificant in the long run. Actually, I thought this was the whole point of such events. For instance, people seem to quite like cuddling and lazing in the sun and eating and bathing and watching movies. If one had any capacity to get bored of these things, I predict it would happen within the first century. While significant events also bring joy, they seem to involve a lot more drudgery in the preceding build-up.

So it seems to me that living forever could only take the pressure off and make people more motivated and happy – except insofar as the argument is faulty in other ways, e.g. impending death is not the only time constraint on activities.

Have I missed something?


No theory X in shining armour

A frequent topic on this blog is the likely trade-off between a higher population and a higher quality of life at some point in the future. Some people – often total utilitarians – are willing to accept a lower quality of life for our descendants if that means there can be more of them. Others – often average utilitarians – will accept a smaller population if it is required to improve quality of life for those who are left.

Both of these positions lead to unintuitive conclusions if taken to the extreme. On the one hand, total utilitarians have to accept the ‘repugnant conclusion’: that a very large number of individuals experiencing lives barely worth living could be much better than a small number of people experiencing joyous lives. On the other hand, average utilitarians confront the ‘mere addition paradox’: adding another joyous person to the world would be undesirable so long as their life was a little less joyous than the average of those who already existed.
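A toy calculation makes the divergence vivid; the populations and welfare levels below are invented purely for illustration:

```python
small_joyous = (1_000, 100.0)        # (population, welfare per person)
huge_meagre = (10_000_000, 0.011)    # lives barely worth living

def total_utility(pop, welfare):
    return pop * welfare

def average_utility(pop, welfare):
    return welfare

# Total utilitarianism prefers the huge, barely-happy world...
print(total_utility(*small_joyous), total_utility(*huge_meagre))      # 100000.0 110000.0
# ...while average utilitarianism prefers the small, joyous one.
print(average_utility(*small_joyous), average_utility(*huge_meagre))  # 100.0 0.011
```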

Derek Parfit, pioneer of these ethical dilemmas and author of the classic Reasons and Persons, strove to,

“develop a theory of beneficence – theory X he calls it – which is able to solve the Non-identity problem [1], which does not lead to the Repugnant Conclusion and which thus manages to block the Mere Addition Paradox, without facing other morally unacceptable conclusions. However, Parfit’s own conclusion was that he had not succeeded in developing such a theory.”

Such a ‘theory X’ would certainly be desirable. I am not keen to bite the bullet of either the ‘repugnant conclusion’ or the ‘mere addition paradox’ if neither is required. Unfortunately, if, like me, you were hoping that such a theory might be forthcoming, you can now give up waiting. I was recently surprised to learn that ‘What should we do about future generations? Impossibility of Parfit’s Theory X’ by Yew-Kwang Ng (1989) demonstrated many years ago that theory X cannot exist. Continue reading "No theory X in shining armour" »


Life after death for Pascal’s Wager?

You are probably familiar with Pascal’s Wager – the idea that it is worth believing in God in order to increase your probability of going to heaven and lower your probability of going to hell. More generally, for an expected utility maximiser it will always be worth doing something that offers any probability of an infinite utility, no matter how low that probability.

My impression is that most folks think this argument is nonsense. I am not so sure. I recently met Amanda Montgomery, who is at NYU studying the challenges that infinite values present for decision theory. In her view, nobody has produced a sound solution to Pascal’s Wager and other infinite ethics problems.

A common response, and one I had previously accepted, is that we also need to consider the possibility of a ‘professor God’ who rewards atheists and punishes believers. As long as you place some probability on this being the case, then being an atheist, as well as being a believer, appears to offer an infinite payoff. Therefore it doesn’t matter what you believe.

This logic relies on two premises. Firstly, that a*∞ = b*∞ = ∞ for any a > 0 and b > 0. Secondly, that in ranking expected utility outcomes, we should be indifferent between any two positive probabilities of an infinite utility, even if they are different. That would imply that a certainty of going to ‘Heaven’ was no more desirable than a one-in-a-billion chance. Amanda points out that while these statements may both be true, if you have any doubt that either is true (p < 1), then Pascal’s Wager appears to survive. The part of your ‘credence’ in which a higher probability of infinite utility should be preferred to a lower one will determine your decision and allow the tie to be broken. Anything that made you believe that some kinds of Gods were more likely or easier to appease than others, such as internal consistency or historical evidence, would ensure you were no longer indifferent between them.
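One way to picture the tie-breaking is to score each option lexicographically, first by its probability of an infinite payoff and then by its finite expected utility. This representation is my own illustration of the idea, not Amanda’s formalism, and all the numbers are invented:

```python
# ordinary arithmetic cannot separate the options:
print(0.010 * float("inf") == 0.009 * float("inf"))   # True: both are inf

# but comparing (P(infinite payoff), finite expected utility) tuples
# lexicographically breaks the tie, as any credence that higher chances
# of infinity matter would recommend
believer = (0.010, -1.0)   # slightly likelier God, minus costs of observance
atheist = (0.009, +1.0)    # 'professor God' deemed slightly less likely

print(max(believer, atheist))   # (0.01, -1.0): the wager survives
```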

Some might respond that it would not be possible to convert sincerely with a ‘Pascalian’ motivation. This might be true in the immediate term, but presumably given time you could put yourself in situations where you would be likely to develop a more religious disposition. Certainly, it would be worth investigating your capacity to change with an infinite utility on the line! And even if you could not sincerely convert, if you believed it was the right choice and had any compassion for others, it would presumably be your duty to set about converting others who could.

On top of the possibility that there is a God, it also seems quite imaginable to me that we are living in a simulation of some kind, perhaps as a research project of a singularity that occurred in a parent universe. There is another possible motivation for running such simulations. I am told that if you accept certain decision theories, it would appear worthwhile for future creatures to run simulations of the past, and reward or punish the participants based on whether they acted in ways that were beneficial or harmful to beings expected to live in the future. On realising this, we would be uncertain whether we were in such a simulation or not, and so would have an extra motivation to work to improve the future. However, given finite resources in their universe, these simulators would presumably not be able to dole out infinite utilities, and so would be dominated, in terms of expected utility, by any ‘supernatural’ creator that could.

Extending this point, Amanda notes the domination of ‘higher cardinality’ infinities over lower cardinalities. The slightest probability of an aleph-two infinity of utility would always trump a certain aleph-one infinity. I am not sure what to do about that. The issue has hardly been researched by philosophers, and seems like a promising area for high-impact philosophy. I would appreciate anyone who can resolve these weird results so I can return to worrying about ordinary things!


Does life flow towards flow?

Robin recently described how human brain ‘uploads’, even if forced to work hard to make ends meet, might nonetheless be happy and satisfied with their lives. Some humans naturally love their work, and if they are the ones who get copied, the happiness of emulations could be very high. Of course in Robin’s Malthusian upload scenario, evolutionary pressures towards high productivity are very strong, and so the mere fact that some people really enjoy work doesn’t mean that they will be the ones who get copied billions of times. The workaholics will only inherit the Earth if they are the best employees money can buy.

The broader question of whether creatures that are good at surviving, producing and then reproducing tend towards joy or misery is a crucial one. It helps answer whether it is altruistic to maintain populations of wild animals into the future, or an act of mercy to shrink their habitats. Even more importantly, it is the key to whether it is extremely kind or extremely cruel for humans to engage in panspermia and spread Malthusian life across the universe as soon as possible.

There is an abundance of evidence all around us in the welfare of humans and other animals that have to strive to survive in the environments they are adapted to, but no consensus on what that evidence shows. It is hard enough to tell whether another human has a quality of life better than no life at all, let alone determine the same for say, an octopus.

One of the few pieces of evidence I find compelling comes from Mihály Csíkszentmihályi’s research into the experience he calls ‘flow’. His work suggests that humans are most productive, and also most satisfied, when they are totally absorbed in a clear but challenging task which they are capable of completing. The conditions suggested as being necessary to achieve ‘flow’ are

  1. “One must be involved in an activity with a clear set of goals. This adds direction and structure to the task.
  2. One must have a good balance between the perceived challenges of the task at hand and his or her own perceived skills. One must have confidence that he or she is capable of doing the task at hand.
  3. The task at hand must have clear and immediate feedback. This helps the person negotiate any changing demands and allows him or her to adjust his or her performance to maintain the flow state.”

Most work doesn’t meet these criteria and so ‘flow’ is not all that common, but it is amongst the best states of mind a human can hope for.

Some people are much more inclined to enter flow than others and if Csíkszentmihályi’s book is to be believed, they are ideal employees – highly talented, motivated and suited to their tasks. If this is the case, people predisposed to experience flow would be the most popular minds to copy as emulations and in the immediate term the flow-inspired workaholics would indeed come to dominate the Earth.

Of course, it could turn out that in the long run, once enough time has passed for evolution to shed humanity’s baggage, the creatures that most effectively do the forms of work that exist in the future will find life unpleasant. But our evolved capacity for flow in tasks that we are well suited for gives us a reason to hope that will not be the case. If it turns out that flow is a common experience for traditional hunter-gatherers then that would make me even more optimistic. And more optimistic again if we can find evidence for a similar experience in other species.
