Monthly Archives: December 2021

Innovation Liability Nightmare

When I try to imagine how our civilization might rot and decline over the coming millennia, my thoughts first go to innovation, as that has long been our main engine of growth. And while over the years I’ve often struggled to think of ways to raise the rate of innovation, it seems much easier to find ways to cut it; in general, it is easier to break things than improve them.

For example, we might press on one of our legal system’s key flaws. Today, law does far more to discourage A from harming B than to encourage A to help B. B can often sue A for compensation when A harms B, but A can rarely sue B for compensation when A has helped B. Law today is mostly a system of brakes, not of engines or accelerators.

This is less of a problem for auto accidents or pandemics, where the most important effects of the most important actions are indeed harms. But it is a much bigger problem in innovation, where the main problem is too little incentive to help. In general, society gains far more from innovations than do the people who push for them. So innovation needs engines, not brakes.

The problem is that even events whose effects are overall beneficial will still have some harmful effects. For example, if you invent a new better mousetrap, you may displace previous mousetrap makers. Or by introducing cars, you may hurt people who supplied or managed horses. So what if our legal system makes it easier to sue people for the harms caused by their innovations?

For example, many have complained lately of negative effects of social media, such as increasing anxiety, decreasing privacy, and passing on “fake” news. And just as legal liability has been a big weapon in recent campaigns against harms from tobacco and pain-killers, liability may well also become a big weapon against social media, wielded especially strongly against those who have most innovated and developed it.

Imagine that holding innovators liable for the negative effects of their innovations became more widespread, but without any increase in the rewards we allow innovators for the benefits that they bestow. Together with the trend toward increased regulation, this might just be enough to kill the innovation goose that lays our golden egg of growth.


Karnataka Hospital Insurance Experiment

In 2008 I posted on the famous RAND Health Insurance Experiment:

From 1974 to 1982 the US government spent $50 million to randomly assign 7700 people in six US cities to three to five years each of either free or not free medicine, provided by the same set of doctors. … people randomly given free medicine in the late 1970s consumed 30-40% more medical services, paid one more “restricted activity day” per year to deal with the medical system, but were not noticeably healthier! (More, see also)

I got 60 signatures on a petition then for the “US to publicly conduct a similar experiment again soon, this time with at least ten thousand subjects treated for at least ten years”.

In 2011 I posted on the Oregon Health Insurance Experiment:

Oregon assigned a limited number of available Medicaid slots by lottery. … 8,704 (~30%) [very sick and poor US adults] were enrolled in Medicaid medical insurance. … at most see two years worth of data. … had substantially and significantly better self-reported health. … over two thirds of the health gains … appeared on the very first survey, done before lottery winners got additional medical treatment. (More)

No statistically significant effect on measures of blood pressure, cholesterol, or blood sugar. … did not reduce the predicted risk of a cardiovascular event within ten years and did not significantly change the probability that a person was a smoker or obese. … it reduced observed rates of depression by 30 percent. (More)

Today I report on the new Karnataka Hospital Insurance Experiment:

This study … is amongst the largest health insurance experiments ever conducted … in Karnataka, which spans south to central India. The sample included 10,879 households (comprising 52,292 members) in 435 villages. Sample households were above the poverty line … and lacked other [hospital] insurance. … randomized to one of 4 treatments: free RSBY [= govt hospital] insurance, the opportunity to buy RSBY insurance, the opportunity to buy plus an unconditional cash transfer equal to the RSBY premium, and no intervention. … intervention lasted from May 2015 to August 2018. …

Opportunity to purchase insurance led to 59.91% uptake and access to free insurance to 78.71% uptake. … Across a range of health measures, we estimate no significant impacts on health. … We conducted a baseline survey involving multiple members of each household 18 months before the intervention. We measured outcomes two times, at 18 months and at 3.5 years post intervention. … only 3 (0.46% of all estimated coefficients concerning health outcomes) were significant after multiple-testing adjustments. We cannot reject the hypothesis that the distribution of p-values from these estimates is consistent with no differences (P=0.31). (more)
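
To unpack that last statistic: under the hypothesis of no true effects, p-values should be distributed uniformly between 0 and 1, and one can test a whole batch of estimates against that. Here is a minimal sketch of such a uniformity test; the study’s exact procedure may differ, and these p-values are simulated stand-ins:

    import numpy as np
    from scipy import stats

    # Under the null of no treatment effects, p-values are uniform on [0, 1].
    # Simulate a batch of null p-values standing in for the study's roughly
    # 650 estimated health coefficients (3 significant = 0.46% of the total).
    rng = np.random.default_rng(0)
    p_values = rng.uniform(0.0, 1.0, size=650)

    # Kolmogorov-Smirnov test of the observed p-values against uniform.
    ks_stat, p = stats.kstest(p_values, "uniform")
    print(f"KS statistic = {ks_stat:.3f}, P = {p:.2f}")
    # A large P, like the study's P = 0.31, means we cannot reject that
    # the whole batch of estimates looks like "no effect" noise.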

So a new randomized experiment on ordinary residents of India had 6.8x as many subjects as the RAND experiment, and it also found no net effect on health. It only looked at the effects of hospital treatment, but to many that is the crown jewel of medicine.

Bottom line: we now have more, and stronger, data showing that on average, more medicine doesn’t improve health. Though of course, for people committed to buying useless medicine, insurance can cut financial stress. Update your beliefs accordingly.


Three Types of General Thinkers

Ours is an era of rising ideological fervor, moving toward something like the Chinese Cultural Revolution, with elements of both religious revival and witch-hunt repression. While good things may come of this, we risk exaggeration races, wherein people try to outdo each other in showing loyalty via ever more extreme and implausible claims, policies, and witch indicators.

One robust check on such exaggeration races could be a healthy community of intellectual generalists. Smart thoughtful people who are widely respected on many topics, who can clearly see the exaggerations, see that others of their calibre also see them, and who crave such associates’ respect enough to then call out those exaggerations. Like the child who said the emperor wore no clothes.

So are our generalists up to this challenge? As such communities matter to us for this and many other reasons, let us consider more who they are and how they are organized. I see three kinds of intellectual generalists: philosophers, polymaths, and public intellectuals.

Public intellectuals seem easiest to analyze. Compared to other intellectuals, these mix with and are selected more by a wider public and a wider world of elites, and thus pander more to such groups. They less use specialized intellectual tools or language, their arguments are shorter and simpler, they impress more via status, eloquent language, and cultural references, and they must speak primarily to the topics currently in public talk fashion.

Professional philosophers, in contrast, focus more on pleasing each other than a wider world. Compared to public intellectuals, they are more willing to use specialized language for particular topics, to develop intricate arguments, and to participate in back and forth debates. As the habits and tools that they learn can be applied to a pretty wide range of topics, philosophers are in that sense generalists.

But philosophers are also very tied to their particular history. More so than in other disciplines, particular historical philosophers are revered as heroes and models. Frequent readings and discussions of their classic texts push philosophers to try to retain their words, concepts, positions, arguments, and analysis styles.

As I use the term, polymaths are intellectuals who meet the usual qualifications to be seen as expert in many different intellectual disciplines. For example, they may publish in discipline-specific venues for many disciplines. More points for a wider range of disciplines, and for intellectual projects that combine expertise from multiple disciplines. Learning and integrating many diverse disciplines can force them to generalize from discipline specific insights.

Such polymaths tend less to write off topics as beyond the scope of their expertise. But they also just write less about everything, as our society offers far fewer homes to polymaths than to philosophers or public intellectuals. They must mostly survive on the edge of particular disciplines, or as unusually-expert public intellectuals.

If the disciplines that specialize in thinking about X tend to have the best tools and analysis styles for thinking about X, then we should prefer to support and listen to polymaths, compared to other types of generalist intellectuals. But until we manage to fund them better, they are rarely available to hear from.

Public intellectuals have the big advantage that they can better get the larger world to listen to their advice. And while philosophers suffer their historical baggage, they have the big advantage of stable funding and freedoms to think about non-fashionable topics, to consider complex arguments, and to pander less to the public or elites.

Aside from more support for polymaths, I’d prefer public intellectuals to focus more on impressing each other, instead of wider publics or elites. And I’d rather they tried to impress each other more with arguments than with their eliteness and cultural references. As for philosophers, I’d rather that they paid less homage to their heritage, and instead more adopted the intellectual styles and habits that are now common across most other disciplines. The way polymaths do. I don’t want to cut all differences, but some cuts seem wise.

As to whether any of these groups will effectively call out the exaggerations of the coming era of ideological fervor, I alas have grave doubts.

I wrote this post as my Christmas present to Tyler Cowen; this topic was the closest I could manage to the topic he requested.


We Don’t Have To Die

You are mostly the mind (software) that runs on the brain (hardware) in your head; your brain and body are tools supporting your mind. If our civilization doesn’t collapse but instead advances, we will eventually be able to move your mind into artificial hardware, making a “brain emulation”. With an artificial brain and body, you could live an immortal life, a life as vivid and meaningful as your life today, where you never need feel pain, disease, or grime, and your body always looks and feels young and beautiful. That person might not be exactly you, but they could (at first) be as similar to you as the 2001 version of you is to you today. I describe this future world of brain emulations in great detail in my book The Age of Em.

Alas, this scenario can’t work if your brain is burned or eaten by worms soon. But the info that specifies you is now only a tiny fraction of all the info in your brain and is redundantly encoded. So if we freeze all the chemical processes in your brain, either via plastination or liquid nitrogen, quite likely enough info can be found there to make a brain emulation of you. So “all” that stands between you and this future immortality is freezing your brain and then storing it until future tech improves.

If you are with me so far, you now get the appeal of “cryonics”, which over the last 54 years has frozen ~500 people when the usual medical tech gave up on them. ~3000 are now signed up for this service, and the [2nd] most popular provider charges $28K, though you should budget twice that for total expenses. (The 1st most popular charges $80K.) If you value such a life at a standard $7M, this price is worth it even if this process has only a 0.8% chance of working. It’s worth more if an immortal life is worth more, and more if your loved ones come along with you.
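
To see where that 0.8% figure comes from, here is the break-even arithmetic in a few lines of Python, using the cheaper provider’s doubled budget from above:

    # Break-even chance for cryonics to be worth buying:
    # (chance of working) * (value of life gained) >= total price.
    life_value = 7_000_000    # standard value of a life, in dollars
    total_cost = 2 * 28_000   # cheaper provider's $28K fee, doubled for total expenses

    break_even = total_cost / life_value
    print(f"Break-even chance of working: {break_even:.2%}")  # prints 0.80%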

So is this chance of working over 0.8%? Some failure modes seem to me unlikely: civilization collapses, frozen brains don’t save enough info, or you die in a way that prevents freezing. And if billions of people used this service, there’d be a question of whether the future is willing, able, and allowed to revive you. But with only a few thousand others frozen, that’s just not a big issue. All these risks together have well below a 50% chance, in my opinion.

The biggest risk you face then is organizational failure. And since you don’t have to pay them if they aren’t actually able to freeze you at the right time, your main risk re your payment is re storage. Instead of storing you until future tech can revive you, they might instead mismanage you, or go bankrupt, allowing you to thaw. This already happened at one cryonics org.

If frozen today, I judge your chance of successful revival to be at least 5%, making this service worth the cost even if you value such an immortal future life at only 1/6 of a standard life. And life insurance makes it easier to arrange the payment. But more important, this is a service where the reliability and costs greatly improve with more customers. With a million customers, instead of a thousand, I estimate cost would fall, and reliability would increase, each by a factor of ten.

Also, with more customers cryonics providers could afford to develop plastination, already demonstrated in research, into a practical service. This lets people be stored at room temperature, and thus ends most storage risk. Yes, with more customers, each might need to also pay to have future folks revive them, and to have something to live on once revived. But long time delays make that cheap, and so with enough customers total costs could fall to less than that of a typical funeral today. Making this a good bet for most everyone.

When the choice is between a nice funeral for Aunt Sally or having Aunt Sally not actually die, who will choose the funeral? And by buying cryonics for yourself, you also help move us toward the low cost cryonics world that would be much better for everyone. Most people prefer to extend existing lives over creating new ones.

Thus we reach the title claim of this post: if we coordinated to have many customers, it would be cheap for most everyone to not die. That is: most everyone who dies today doesn’t actually need to die! This is possible now. Ancient Egyptians, relative rationalists among the ancients, paid to mummify millions, a substantial fraction of their population, and also a similar number of animals, in hope of later revival. But we now actually can mummify to allow revival, yet we have done that for only ~500 people, over a period when over 4 billion people have died.

Why so few cryonics customers? When I’ve taught health economics, over 10% of students judge the chances of cryonics working to be high enough to justify a purchase. Yet none ever buy. In a recent poll, 31.5% of my followers said they planned to sign up, but few have. So the obstacle isn’t supporting beliefs; it is the courage to act on such beliefs. It looks quite weird to act on a belief in cryonics. So weird that spouses often divorce those who do. (But not spouses who spend similar amounts to send their ashes into space, which looks much less weird.) We like to think we tolerate diversity, and we do for unimportant stuff, but for important stuff we in fact strongly penalize diversity.

Sure, it would help if our official medical experts endorsed the idea, but they are just as scared of non-conformity, and also stuck on a broken concept of “science”, one which demands that someone actually be revived before they can declare cryonics feasible. Scientists who applied that standard elsewhere would insist that we can’t say our sun will burn out until it actually does, or that rockets could take humans to Mars until a human actually stands on Mars. The fact that their main job is to prevent death, and that they could in fact prevent most death, doesn’t weigh much on them relative to showing allegiance to a broken science concept.

Severe conformity pressures also seem the best explanation for the bizarre range of objections offered to cryonics, objections that are not offered re other ways to cut death rates. The most common objection offered is just that it seems “unnatural”. My beloved colleague Tyler said reducing your death rate this way is selfish, you might be tortured if you stay alive, and in an infinite multiverse you can never die. Others suggest that freezing destroys your soul, that it would hurt the environment, that living longer would slow innovation, that you might be sad to live in a world different from that of your childhood, or that it is immoral to buy products that not absolutely everyone can afford.

While I wrote a pretty similar post a year ago, I wrote this as my Christmas present to Alex Tabarrok, who requested this topic.

Added 17Dec: The chance the future would torture a revived you is related to the chance we would torture an ancient revived today:

Answers were similar re a random older person alive today. And people today are actually tortured far less often than this suggests, as we organize society to restrain random individual torture inclinations. We should expect the future to also organize to prevent random torture, including of revived cryonics patients.

Also, if there were millions of such revived people, they could coordinate to revive each other and to protect each other from torture. Torture really does seem a pretty minor issue here.


How Group Minds Differ

We humans have remarkable minds, minds more capable in many ways than those of any other animal, or of any artificial system so far created. Many give a lot of thought to the more capable artificial “super-intelligences” that we will likely create someday. But I’m more interested now in the “super-intelligences” that we already have: group minds.

Today, groups of humans together form larger minds that are in many ways more capable than individual minds. In fact, the human mind evolved mainly to function well in bands of 20-50 foragers, who lived closely for many years. And today the seven billion of us are clumped together in many ways into all sorts of group minds.

Consider a four-way classification:

  1. Natural – The many complex mechanisms we inherit from our forager ancestors enable us to fluidly and effectively manage small tightly-interacting group minds without much formal organization.
  2. Formal – The formal structures of standard organizations (i.e., those with “org charts”) allow much larger group minds for firms, clubs, and governments.
  3. Mobs – Loose informal communities structured mainly by simple gossip and status, sometimes called “mobs”, often form group minds on vast, even global, scales.
  4. Special – Specialized communities like academic disciplines can often form group minds on particular topics using less structure.

A quick web search finds that many embrace the basic concept of group minds, but I found few directly addressing this very basic question: how do group minds tend to differ from individual human minds? The answer to this seems useful in imagining futures where group minds matter even more than today.

In fact, future artificial minds are likely to be created and regulated by group minds, and in their own image, just as the modularity structure of software today usually reflects the organization structure of the group that made it. The main limit to getting better artificial minds later might be in getting better group minds before then.

So, how do group minds differ from individual minds? I can see several ways. One obvious difference is that, while human brains are very parallel computers, when humans reason consciously, we tend to reason sequentially. In contrast, large group minds mostly reason in parallel. This can make it a little harder to find out what they think at any one time.

Another difference is that while human brains are organized according to levels of abstraction, and devote roughly similar resources to different abstraction levels, standard formal organizations devote far fewer resources to higher levels of abstraction. It is hard to tell if mobs also suffer a similar abstract-reasoning deficit.

As mobs lack centralized coordination, it is much harder to have a discussion with a mob, or to persuade a mob to change its mind. It is hard to ask a mob to consider a particular case or argument. And it is especially hard to have a Socratic dialogue with a mob, wherein you ask it questions and try to get it to admit that different answers it has given contradict each other.

As individuals in mobs have weaker incentives regarding accuracy, mobs try less hard to get their beliefs right. Individuals in mobs instead have stronger incentives to look good and loyal to other mob members. So mobs are rationally irrational in elections, and we created law to avoid the rush-to-judgment failures of mobs. As a result, mobs more easily get stuck on particular socially-desirable beliefs.

When each person in the mob wants to show their allegiance and wisdom by backing a party line, it is harder for such a mob to give much thought to the possibility that its party line might be wrong. Individual humans, in contrast, are better able to systematically consider how they might be wrong. Such thoughts more often actually induce them to change their minds.

Compared to mobs, standard formal orgs are at least able to have discussions, engage arguments, and consider that they might be wrong. However, as these happen mostly via the support of top org people, and few people are near that top, this conversation capacity is quite limited compared to that of individuals. But at least it is there. However, such organizations also suffer from many known problems, such as yes-men and reluctance to pass bad news up the chain.

At the global level, one of the big trends over the last few decades is away from the formal org group minds of nations, churches, and firms, and toward the mob group mind of a world-wide elite, supported by mob-like expert group minds in academia, law, and media. Our world is thus likely to suffer more soon from mob mind inadequacies.

Prediction markets are capable of creating fast-thinking accurate group minds that consider all relevant levels of abstraction. They can even be asked questions, though not as fluidly and easily as can individuals. If only our mob minds didn’t hate them so much.


What Hypocrisy Feels Like

Our book The Elephant in the Brain argues that there are often big differences between the motives by which we sincerely explain our behavior, and the motives that more drive and shape that behavior. But even if this claim seems plausible to you in the abstract, you might still not feel fully persuaded, if you find it hard to see this contrast clearly in a specific example.

That is, you might want to see what hypocrisy feels like up close. To see the two different kinds of motives in you in a particular case, and see that you are inclined to talk and think in terms of the first, but see your concrete actions being more driven by the second.

If so, consider the example of utopia, or heaven. When we talk about an ideal world, we are quick to talk in terms of the usual things that we would say are good for a society overall. Such as peace, prosperity, longevity, fraternity, justice, comfort, security, pleasure, etc. A place where everyone has the rank and privileges that they deserve. We say that we want such a society, and that we would be willing to work and sacrifice to create or maintain it.

But our allegiance to such a utopia is paper thin, and is primarily to a utopia described in very abstract terms. Our abstract thoughts about utopia generate very little emotional energy in us, and our minds quickly turn to other topics. In addition, as soon as someone tries to describe a heaven or utopia in vivid concrete terms, we tend to be put off or repelled. Even if such a description satisfies our various abstract good-society features, we find reasons to complain. No, that isn’t our utopia, we say. Even if we are sure to go to heaven if we die, we don’t want to die.

And this is just what near-far theory predicts. Our near and far minds think differently, with our far minds presenting a socially desirable image to others, and our near minds more in touch with what we really want. Our far minds are more in charge when we are prompted to think abstractly and hypothetically, but our near minds are more in charge when we privately make real concrete choices.

Evolved minds like ours really want to win the evolutionary game. And when there are status hierarchies tied to evolutionary success, we want to rise in those hierarchies. We want to join a team, and help that team win, as long as that team will then in turn help us to win. And we see all this concretely in the data; we mainly care about our social rank:

The outcome of life satisfaction depends on the incomes of others only via income rank. (Two followup papers find the same result for outcomes of psychological distress and nine measures of health.) They looked at 87,000 Brits, and found that while income rank strongly predicted outcomes, neither individual (log) income nor an average (log) income of their reference group predicted outcomes, after controlling for rank (and also for age, gender, education, marital status, children, housing ownership, labor-force status, and disabilities). (more)
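
In regression terms, that finding says: when life satisfaction is regressed on income rank together with own log income and reference-group log income (plus controls), only rank retains predictive power. Here is a sketch of that specification on synthetic data; the variable names are hypothetical, not the paper’s:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for the 87,000 Brits; hypothetical variable names.
    rng = np.random.default_rng(0)
    n = 5_000
    income = rng.lognormal(mean=10, sigma=0.5, size=n)
    df = pd.DataFrame({
        "log_income": np.log(income),
        "income_rank": pd.Series(income).rank(pct=True),  # rank within sample
        "log_ref_income": np.log(income) + rng.normal(0, 0.3, n),  # reference group
        "age": rng.integers(18, 80, size=n),
    })
    # Generate satisfaction that depends only on rank, per the paper's finding.
    df["life_satisfaction"] = 5 + 2 * df["income_rank"] + rng.normal(0, 1, n)

    model = smf.ols("life_satisfaction ~ income_rank + log_income"
                    " + log_ref_income + age", data=df).fit()
    print(model.summary())  # income_rank significant; the income terms are not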

But this isn’t what we want to think, or to say to others. With our words, and with other very visible cheap actions, we want to be pro-social. That is, we want to say that we want to help society overall. Or at least to help our society. While we really crave fights by which we might rise relative to others, we want to frame those fights in our minds and words as fighting for society overall, such as by fighting for justice against the bad guys.

And so when the subject of utopia comes up, framed abstractly and hypothetically, we first react with our far minds: we embrace our abstract ideals. We think we want them embodied in a society, and we think we want to work to create that society. And our thoughts remain this way as long as the discussion remains abstract, and we aren’t at much risk of actually incurring substantial supporting personal costs.

But the more concrete the discussion gets, and the closer to asking for concrete supporting actions, the more we recoil. We start to imagine a real society in detail wherein we don’t see good opportunities for our personal advancement over others. And where we don’t see injustices which we could use as excuses for our fights. And our real motivations, our real passions, tell us that they have reservations; this isn’t the sort of agenda that we can get behind.

So there it is: your hypocrisy up close and personal, in a specific case. In the abstract you believe that you like the idea of utopia, but you recoil at most any concrete example. You assume you have a good pro-social reason for your recoil, and will mention the first candidate that comes to your head. But you don’t have a good reason, and that’s just what hypocrisy feels like. Utopia isn’t a world where you can justify much conflict, but conflict is how you expect to win, and you really really want to win. And you expect to win mainly at others’ expense. That’s you, even if you don’t like to admit it.


Coming Commitment Conflicts

If competition, variation, and selection long continues, our worlds will become dominated by artificial creatures who take a long view of their future, and who see themselves as directly and abstractly valuing having more distant descendants. Is there anything more we robustly predict about them?

Our evolving descendants will form packages wherein each part of the package promotes reproduction of other package parts. So a big question is: how will they choose their packages? While some package choices will become very entrenched, like the different organs in our bodies, other choices may be freer to change at the last minute, like political coalitions in democracies. How will our descendants choose such coalition partners?

One obvious strategy is to make deals with coalition partners to promote each other’s long term reproduction. Some degree of commitment is probably optimal, and many technologies of commitment will likely be available. But note: it is probably possible to over-commit, by committing too wide a range of choices over too long a time period with too many partners, and to under-commit, committing too few choices over too short a time period with too few partners. Changed situations call for changed coalitions. Thus our descendants will have to think carefully about how strongly and long to commit on what with whom.

But is it even possible to enforce deals to promote the reproduction of a package? Sure, the amount of long-term reproduction of a set of features or a package subset seems a clearly measurable outcome, but how could such a team neutrally decide which actions best promote that overall package? Wouldn’t the detailed analyses that each package part offers on such a topic tend to be biased to favor those parts? If so, how could they find a neutral analysis to rely on?

My work on futarchy lets me say: this is a solvable problem. Because we know that futarchy would solve this. A coalition could neutrally but expertly decide what actions would promote their overall reproduction by choosing a specific ex-post-numeric-measure of their overall reproduction, and then creating decision markets to advise on each particular decision where concrete identifiable options can be found.
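
For concreteness, here is a minimal sketch of that decision-market logic, with invented option names and prices: one conditional market per option estimates the coalition’s chosen reproduction measure given that option, the coalition precommits to the highest-priced option, and trades in the other markets are called off:

    # Decision-market selection: one conditional market per option, each
    # pricing the coalition's expected reproduction measure given that
    # the option is taken. All names and numbers here are invented.
    conditional_prices = {
        "ally_with_team_A": 1.8,   # expected long-run copies of the package
        "ally_with_team_B": 2.4,
        "stay_independent": 1.1,
    }

    # The coalition commits in advance to whichever action its markets
    # rate highest; trades in the losing markets are voided, which is
    # what keeps each conditional estimate neutral and unbiased.
    decision = max(conditional_prices, key=conditional_prices.get)
    print(f"Chosen action: {decision}")   # ally_with_team_B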

There may be other ways to do this, and some ways may even be better than decision markets. But it clearly is possible for future coalitions to neutrally and expertly decide what shared actions would promote their overall reproduction. So as long as they can make such actions visible to something like decision markets, coalitions can reliably promote their joint reproduction.

Thus we can foresee an important future activity: forming and reforming reproduction coalitions.


On Evolved Values

Biological evolution selects roughly for creatures that do whatever it takes to have more descendants in the long run. When such creatures have brains, those brains are selected for having supporting habits. And to the extent that such brains can be described as having beliefs and values that combine into actions via expected utility theory, then these beliefs and values should be ones which are roughly behaviorally-equivalent to the package of having accurate beliefs, and having values to produce many descendants (relative to rivals). Equivalent at least within the actual environments in which those creatures were selected.
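
In symbols, the claim is that selection favors agents whose chosen action, argmax over actions of the sum over states of P(state) × U(action, state), matches the choice made by accurate beliefs paired with descendant-counting values, at least in the selecting environment. A toy illustration, with an invented environment and payoffs:

    # Toy expected-utility agent: picks the action maximizing
    # sum_s P(s) * U(action, s). Selection favors any (beliefs, values)
    # pair whose choices match "accurate beliefs + count-descendants
    # values" in the actual environment. All payoffs are invented.
    states = ["drought", "plenty"]
    accurate_beliefs = {"drought": 0.3, "plenty": 0.7}

    def descendants(action, state):
        # Long-run descendant counts per (action, state).
        table = {("hoard", "drought"): 3, ("hoard", "plenty"): 4,
                 ("expand", "drought"): 1, ("expand", "plenty"): 6}
        return table[(action, state)]

    def best_action(beliefs, utility):
        return max(["hoard", "expand"],
                   key=lambda a: sum(beliefs[s] * utility(a, s) for s in states))

    print(best_action(accurate_beliefs, descendants))  # expand: 4.5 > 3.7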

Humans have unusually general brains, with which we can think unusually abstractly about our beliefs and values. But so far, we haven’t actually abstracted our values very far. We instead have a big mess of opaque habits and desires that implicitly define our values for us, in ways that we poorly understand. Even though what evolution has been selecting for in us can in fact be described concisely and effectively in an abstract way.

Which leads to one of the most disturbing theoretical predictions I know: with sufficient further evolution, our descendants are likely to directly and abstractly know that they simply value more descendants. In diverse and varying environments, such a simpler more abstract representation seems likely to be more effective at helping them figure out which actions would best achieve that value. And while I’ve personally long gotten used to the idea that our distant descendants will be weird, to (the admittedly few) others who care about the distant future, this vision must seem pretty disturbing.

Oh there are some subtleties regarding whether all kinds of long-term descendants get the same weight, to what degree such preferences are non-monotonic in time and number of descendants, and whether we care the same about risks that are correlated or not across descendants. But those are details: evolved descendants should more simply and abstractly value more descendants.

This applies whether our descendants are biological or artificial. And it applies regardless of the kind of environments our descendants face, as long as those environments allow for sufficient selection. For example, if our descendants live among big mobs, who punish them for deviations from mob-enforced norms, then our descendants will be selected for pleasing their mobs. But as an instrumental strategy for producing more descendants. If our descendants have a strong democratic world government that enforces rules about who can reproduce how, then they will be selected for gaining influence over that government in order to gain its favors. And for an autocratic government, they’d be selected for gaining its favors.

Nor does this conclusion change greatly if the units of future selection are larger than individual organisms. Even if entire communities or work teams reproduce together as single units, they’d still be selected for valuing reproduction, both of those entire units and of component parts. And if physical units are co-selected with supporting cultural features, those total physical-plus-cultural packages must still tend to favor the reproduction of all parts of those packages.

Many people seem to be confused about cultural selection, thinking that they are favored by selection if any part of their habits or behaviors is now growing due to their actions. But if, for example, your actions are now contributing to a growing use of the color purple in the world, that doesn’t at all mean that you are winning the evolutionary game. If wider use of purple is not in fact substantially favoring the reproduction of the other elements of the package by which you are now promoting purple’s growth, and if those other elements are in fact reproducing less than their rivals, then you are likely losing, not winning, the evolutionary game. Purple will stop growing and likely decline after those other elements sufficiently decline.

Yes of course, you might decide that you don’t care that much to win this evolutionary game, and are instead content to achieve the values that you now have, with the resources that you can now muster. But you must then accept that tendencies like yours will become a declining fraction of future behavior. You are putting less weight on the future compared to others who focus more on reproduction. The future won’t act like you, or be as much influenced by acts like yours.

For example, there are “altruistic” actions that you might take now to help out civilization overall, such as building a useful bridge or finding some useful invention. But if by such actions you hurt the relative long-term reproduction of many or most of the elements that contributed to your actions, then you must know you are reducing the tendency of descendants to do such actions. Ask: is civilization really better off with more such acts today, but fewer such acts in the future?

Yes, we can likely identify some parts of our current packages which are hurting, not helping, our reproduction. Such as genetic diseases. Or destructive cultural elements. It makes sense to dump such parts of our reproduction “teams” when we can identify them. But that fact doesn’t negate the basic story here: we will mainly value reproduction.

The only way out I see is: stop evolution. Stop, or slow to a crawl, the changes that induce selection of features that influence reproduction. This would require a strong civilization-wide government, and it only works until we meet the other grabby aliens. Worse, in an actually changing universe, such stasis seems to me to seriously risk rot. Leading to a slowly rotting civilization, clinging on to its legacy values but declining in influence, at least relative to its potential. This approach doesn’t at all seem worth the cost to me.

But besides that, have a great day.

Added 7p: There may be many possible equilibria, in which case it may be possible to find an equilibrium in which maximizing reproduction also happens to maximize some other desired set of values. But it may be hard to maintain the context that allows that equilibrium over long time periods. And even if so, the equilibrium might itself drift away to support other values.

Added 8Dec: This basic idea was expressed 14 years ago.
