Monthly Archives: May 2018

Two Types of Future Filters

In principle, any piece of simple dead matter in the universe could give rise to simple life, then to advanced life, then to an expanding visible civilization. In practice, however, this has not yet happened anywhere in the visible universe. The “great filter” is the sum total of all the obstacles that prevent this transition, and our observation of a dead universe tells us that this filter must be enormous.

Life and humans here on Earth have so far progressed some distance along this filter, and we now face the ominous question: how much still lies ahead? If the future filter is large, our chances of starting an expanding visible civilization are slim. While being interviewed on the great filter recently, I was asked what I see as the most likely future filter. And in trying to answer, I realized that I have changed my mind.

The easiest kind of future filter to imagine is a big external disaster that kills all life on Earth. Like a big asteroid or nearby supernova. But when you think about it, it is very hard to kill all life on Earth. Given how long Earth has gone without such an event, the odds of it happening in the next million years seem quite small. And yet a million years seems plenty of time for us to start an expanding visible civilization, if we were going to do that.

Yes, compared to killing all life, we can far more easily imagine events that destroy civilization, or kill all humans. But the window for Earth to support life apparently extends another 1.5 billion years into our future. As that window duration should roughly equal the typical duration between great filter steps in the past, it seems unlikely that any such steps have occurred since a half billion years ago, when multicellular life started becoming visible in the fossil record. For example, the trend toward big brains seems steady enough over that period to make big brains unlikely as a big filter step.

Thus even a disaster that kills nearly all multicellular life on Earth seems unlikely to push life back past the most recent great filter step. Life would still likely retain sex, eukaryotes, and much more. And with 1.5 billion years to putter, life seems likely to revive multicellular animals, big brains, and something as advanced as humans. In which case there would be a future delay of advanced expanding life, but not a net future filter.

Yes, this analysis is regarding “try-try” filter steps, where the world can just keep repeatedly trying until it succeeds. In principle there can also be “first or never” steps, such as standards that could in principle go many ways, but which lock in forever once they pick a particular way. But it still seems hard to imagine such steps in the last half billion years.

So far we’ve talked about big disasters due to external causes. And yes, big internal disasters like wars are likely to be more frequent. But again the problem is: a disaster that still leaves enough life around could evolve advanced life again in 1.5 billion years, resulting in only a delay, not a filter.

The kinds of disasters we’ve been considering so far might be described as “too little coordination” disasters. That is, you might imagine empowering some sort of world government to coordinate to prevent them. And once such a government became possible, if it were not actually created or used, you might blame such disasters in part on our failing to empower a world government to prevent them.

Another class of disasters, however, might be described as “too much coordination” disasters. In these scenarios, a powerful world government (or equivalent global coalition) actively prevents life from expanding visibly into the universe. And it continues to do so for as long as life survives. This government might actively prevent the development of technology that would allow such a visible expansion, or it might allow such technology but prevent its application to expansion.

For example, a world government limited to our star system might fear becoming eclipsed by interstellar colonists. It might fear that colonists would travel so far away as to escape the control of our local world government, and then they might collectively grow to become more powerful than the world government around our star.

Yes, this is not a terribly likely scenario, and it does seem hard to imagine such a lockdown lasting for as long as does advanced civilization capable of traveling to other stars. But then scenarios where all life on Earth gets killed off also seem pretty unlikely. It isn’t at all obvious to me that the too little coordination disasters are more likely than the too much coordination disasters.

And so I conclude that I should be in-the-ballpark-of similarly worried about both categories of disaster scenarios. Future filters could result from either too little or too much coordination. To prevent future filters, I don’t know if it is better to have more or less world government.


More Than Death, Fear Decay

Most known “systems” decay, rot, age, and die. We usually focus on the death part, but the more fundamental problem is decay (a.k.a. rotting, aging). Death is almost inevitable, as immortality is extremely difficult to achieve. Systems that don’t decay can still die; we sometimes see systems where the chance of death stays constant over time. But for most complex systems, the chance of death rises with time, due to decay.
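To make the death-versus-decay distinction concrete: a system that doesn’t decay faces a constant per-period chance of death, while a decaying system faces a chance of death that rises with age. Here is a minimal sketch of the two resulting survival curves (my own illustration, with made-up hazard numbers, not anything from the post):

```python
def survival(hazard, years):
    """Probability a system is still alive after each year, given a per-year hazard function."""
    alive, curve = 1.0, []
    for t in range(years):
        alive *= 1.0 - hazard(t)  # survive year t with probability 1 - hazard(t)
        curve.append(alive)
    return curve

# Non-decaying system: death chance stays constant over time (memoryless).
constant = survival(lambda t: 0.02, 100)

# Decaying system: death chance rises with age.
rising = survival(lambda t: 0.001 * t, 100)
```

Early on the decaying system looks safer, but its survival curve eventually falls well below the constant-hazard one; that accelerating fall is the signature of aging.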

Many simple physical systems, like chairs, decay because the materials of their parts decay. Such systems can often be rejuvenated by replacing those materials. More generally, simple modular systems can be rejuvenated by replacing the modular parts that decay. For example, it is possible to spend enough to maintain most cars and buildings indefinitely in a nearly original condition, though we rarely see this as worth the bother.

Complex adaptive systems (CAS), such as firms, have many parts in complex relations, relations that change in an attempt to adapt to changing conditions. When a CAS changes its design and structure to adapt, however, this rarely results in modular sub-designs that can be swapped out. Alas, the designs of most known CAS decay as they adapt. In biological organisms this is called “aging”, in software it is called “rot”, and in product design this is called the “innovator’s dilemma”. Human brains change from having “fluid” to “crystallized” intelligence, and machine learning systems trained in one domain usually find it harder to learn quite different domains. We also see aging in production plans, firms, empires, and legal systems. I don’t know of data on whether things like cities, nations, professions, disciplines, languages, sports, or art genres age. But it isn’t obvious that they don’t also decay.

It is not just that it is easier to create and train new CAS, relative to rejuvenating old ones. It seems more that we just don’t know how to prevent rot at any remotely reasonable cost. In software, designers often try to “refactor” their systems to slow the process of aging. And sometimes such designers report that they’ve completely halted aging. But these exceptions are mostly in systems that are small and simple, with stable environments, or with crazy amounts of redesign effort.

However, I think we can see at least one clear exception to this pattern of rotting CAS: some generalist species. If the continually changing environment of Earth caused all species to age at similar rates, then over the history of life on Earth we would see a consistent trend toward a weaker ability of life to adapt to changing conditions. Eventually life would lose its ability to sufficiently adapt, and life would die out. If some kinds of life could survive in a few very slowly changing garden environments, then eventually all life would descend from the stable species that waited unchanging in those few gardens. The longer it had been since a species had descended from a stable garden species, the faster that species would die out.

But that isn’t what we see. Instead, while species that specialize to particular environments do seem to go extinct more easily, generalist species seem to maintain their ability to adapt across eons, even after making a great many adaptations. Somehow, the designs of generalist species do not seem to rot, even though typical organisms within that species do rot. How do they do that?

It is possible that biological evolution has discovered some powerful design principles of which we humans are still ignorant. If so, then eventually we may learn how to cheaply make CAS that don’t rot. But in this case, why doesn’t evolution use those anti-rot design principles to create individual organisms that don’t decay or age? Evolution seems to judge it much more cost effective to make individual organisms that rot. A more likely hypothesis is that there is no cheap way to prevent rot; evolution has just continually paid a large cost to prevent rot. Perhaps early on, some species didn’t pay this cost, and won for a while. But eventually they died from rot, leaving only non-rotting species to inherit the Earth. It seems there must be some level in a system that doesn’t rot, if it is to last over the eons, and selection has ensured that the life we now see has such a level.

If valid, this perspective suggests a few implications for the future of life and civilization. First, we should seriously worry about which aspects of our modern civilization system are rotting. Human culture has lasted a million years, but many parts of our modern world are far younger. If the first, easiest version of a system that we can find to do something is typically a rotting system, and if it takes a lot more work to find a non-rotting version, should we presume that most of the new systems we have are rotting versions? Farming-era empires consistently rotted; how sure can we be that our world-wide industry-era empire isn’t similarly rotting today? We may be accumulating a technical debt that will be expensive to repay. Law and regulation seem to be rotting; should we try to induce a big refactoring there? Should we try to create and preserve contrarian subcultures or systems that are less likely to crash along with the dominant culture and system?

Second, we should realize that it may be harder than we thought to switch to a non-biological future. We humans are now quite tied to the biosphere, and would quickly die if biology were to die. But we have been slowly building systems that are less closely tied to biology. We have been digging up materials in mines, collecting energy directly from atoms and the Sun, and making things in factories. And we’ve started to imagine a future where the software in our brains is copied into factory-made hardware, i.e., ems, joined there by artificial software. At which point our descendants might no longer depend on biological systems. But replacing biological systems with our typically rotting artificial systems may end badly. And making artificial systems that don’t rot may be a lot more expensive and time-consuming than we’ve anticipated.

Some imagine that we will soon discover a simple powerful general learning algorithm, which will enable us to make a superintelligence, a super-smart hyper-consistent eternal mind with no internal conflicts and an arbitrary ability to indefinitely improve itself, make commitments, and preserve its values. This mind would then rule the universe forever more, at least until it met its alien equivalent. I expect that these visions have not sufficiently considered system rot, among other issues.

In my first book I guessed that during the age of em, individual ems would become fragile over time, and after a few subjective centuries they’d need to be replaced by copies of fresh scans of young humans. I also guessed that eventually it would become possible to substantially redesign brains, and that the arrival of this ability might herald the start of the next age after the age of em. If this requires figuring out how to make non-rotting versions of these new systems, the age of em might last even longer than one would otherwise guess.


Why Not Thought Crime?

I was alarmed to find a quotation supporting child rapists falsely attributed to me & going viral on Twitter. … messages shaming me for supporting child rapists. … I tweeted a clarification about the falsehood to no avail. (more)

Galileo’s Middle Finger is one American’s eye-opening story of life in the trenches of scientific controversy. … Dreger began to realize how some fellow progressive activists were employing lies and personal attacks to silence scientists whose data revealed uncomfortable truths about humans. In researching one such case, Dreger suddenly became the target of just these kinds of attacks. (more)

In 1837 Abraham Lincoln wrote about lynching and “the increasing disregard for law which pervades the country—the growing disposition to substitute the wild and furious passions in lieu of the sober judgment of courts, and the worse than savage mobs for the executive ministers of justice”. (more)

For a million years, humans lived under mob rule. We gossiped about rule violations, and then implemented any verdicts as a mob. Mob rule worked well enough in forager bands of population 20-50, but less well in farming era village areas of population 300-3000, and it works even worse today. Instead of a single unified conversation around a campfire, where everyone could be heard, larger mob conversations fragment into many separated smaller conversations. As the accused doesn’t have time or access to defend themselves in these many discussions, most in the mob hear only voices other than the accused’s. So mob rule comes down to whether most others are inclined to speak well or ill of the accused. And, alas, for an accused that many don’t like, mob members are often more eager to display personal outrage at anyone who might do what was accused, than they are to determine if the accused was actually guilty.

And so we developed law. When someone was accused of a violation, a legal authority authorized an open debate between the accused and a focal accuser. While such debates had many flaws, they had the great virtue of giving substantial and roughly equal time to an accuser and the accused. Where a mob might accept false accusations and false claims of innocence because they are not willing to listen to long detailed explanations, law listens more, and thus can eliminate many mistaken conclusions. Today, when an official prosecutor is assigned the task of convicting as many criminals as possible, the fact that this prosecutor declines to prosecute a particular accusation is often reasonably taken as exoneration.

However, we still use mob rule today for people accused of things that are widely socially disapproved, but not illegal. While the mob’s verdict is not enforced directly via law, punishments can still be severe, such as loss of jobs and friends, and even illicit violence. Which raises the obvious question: why not make mob-disapproved behaviors illegal, so that law can overcome the problem of error-prone fragmented mob conversations? If the official legal punishment were set to be comparable to what would have been the mob punishment, isn’t it a net win to use a more accurate process of determining guilt?

You might think that mobs shouldn’t be censuring so many things, but unless you are willing to more actively discourage such mobs, the real choice may be between mob and legal adjudication. Legal adjudication of an accusation does seem to cut the eagerness for mob rule on it, even if this doesn’t always eliminate mob activity. You might say that law has costs, and so should be reserved for big enough harms. But obviously mobs think these acts are big enough to bother to organize to censure them. The cost of mounting mob censure seems at least comparable to the cost of using law. You might note that accusations are often hard to prove, but we make many things illegal that are hard to prove. If law can’t prove an accusation well enough to declare guilt, why trust an even more error-prone mob process to determine guilt? If you think that the errors of mobs declaring guilt are tolerable even when the law refuses to declare guilt, then you think law demands overly strong proofs. If so, we should change legal standards of proof to fix that.

It makes more sense to use mobs when society is honestly split into groups that differ on which acts should be approved or disapproved. For example, if one big group thinks people should be praised for promoting economic growth, while another similar sized group thinks people should be censured for promoting economic growth, then we may not want our legal system to take a side in this dispute. But mob rule today often censures people for things of which almost everyone disapproves. Like strong racism or sexism, or promoting rape. If over 99 percent of citizens disapprove of some behavior, maybe it is time to introduce official legal sanctions against that behavior.

At least twice in my life I’ve been subject to substantial mob rule censure. Fifteen years ago my DARPA-funded project was publicly accused by two senators of encouraging people to bet on the deaths of allies; the next morning the Secretary of Defense announced before Congress that my project was cancelled. In the last month, I was accused of promoting rape, and widely censured for that, receiving many hostile messages and threats, and having people and groups cut off public association with me.

In both cases I’m confident that law-like debate would have exonerated me. My DARPA-funded project, Policy Analysis Market, was going to have bets on geopolitical instability in the Mideast, not terror attacks. (Over 500 media articles mentioned the project in the following years, and articles that knew more liked it more.) And recently I asked why there is so little overlap between those who seek more income and sex redistribution. I didn’t advocate either one, and “redistribute” just means “change the distribution” (look it up in any dictionary); there are as many ways to change the distribution of sex without rape as there are to change the distribution of income without using guillotines like in the French revolution. (Eight years ago I also compared another bad thing to rape, to say how bad that other thing might be, not to say rape is good.)

I would personally have been better off had these things been thought crimes, as I could have then more effectively defended myself against false accusations. And I’ve learned of many other cases of mob rule punishing people based on false accusations. So I am led to wonder: why not thought crime? It might not be the best of all possible worlds, but couldn’t it be better than the mob rule we now use?

Added 7a: When mobs have mattered, the choice has often been between sufficiently suppressing them or creating laws that substitute for what they would have done. See some history.


Revival Prizes

Cryonics is the process of having your body frozen when current medicine gives up on you, and calls you “dead”, in the hope of being revived later using much better future medicine. Even though cryonics has been available for many decades, and often receives free international publicity, only ~3000 people have signed up as customers, and only ~400 people have been frozen. I’m one of those customers. While many customers hope to have their current physical body fixed and restored to youthful health, I’m mainly hoping to be revived as an em, which seems to me a vastly easier (if still very hard) task.

Imagine you plan to become a cryonics patient, and hope for an eventual successful revival. Along this path many important decisions will need to be made: level of financial investment into the whole process, timing and method of preservation, method and place of storage, strategies of financial asset investment, and final timing and method of revival and reintegration into society. Through most of this process you will not be available to make key decisions, though after success you might be able to give an evaluation of the choices that were made on your behalf. So you will need to delegate many of these choices to agents who make these choices for you. How can you set up your relation to such agents to give them the best possible incentives to make good choices?

Several US states allow you to deposit money into a “trust”, which then can grow indefinitely by reinvestment without paying taxes on investment gains, even after you are officially dead. The usual legal process is to assign an “administrator” to manage the trust. Usually, you write down your preferences in words, and then pay this agent a constant percentage of your current assets to follow your instructions. In theory they do what you wanted out of fear of being sued. Unfortunately, it’s hard to prove a violation, and few would have the incentive to bother. This gives your agent the incentive to minimize all spending except reinvestment of the assets, or to divert spending or investments to parties who pay them a kickback. Either way, not a great system.

Here’s an improvement. Pay the agent only some fraction of the money left over in the fund after you are successfully revived. A prize for revival. Then they never get anything until you get what you wanted. Of course this requires some legal way to determine that you have in fact been revived. Instead of, for example, being replaced with some crude simulation of you. This approach seems better than the previous one, but there’s still the problem that this prize incentive makes them want to wait too long. Why risk any chance of failure, and why pay a high cost for revival, if you can just wait longer to raise the chance of success and lower the cost? So this agent will get it done eventually, but may wait too long. And they might not revive you the way you wanted.

One simple fix is that, once you are revived, you rate the whole process on a 0 to 100 scale, and your agent only gets that percentage of the max possible prize. (Maybe also guarantee that they get some minimum fraction.) The rest of the prize can’t go to you, or your incentives are bad. So the rest of the prize would have to go to some specified charity, perhaps a pool of assets to help all other cryonics customers still not yet revived. Your agent will then try to make choices so that you will rate them highly after you are revived. You can expect them to choose a revival process where they give themselves advantages in convincing you that they did a good job. Perhaps even mind control. So steel yourself to be skeptical. They might also discreetly threaten to “accidentally” lose you if you don’t pay them the full prize. So beware of that.

You might be able to do just a bit better by committing to a schedule by which the maximum prize your agent could win declines as a fraction of the total assets remaining after revival. Such a decline would encourage the agent to not wait too long to revive you. But if you don’t know the relevant rates of future change, how can you robustly define such a prize fraction decline? One robust measure available is the number of people who have been successfully revived so far. Your schedule of decline might not even start until at least one person has been revived, and then decline as some function of the number revived so far. Perhaps the function could be a simple power law. So you could specify how eager you are to be one of the first people revived.

So here’s my final proposal. You choose how much money to deposit in a trust, you write down your preferences as best you know them now, and you pick an agent who agrees to manage your trust, and make key storage and revival decisions. You agree to pay them some percent of current assets per year (preferably zero), and some max fraction of final remaining assets after revival to pay them as a prize. This max fraction follows some simple declining function of the number of people revived so far at that time. Perhaps a power law. And you have the discretion when revived to pay them less than this max value, with the remainder going to a specified charity. You initially choose the key parameters of this system to reflect your personal preferences, as best you can.
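As a toy model of this proposal (the parameter values and the exact decline schedule are made up for illustration, not part of the proposal): the agent’s maximum prize fraction declines as a power law in the number of people revived before you, with a guaranteed floor, and your 0-100 rating at revival scales the actual payout, with the remainder going to a specified charity:

```python
def agent_payout(assets_at_revival, n_revived_before, rating,
                 base_fraction=0.20, decline_power=0.5, min_fraction=0.02):
    """Split remaining trust assets between agent and charity at revival.

    assets_at_revival: trust assets left after revival costs.
    n_revived_before:  count of people successfully revived before you.
    rating:            your 0-100 evaluation of the agent's choices.
    """
    # Max prize fraction declines as a power law in revivals so far,
    # discouraging the agent from waiting too long to revive you.
    max_fraction = base_fraction / (1 + n_revived_before) ** decline_power
    max_fraction = max(max_fraction, min_fraction)  # guaranteed floor
    max_prize = max_fraction * assets_at_revival

    # Your rating scales the payout; the rest goes to a specified charity,
    # not back to you (which would distort your incentive to rate honestly).
    agent_prize = (rating / 100) * max_prize
    charity = max_prize - agent_prize
    return agent_prize, charity
```

So, under these illustrative numbers, an agent who revives you first and whom you rate at 100 keeps the full 20% base fraction, while one who waits until 99 others have been revived and earns a rating of 80 gets 0.8 times the 2% floor, with the other 20% of that floor going to charity.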

This is of course far from perfect. Problems remain, such as of kickbacks, theft, fake revival, and mind control. So there could be a place for a larger encompassing organization to watch out for and avoid such problems. And to publish stats on revivals and attempts so far. This larger organization could approve the basic range of reasonable options from which agents could choose at any one time, and have extra powers to monitor and overrule rogue agents. But it should mostly defer to the judgements of individual agents.

I can imagine a futarchy-based variation, where the “agent” is a pool of speculators who bet on shares of the final prize, conditional on making particular choices. This would cut the problem of random variation in the quality and even sanity of individual agents. But I can’t claim that futarchy is well enough tested now to make this a reasonable option if you are making these choices right now. However, I’d love to help a group do such testing, to see if it can become a viable option sooner.

Added 10:30a: It could also make sense to make your declining prize fraction function depend on the ratio of successful revivals so far to attempts that fail so badly as to make future revival seem impossible.


Radical Markets

In 1997, I got my Ph.D. in social science from Caltech. The topic that drew me into grad school, and much of what I studied, was mechanism and institution design: how to redesign social practices and institutions. Economists and related scholars know a lot about this, much of which is useful for reforming many areas of life. Alas, the world shows little interest in these reforms, and I’ve offered our book The Elephant in the Brain: Hidden Motives in Everyday Life, as a partial explanation: most reforms are designed to give us more of what we say we want, and at some level we know we really want something else. While social design scholars would do better to work more on satisfying hidden motives, there’s still much useful in what they’ve already learned.

Oddly, most people who say they are interested in radical social change don’t study this literature much, and people in this area don’t much consider radical change. Which seems a shame; these tools are a good foundation for such efforts, and the topic of radical change has long attracted wide interest. I’ve tried to apply these tools to consider big change, such as with my futarchy proposal.

I’m pleased to report that two experts in social design have a new book, Radical Markets: Uprooting Capitalism and Democracy for a Just Society:

The book reveals bold new ways to organize markets for the good of everyone. It shows how the emancipatory force of genuinely open, free, and competitive markets can reawaken the dormant nineteenth-century spirit of liberal reform and lead to greater equality, prosperity, and cooperation. … Only by radically expanding the scope of markets can we reduce inequality, restore robust economic growth, and resolve political conflicts. But to do that, we must replace our most sacred institutions with truly free and open competition—Radical Markets shows how.

While I applaud the ambition of the book, and hope to see more like it, the five big proposals of the book vary widely in quality. They put their best foot forward, and it goes downhill from there.


Skip Value Signals

Consider the following two polls I recently held on Twitter:

As writers, these respondents think that readers won’t engage their arguments for factual claims on policy relevant topics unless shown that the author shares the values of their particular political faction. But as readers they think they need no signal of shared values to convince them to engage such an argument. If these readers and writers are the same group, then they believe themselves to be hypocritical. They uphold an ideal that value signals should not be needed, but they do not live up to this ideal.

This seems to me part of a larger ideal worth supporting. The ideal is of a community of conversation where everything is open for discussion, people write directly and literally, and people respond mostly analytically to the direct and literal meanings of what people say. People make direct claims and explicit arguments, and refer to dictionaries for disputes about what words mean. There’s little need for or acceptance of discussion of what people really meant, and any such claims are backed up by direct explicit arguments based on what people actually and directly said. Even when you believe there is subtext, your text should respond to their text, not to their subtext. Autists may be especially at home in such a community, but many others can find a congenial home there.

A simple way to promote these norms is to skip value signals. Just make your claims, but avoid adding extra signals of shared values. If people who respond leap to the conclusion that you must hold opposing values, calmly correct them, pointing out that you neither said nor implied such a thing. Have your future behavior remain consistent with that specific claim, and with the larger claim that you follow these norms. Within a context, the more who do this, and the more who support them, then the more reluctant others will become to publicly accuse people of saying things that they did not directly say. Especially due to missing value signals.

Of course this is unlikely to become the norm in all human conversation. But it can be the norm within particular intellectual communities. Being a tenured professor who has and needs little in the way of grants or other institutional support, I am in an especially strong position to take such a stance, to promote these norms in my conversation contexts. To make it a bit easier for others to follow. And so I do. You are welcome.


Why Economics Is, And Should Be, Creepy

Hostile questioners tried to trap Jesus into taking an explicit and dangerous stand on whether Jews should or should not pay taxes to the Roman authorities. … Jesus first called them hypocrites, and then asked one of them to produce a Roman coin that would be suitable for paying Caesar’s tax. One of them showed him a Roman coin, and he asked them whose head and inscription were on it. They answered, “Caesar’s,” and he responded: “Render therefore unto Caesar the things which are Caesar’s; and unto God the things that are God’s”. (more)

Long ago, Jesus avoided political entanglements by appealing to a key distinction long made between “official” worlds like work, commerce, war, governance, and law, and “personal” worlds like friends, lovers, parenting, hobbies, religion, conversation, and art. Economists have long been identified with that official world, of work and money and material things. But over the last century economists have increasingly moved outside that official world, looking at mating, conversation, and much more. This has often irritated academics who study personal worlds; they’ve seen economists as having “imperialist” ambitions to “conquer” other academic areas.

Economists studying personal worlds have also bothered a public that hears of economic concepts applied to personal worlds, but using words originally associated with official worlds. For example, “marriage markets”, “dollar value of a life”, “price of fame,” “below optimal crime”, or my recent “sex redistribution”. This can seem to violate common norms separating official and personal worlds, which I’ll call “world norms”, such as that money should stay out of friendship, or governments stay out of conversation. And this can make economics seem “creepy.”
