# Tag Archives: Future

## Three Futures

Recently, 1539 people responded to this poll:

This pattern looks intriguingly bimodal. Are there in some sense two essentially different stories about the future? So I did a more detailed poll. Though only ~95 responded (thank you!), that is enough to reveal an apparently trimodal pattern:

Respondents were asked to estimate the number of future creatures that most people today would call “human”, and also the number who would likely call themselves “human”, even if we today might disagree. The y-axis here is in log10 units. In those units, today’s world population is 9.89, and the number of humans who have ever lived is ~11.03. So the highest number here, 20, is larger than the square of the number alive today.
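To make these log10 units concrete, here is a quick arithmetic check (my own illustration; the 9.89 and 11.03 figures are those given above):

```python
# Log10 units: a value of x stands for 10**x creatures.
world_pop_log = 9.89    # log10 of current world population
ever_lived_log = 11.03  # log10 of all humans who have ever lived

print(round(10**world_pop_log / 1e9, 1))  # → 7.8 (billions alive now)
print(round(10**ever_lived_log / 1e9))    # → 107 (billions ever lived)

# The top poll option, 20, exceeds 2 * 9.89 = 19.78, so it is indeed
# more than the square of the number alive today.
print(20 > 2 * world_pop_log)             # → True
```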

As you can see, respondents expect a lot more future creatures who call themselves “human”, relative to creatures we would call “human”. And substantial fractions seem to insist that these numbers are higher than any specific number you might mention (20 here). Among the rest, the most popular answer is the 11-12 range (i.e., 0.1-1 trillion “humans”). Note that this can’t be due to a belief that we face huge risks over the next few centuries; that belief suggests the answer <11.

When I set aside the highest (>20) response, and fit a mixture of two lognormals to the rest of each response distribution, I find that regarding creatures we would call “human”, 50.1% of weight goes to a median estimate of 11.9, with (in log10 units) a sigma variation of only 0.22 around that median, 39% of weight goes to an estimate of 13.4, with a much larger sigma of 2.5, and 11% of weight to >20, i.e., very high. Regarding creatures who call themselves “human”, a 45% weight is on estimate 12.0 with sigma 1.2, a 30% weight is on estimate 16.6 with sigma 2.0, and 25% weight on >20. (Such a lognormal mix fit to the first four-option poll gives roughly consistent results: medians of 11.6, 15.4 with 61% weight on the low estimate, when both are forced to have the same sigma of 0.75.)
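A fit of this kind can be sketched with a plain EM loop for a two-component Gaussian mixture on the log10 responses (equivalently, a lognormal mixture on the raw counts). The data below are simulated to mirror the first fit reported above; the actual poll responses are not reproduced here:

```python
import math
import random

random.seed(0)

# Hypothetical log10 responses: a tight cluster near 11.9 plus a
# diffuse cluster near 13.4, mirroring the fit reported above.
data = ([random.gauss(11.9, 0.22) for _ in range(50)]
        + [random.gauss(13.4, 2.5) for _ in range(40)])

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Plain EM for a two-component Gaussian mixture on the log10 values.
w, mu, sd = [0.5, 0.5], [11.0, 15.0], [1.0, 1.0]
for _ in range(200):
    # E-step: each component's responsibility for each point
    resp = []
    for x in data:
        d = [w[k] * normal_pdf(x, mu[k], sd[k]) for k in (0, 1)]
        s = d[0] + d[1]
        resp.append([d[0] / s, d[1] / s])
    # M-step: reweighted weights, means, and sigmas
    for k in (0, 1):
        n = sum(r[k] for r in resp)
        w[k] = n / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / n
        sd[k] = max(1e-3, math.sqrt(
            sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / n))

print([round(v, 2) for v in mu], [round(v, 2) for v in sd], [round(v, 2) for v in w])
```

The initial guesses and iteration count are arbitrary choices here; a real fit would also report the >20 mass that was set aside before fitting.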

Thus, responses seem to reflect either three discrete categories of future scenarios, or three styles of analysis:

1. ~1/2 say there will only ever be ~10x as many humans as there have been (~100x as many as living now), most all creatures who we’d call “human”. Then it all ends.
2. ~1/4 say our descendants go on to much larger but still limited populations. There are ~300x as many humans as have ever lived, and ~1000x that many weirder creatures, though estimates here range quite widely, over ~4 factors of 10 (i.e., “orders of magnitude”).
3. ~1/4 say our descendants grow much more, beyond squaring the number who have ever lived. Probably far far beyond. But ~1/2 of these expect that few of these creatures will be ones most of us would call “human”.

The big question: does this trimodal distribution result from a real discreteness in our actual futures and the risks we will face there, or does it mostly reflect different psychological stances toward the future?

## Unending Winter Is Coming

Toward the end of the TV series Game of Thrones, a big long (multi-year) winter was coming, and while everyone should have been saving up for it, they were instead spending lots to fight wars. Because when others spend on war, that forces you to spend on war, and then suffer a terrible winter. The long term future of the universe may be much like this, except that future winter will never end! Let me explain.

The key universal resource is negentropy (and time), from which all others can be gained. For a very long time almost all life has run on the negentropy in sunshine landing on Earth, but almost all of that has been spent in the fierce competition to live. The things that do accumulate, such as innovations embodied in genomes, can’t really be spent to survive. However, as sunlight varies by day and season, life does sometimes save up resources during one part of a cycle, to spend in the other part of a cycle.

Humans have been growing much more rapidly than nature, but we also have had strong competition, and have also mostly only accumulated the resources that can’t directly be spent to win our competitions. We do tend to accumulate capital in peacetime, but every so often we have a big war that burns most of that up. It is mainly our remaining people and innovations that let us rebuild.

Over the long future, our descendants will gradually get better at gaining faster and cheaper access to more resources. Instead of drawing on just the sunlight coming to Earth, we’ll take all light from the Sun, and then we’ll take apart the Sun to make engines that we better control. And so on. Some of us may even gain long term views, that prioritize the very long run.

However, it seems likely that our descendants will be unable to coordinate on universal scales to prevent war and theft. If so, then every so often we will have a huge war, at which point we may burn up most of the resources that can be easily accessed on the timescale of that war. Between such wars, we’d work to increase the rate at which we could access resources during a war. And our need to watch out for possible war will force us to continually spend a non-trivial fraction of our accessible resources watching and staying prepared for war.

The big problem is: the accessible universe is finite, and so we will only ever be able to access a finite amount of negentropy. No matter how much we innovate. While so far we’ve mainly been drawing on a small steady flow of negentropy, eventually we will get better and faster access to the entire stock. The period when we use most of that stock is our universe’s one and only “summer”, after which we face an unending winter. This implies that when a total war shows up, we are at risk of burning up large fractions of all the resources that we can quickly access. So the larger a fraction of the universe’s negentropy that we can quickly access, the larger a fraction of all resources that we will ever have that we will burn up in each total war.

And even between the wars, we will need to watch out and stay prepared for war. If one uses negentropy to do stuff slowly and carefully, then the work that one can do with a given amount of negentropy is typically proportional to the inverse of the rate at which one does that work. This is true for computers, factories, pipes, drag, and much else. So ideally, the way to do the most with a fixed pot of negentropy is to do it all very slowly. And if the universe will last forever, that seems to put no bound on how much we can eventually do.

Alas, given random errors due to cosmic rays and other fluctuations, there is probably a minimum speed for doing the most with some negentropy. So the amount we can eventually do may be big, but it remains finite. However, that optimal pace is probably many orders of magnitude slower than our current speeds, letting our descendants do a lot.
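A toy model of this tradeoff (my own illustration, with made-up constants): suppose each operation dissipates a/t bits of negentropy when run over time t, while background errors arrive at rate lam per unit time and cost one bit each to erase. The per-operation cost a/t + lam*t then has a finite optimal slowness t = sqrt(a/lam):

```python
import math

def per_op_cost(t, a=1.0, lam=1e-12):
    """Negentropy bits spent per operation when the op takes time t:
    a/t from running faster than the adiabatic limit, plus lam*t for
    erasing background errors accumulated while the op runs."""
    return a / t + lam * t

a, lam = 1.0, 1e-12
t_opt = math.sqrt(a / lam)  # analytic minimum of a/t + lam*t

print(t_opt)  # → 1000000.0: a million times slower than t=1, but finite

# The optimum beats both much faster and much slower paces.
assert per_op_cost(t_opt) < per_op_cost(t_opt / 100)
assert per_op_cost(t_opt) < per_op_cost(t_opt * 100)
```

With lower error rates lam, the optimal pace gets slower still, but it never reaches zero: the error floor keeps the total achievable work finite.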

The problem is, descendants who go maximally slow will make themselves very vulnerable to invasion and theft. For an analogy, imagine how severe our site security problems would be today if any one person could temporarily “grow” and become as powerful as a thousand people, but only after a one-hour delay. Any one intruder who grew while onsite could wreak havoc and then be gone within an hour, before local security forces could grow to respond. Similarly, when most future descendants run very slow, one who suddenly chose to run very fast might have a huge outsized influence before the others could effectively respond.

So the bottom line is that if war and theft remain possible for our descendants, the rate at which they do things will be much faster than the much slower most efficient speed. In order to adequately watch out for and respond to attacks, they will have to run fast, and thus more quickly use up their available stocks of resources, such as stars. And when their stocks run out, the future will have run out for them. Like in a Game of Thrones scenario after a long winter war, they would then starve.

Now it is possible that there will be future resources that simply cannot be exploited quickly. Such as perhaps big black holes. In this case some of our descendants could last for a very long time slowly sipping on such supplies. But their activity levels at that point would be much lower than their rates before they used up all the other faster-access resources.

Okay, let’s put this all together into a picture of the long term future. Today we are growing fast, and getting better at accessing more kinds of resources faster. Eventually our growth in resource use will reach a peak. At that point we will use resources much faster than today, and also much faster than what would be the most efficient rate if we could all coordinate to prevent war and theft. Maybe a billion times faster or more. Fearing war, we will keep spending to watch and prepare for war, and then every once in a while we would burn up most accessible resources in a big war. After using up faster access resources, we then switch to lower activity levels using resources that we just can’t extract as fast, no matter how clever we are. Then we use up each one of those much faster than optimal, with activity levels falling after each source is used up.

That is, unless we can prevent war and theft, our long term future is an unending winter, wherein we use up most of our resources in early winter wars, and then slowly die and shrink and slow and war as the winter continues, on to infinity. And as a result we do much less than we could have otherwise; perhaps a billion times less or more. (Though still vastly more than we have done so far.) And this is all if we are lucky enough to avoid existential risk, which might destroy it all prematurely, leading instead to a fully-dead empty eternity.

Happy holidays.

## How To Prep For War

In my last two posts I’ve noted that while war deaths have fallen greatly since the world wars, the magnitude and duration of this fall aren’t that far out of line with previous falls over the last four centuries, falls that have always been followed by rises, as part of a regular cycle of war. I also noted that the theory arguments offered to explain why this trend will long continue, in a deviation from the historical pattern, seem weak. Thus there seems to be a substantial and neglected chance of a lot more war in the next century. I’m not the only one who says this; so do many war experts.

If a lot more war is coming, what should you do personally, to help yourself, your family, and your friends? (Assuming your goal is mainly to personally survive and prosper.) While we can’t say that much specifically about future war’s style, timing, or participants, we know enough to suggest some general advice.

1. Over the last century most war deaths have not been battle deaths, and the battle death share has fallen. Thus you should worry less about dying in battle, and more about other ways to die.

2. War tends to cause the most harm near where its battles happen, and near concentrations of supporting industrial and human production. This means you are more at risk if you live near the nations that participate in the war, and in those nations near dense concentrations and travel routes, that is, near major cities and roads.

3. If there are big pandemics or economic collapse, you may be better off in more isolated and economically self-sufficient places. (That doesn’t include outer space, which is quite unlikely to be economically self-sufficient anytime soon.) Of course there is a big tradeoff here, as these are the places we expect to do less well in the absence of war.

4. Most of your expected deaths may happen in scenarios where nukes are used. There’s a big literature on how to prepare for and avoid harms from nukes, so I’ll just refer you to that. Ironically, you may be more at risk from being hurt by nukes in places that have nukes to retaliate with. But you might be more at risk from being enslaved or otherwise dominated if your place doesn’t have nukes.

5. Most of our computer systems have poor security, and so are poorly protected against cyberwar. This is mainly because software firms are usually more eager to be first to market than to add security, which most customers don’t notice at first. If this situation doesn’t change much, then you should be wary of depending too much on standard connected computer systems. For essential services, rely on disconnected, non-standard, or high-security-investment systems.

6. Big wars tend to induce a lot more taxation of the rich, to pay for those wars. So have your dynasty invest more in having more children, relative to fewer richer kids, or invest in assets that are hidden from tax authorities. Or bother less to invest for the long run.

7. The biggest wars so far, the world wars and the Thirty Years War, have been driven by strong ideologies, such as Communism and Catholicism. So help your descendants avoid succumbing to strong ideologies, while also avoiding the appearance of publicly opposing locally popular versions. And try to stay away from places that seem more likely to succumb.

8. While old ideologies still have plenty of fire, the big new ideology on the block seems related to woke identity. While this seems to inspire sufficiently confident passions for war, it seems far from clear who would fight who and how in a woke war. This scenario seems worth more thought.

9. If big governance changes and social destruction are coming, that may create opportunities for the adoption of more radical social reforms. And that can encourage us to work more on developing such reforms today.

## Big War Remains Possible

The following poll suggests that a majority of my Twitter followers think war will decline; in the next 80 years we won’t see a 15-year period with a war death rate above the median level we’ve seen over the last four centuries:

To predict a big deviation from the simple historical trend, one needs some sort of basis in theory. Alas, the theory arguments that I’ve heard re war optimism seem quite inadequate. I thus suspect much wishful thinking here.

For example, some say the world economy today is too interdependent for war. But interdependent economies have long gone to war. Consider the world wars in Europe, or the American Civil War. Some say that we don’t risk war because it is very destructive of complex fragile physical capital and infrastructure. But while such capital was indeed destroyed during the world wars, the places most hurt rebounded quickly, as they had good institutional and human capital.

Some note that international alliances make war less likely between alliance partners. But they make war more likely between alliances. Some suggest that better info tells us more about rivals today, and so we are less likely to misjudge rival abilities and motives. But there still seems plenty of room for errors here as “brinkmanship” is a key dynamic. Also, this doesn’t prevent powers from investing in war abilities to gain advantages via credible threats of war.

Some point to a reduced willingness by winners to gain concrete advantages via the ancient strategies of raping and enslaving losers, and demanding great tribute. But we still manage to find many other motives for war, and there’s no fundamental obstacles to reviving ancient strategies; tribute is still quite feasible, as is slavery. Also, the peak war periods so far have been associated with ideology battles, and we still have plenty of those.

Some say nuclear weapons have made small wars harder. But that holds only between pairs of nations that both have nukes, which isn’t most nation pairs. Pairs of nations with nukes can still fight big wars, there are more such pairs today than before, over 80 years there’s plenty of time for some pair to pick a fight, and nuke war casualties may be enormous.

I suspect that many are relying on modern propaganda on our moral superiority over our ancestors. But while we mostly count humans of the mid twentieth century as morally superior to humans from prior centuries, that was the period of peak war mortality.

I also suspect that many are drawing conclusions about war from long term trends regarding other forms of violence, as in slavery, crime, and personal relations, as well as from apparently lower public tolerance for war deaths and overall apparent disapproval and reluctance regarding war. But just before World War I we had also seen such trends:

Then, as now, Europe had lived through a long period of relative peace, … rapid progress … had given humanity a sense of shared interests that precluded war, … world leaders scarcely believed a global conflagration was possible. (more)

The world is vast, eighty years is a long time, and the number of possible global social & diplomatic scenarios over such a period is vast. So it seems crazy to base predictions of future war rates on inside-view calculations from particular current stances, deals, or inclinations. The raw historical record, and its large long-term fluctuations, should weigh heavily on our minds.

## Why Age of Em Will Happen

In some technology competitions, winners dominate strongly. For example, while gravel may cover a lot of roads if we count by surface area, if we weigh by vehicle miles traveled then asphalt strongly dominates as a road material. Also, while some buildings are cooled via fans and very thick walls, the vast majority of buildings in rich and hot places use air-conditioning. In addition, current versions of software systems also tend to dominate over older versions. (E.g., Windows 10 over Windows 8.)

However, in many other technology competitions, older technologies remain widely used over long periods. Cities were invented ten thousand years ago, yet today only about half of the population lives in them. Cars, trains, boats, and planes have taken over much transportation, yet we still do plenty of walking. Steel has replaced wood in many structures, yet wood is still widely used. Fur, wool, and cotton aren’t used as often as they once were, but they are still quite common as clothing materials. E-books are now quite popular, but paper book sales are still growing.

Whether or not an old tech still retains wide areas of substantial use depends on the average advantage of the new tech, relative to the variation of that advantage across the environments where these techs are used, and the variation within each tech category. All else equal, the wider the range of environments, and the more diverse is each tech category, the longer that old tech should remain in wide use.
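The dependence on average advantage versus variation can be made concrete with a toy model (my own, not from the post): if the new tech’s advantage in a random environment is normally distributed, the old tech’s surviving share of environments is just the normal probability mass below zero:

```python
import math

def old_tech_share(mean_adv, spread):
    """Fraction of environments where the old tech still wins, when the
    new tech's advantage is drawn from Normal(mean_adv, spread):
    the normal CDF evaluated at zero."""
    return 0.5 * (1 + math.erf(-mean_adv / (spread * math.sqrt(2))))

# Big average advantage with little variation: old tech nearly vanishes.
print(round(old_tech_share(3.0, 1.0), 3))  # → 0.001

# Same average advantage, but wide variation across environments:
# the old tech keeps a substantial niche.
print(round(old_tech_share(3.0, 5.0), 3))  # → 0.274
```

Within-category diversity works the same way in this sketch: the more varied each tech category, the larger the effective spread, and so the larger the old tech’s remaining niche.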

For example, compare the set of techs that start with the letter A (like asphalt) to the set that start with the letter G (like gravel). As these are relatively arbitrary sets that do not “cut nature at its joints”, there is wide diversity within each category, and each set is all applied to a wide range of environments. This makes it quite unlikely that one of these sets will strongly dominate the other.

Note that techs that tend to dominate strongly, like asphalt, air-conditioning, and new software versions, more often appear as a lumpy change, e.g., all at once, rather than via a slow accumulation of many changes. That is, they more often result from one or a few key innovations, and have some simple essential commonality. In contrast, techs that have more internal variety and structure tend more to result from the accumulation of more smaller innovations.

Now consider the competition between humans and computers for mental work. Today human brains earn more than half of world income, far more than the costs of computer hardware and software. But over time, artificial hardware and software have been improving, and slowly commanding larger fractions. Eventually this could become a majority. And a key question is then: how quickly might computers come to dominate overwhelmingly, doing virtually all mental work?

On the one hand, the ranges here are truly enormous. We are talking about all mental work, which covers a very wide range of environments. And not only do humans vary widely in abilities and inclinations, but computer systems seem to encompass an even wider range of designs and approaches. And many of these are quite complex systems. These facts together suggest that the older tech of human brains could last quite a long time (relative of course to relevant timescales) after computers came to do the majority of tasks (weighted by income), and that the change over that period could be relatively gradual.

For an analogy, consider the space of all possible non-mental work. While machines have surely been displacing humans for a long time in this area, we still do many important tasks “by hand”, and overall change has been pretty steady for a long time period. This change looked nothing like a single “general” machine taking over all the non-mental tasks all at once.

On the other hand, human minds are today stuck in old bio hardware that isn’t improving much, while artificial computer hardware has long been improving rapidly. Both these states, of hardware being stuck and improving fast, have been relatively uniform within each category and across environments. As a result, this hardware advantage might plausibly overwhelm software variety to make humans quickly lose most everywhere.

However, eventually brain emulations (i.e. “ems”) should be possible, after which artificial software would no longer have a hardware advantage over brain software; they would both have access to the same hardware. (As ems are an all-or-nothing tech that quite closely substitutes for humans and yet can have a huge hardware advantage, ems should displace most all humans over a short period.) At that point, the broad variety of mental task environments, and of approaches to both artificial and em software, suggests that ems may well stay competitive on many job tasks, and that this status might last a long time, with change being gradual.

Note also that as ems should soon become much cheaper than humans, the introduction of ems should initially cause a big reversion, wherein ems take back many of the mental job tasks that humans had recently lost to computers.

In January I posted a theoretical account that adds to this expectation. It explains why we should expect brain software to be a marvel of integration and abstraction, relative to the stronger reliance on modularity that we see in artificial software, a reliance that allows those systems to be smaller and faster built, but also causes them to rot faster. This account suggests that for a long time it would take unrealistically large investments for artificial software to learn to be as good as brain software on the tasks where brains excel.

A contrary view often expressed is that at some point someone will “invent” AGI (= Artificial General Intelligence). Not that society will eventually have broadly capable and thus general systems as a result of the world economy slowly collecting many specific tools and abilities over a long time. But that instead a particular research team somewhere will discover one or a few key insights that allow that team to quickly create a system that can do most all mental tasks much better than all the other systems, both human and artificial, in the world at that moment. This insight might quickly spread to other teams, or it might be hoarded to give this team great relative power.

Yes, under this sort of scenario it becomes more plausible that artificial software will either quickly displace humans on most all jobs, or do the same to ems if they exist at the time. But it is this scenario that I have repeatedly argued is pretty crazy. (Not impossible, but crazy enough that only a small minority should assume or explore it.) While the lumpiness of innovation that we’ve seen so far in computer science has been modest and not out of line with most other research fields, this crazy view postulates an enormously lumpy innovation, far out of line with anything we’ve seen in a long while. We have no good reason to believe that such a thing is at all likely.

If we presume that no one team will ever invent AGI, it becomes far more plausible that there will still be plenty of job tasks for ems to do, whenever ems show up. Even if working ems only collect 10% of world income soon after ems appear, the scenario I laid out in my book Age of Em is still pretty relevant. That scenario is actually pretty robust to such variations. As a result of thinking about these considerations, I’m now much more confident that the Age of Em will happen.

In Age of Em, I said:

Conditional on my key assumptions, I expect at least 30 percent of future situations to be usefully informed by my analysis. Unconditionally, I expect at least 5 percent.

I now estimate an unconditional 80% chance of it being a useful guide, and so will happily take bets based on a 50-50 chance estimate. My claim is something like:

Within the first D econ doublings after ems are as cheap as the median human worker, there will be a period where >X% of world income is paid for em work. And during that period Age of Em will be a useful guide to that world.

Note that this analysis suggests that while the arrival of ems might cause a relatively sudden and disruptive transition, the improvement of other artificial software would likely be more gradual. While overall rates of growth and change should increase as a larger fraction of the means of production comes to be made in factories, the risk is low of a sudden AI advance relative to that overall rate of change. Those concerned about risks caused by AI changes can more reasonably wait until we see clearer signs of problems.

## Aliens Need Not Wait To Be Active

In April 2017, Anders Sandberg, Stuart Armstrong, and Milan Cirkovic released this paper:

If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: This can produce a 10^30 multiplier of achievable computation. We hence suggest the “aestivation hypothesis”: The reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyses the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis. (more)

That is, they say that if you have a resource (like a raised weight, charged battery, or tank of gas), you can get a lot (~10^30 times!) more computing steps out of it if you don’t use it today, but instead wait until the cosmological background temperature is very low. So, they say, there may be lots of aliens out there, all quiet and waiting to be active later.

Their paper was published in JBIS a few months later, their theory now has its own Wikipedia page, and they have attracted at least 15 news articles (1 2 3 4 5 6 7 8 9 10 11 12 13 14 15). Problem is, they get the physics of computation wrong. Or so say physics-of-computation pioneer Charles Bennett, quantum-info physicist Jess Riedel, and I, in our new paper:

In their article, ‘That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi’s paradox’, Sandberg et al. try to explain the Fermi paradox (we see no aliens) by claiming that Landauer’s principle implies that a civilization can in principle perform far more (∼10^30 times more) irreversible logical operations (e.g., error-correcting bit erasures) if it conserves its resources until the distant future when the cosmic background temperature is very low. So perhaps aliens are out there, but quietly waiting.

Sandberg et al. implicitly assume, however, that computer-generated entropy can only be disposed of by transferring it to the cosmological background. In fact, while this assumption may apply in the distant future, our universe today contains vast reservoirs and other physical systems in non-maximal entropy states, and computer-generated entropy can be transferred to them at the adiabatic conversion rate of one bit of negentropy to erase one bit of error. This can be done at any time, and is not improved by waiting for a low cosmic background temperature. Thus aliens need not wait to be active. As Sandberg et al. do not provide a concrete model of the effect they assert, we construct one and show where their informal argument goes wrong. (more)

That is, the key resource is negentropy, and if you have some of that you can use it at any time to correct computing-generated bit errors at the constant ideal rate of one bit of negentropy per one bit of error corrected. There is no advantage in waiting until the distant future to do this.

Now you might try to collect negentropy by running an engine on the temperature difference between some local physical system that you control and the distant cosmological background. And yes, that process may go better if you wait until the background gets colder. (And that process can be very slow.) But the negentropy that you already have around you now, you can use at any time, without any penalty for early withdrawal.
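To put the two accounting schemes side by side (my own toy sketch of the dispute, with illustrative temperatures; not the model from either paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
LN2 = math.log(2)

def erasures_from_energy(energy_j, temp_k):
    """Landauer: erasing a bit dumps k*T*ln2 of heat at temperature T,
    so a fixed ENERGY budget buys more erasures when T is low. This is
    the intuition behind waiting for a colder universe."""
    return energy_j / (K_B * temp_k * LN2)

# The aestivation multiplier: same energy, background cooled from ~3 K
# now to a far-future ~1e-30 K (illustrative numbers) gives ~3e30.
multiplier = erasures_from_energy(1.0, 1e-30) / erasures_from_energy(1.0, 3.0)
print(f"{multiplier:.1e}")

def erasures_from_negentropy(negentropy_bits):
    """But if what you hold is NEGENTROPY rather than raw energy, the
    exchange rate is one bit erased per bit of negentropy, at any time
    and any background temperature: no bonus for waiting."""
    return negentropy_bits

assert erasures_from_negentropy(1e40) == 1e40
```

The huge multiplier only applies to the energy accounting; in the negentropy accounting the conversion rate is flat, which is the point of the rebuttal.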

There’s also (as I discuss in Age of Em) an advantage in running your computers more slowly; the negentropy cost per gate operation is roughly inverse to the time you allow for that operation. So aliens might want to run slow. But even for this purpose they should want to start that activity as soon as possible. Defensive considerations also suggest that they’d need to maintain substantial activity to watch for and be ready to respond to attacks.

## How Lumpy AI Services?

Long ago people like Marx and Engels predicted that the familiar capitalist economy would naturally lead to the immiseration of workers, huge wealth inequality, and a strong concentration of firms. Each industry would be dominated by a main monopolist, and these monsters would merge into a few big firms that basically run, and ruin, everything. (This is somewhat analogous to common expectations that military conflicts naturally result in one empire ruling the world.)

Many intellectuals and ordinary people found such views quite plausible then, and still do; these are the concerns most often voiced to justify redistribution and regulation. Wealth inequality is said to be bad for social and political health, and big firms are said to be bad for the economy, workers, and consumers, especially if they are not loyal to our nation, or if they coordinate behind the scenes.

Note that many people seem much less concerned about an economy full of small firms populated by people of nearly equal wealth. Actions seem more visible in such a world, and better constrained by competition. With a few big privately-coordinating firms, in contrast, who knows what they could get up to, and they seem to have so many possible ways to screw us. Many people either want these big firms broken up, or heavily constrained by presumed-friendly regulators.

In the area of AI risk, many express great concern that the world may be taken over by a few big powerful AGI (artificial general intelligence) agents with opaque beliefs and values, who might arise suddenly via a fast local “foom” self-improvement process centered on one initially small system. I’ve argued in the past that such sudden local foom seems unlikely because innovation is rarely that lumpy.

In a new book-length technical report, Reframing Superintelligence: Comprehensive AI Services as General Intelligence, Eric Drexler makes a somewhat similar anti-lumpiness argument. But he talks about task lumpiness, not innovation lumpiness. Powerful AI is safer if it is broken into many specific services, often supplied by separate firms. The task that each service achieves has a narrow enough scope that there’s little risk of it taking over the world and killing everyone in order to achieve that task. In particular, the service of being competent at a task is separate from the service of learning how to become competent at that task.


Over the last day on Twitter, I ran three similar polls. One asked:

Software design today faces many tradeoffs, e.g., getting more X costs less Y, or vice versa. By comparison, will distant future tradeoffs be mostly same ones, about as many but very different ones, far fewer (so usually all good features X,Y are feasible together), or far more?

Four answers were possible: mostly same tradeoffs, as many but mostly new, far fewer tradeoffs, and far more tradeoffs. The other two polls replaced “Software” with “Physical Device” and “Social Institution.”

I now see these four answers as picking out four future scenarios. A world with fewer tradeoffs is Utopian, where you can more easily get everything you want without having to give up other things. In contrast, a world with many more tradeoffs is more Complex. A world where most of the tradeoffs are like those today is Familiar. And a world where the current tradeoffs are replaced by new ones is Radical. Using these terms, here are the resulting percentages:

The polls got from 105 to 131 responses each, with an average entry percentage of 25%, so I’m willing to believe differences of 10% or more. The most obvious results here are that only a minority foresee a familiar future in any area, and answers vary greatly; there is little consensus on which scenarios are more likely.
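As a rough check on that 10% threshold, here is a minimal sketch of the sampling-error arithmetic, assuming simple binomial sampling (the poll sizes and the 25% average entry percentage are from the text; the function name is my own):

```python
import math

def se_prop(p, n):
    """Standard error of a sample proportion under simple binomial sampling."""
    return math.sqrt(p * (1 - p) / n)

# Poll entries averaged ~25%, with 105 to 131 responses per poll.
se_small = se_prop(0.25, 105)   # largest per-poll SE, ~0.042
se_large = se_prop(0.25, 131)   # smallest per-poll SE, ~0.038

# SE of the difference between two independent poll percentages.
se_diff = math.sqrt(se_small**2 + se_large**2)   # ~0.057

print(f"SE per poll: {se_large:.3f} to {se_small:.3f}")
print(f"SE of a difference between polls: {se_diff:.3f}")
```

With per-poll standard errors of roughly 4 percentage points, a 10-point gap between two polls is close to two standard errors of the difference, which fits treating differences of 10% or more as believable.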

Beyond that, the strongest pattern I see is that respondents foresee more complexity, relative to a utopian lack of tradeoffs, at higher levels of organization. Physical devices are the most utopian, social institutions are the most complex, and software sits in the middle. The other possible result I see is that respondents foresee a less familiar social future.

I also asked:

Which shapes the world more in the long run: the search for arrangements allowing better compromises regarding many complex tradeoffs, or fights between conflicting groups/values/perspectives?

In response, 43% said search for tradeoffs while 30% said value conflicts, and 27% said hard to tell. So these people see tradeoffs as mattering a lot.

These respondents seriously disagree with science fiction, which usually describes relatively familiar social worlds in visibly changed physical contexts (and can’t be bothered to have an opinion on software). They instead say that the social world will change the most, becoming the most complex and/or radical. Oh brave new world, that has such institutions in it!


## How Does Brain Code Differ?

The Question

We humans have been writing “code” for many decades now, and as “software eats the world” we will write a lot more. In addition, we can also think of the structures within each human brain as “code”, code that will also shape the future.

Today the code in our heads (and bodies) is stuck there, but eventually we will find ways to move this code to artificial hardware. At that point, we can create the world of brain emulations that is the subject of my first book, Age of Em. From that point on, these two categories of code, and their descendant variations, will have near equal access to artificial hardware, and so will compete on relatively equal terms to take on many code roles. System designers will have to choose which kind of code to use to control each particular system.

When designers choose between different types of code, they must ask themselves: which kinds of code are more cost-effective in which kinds of applications? In a competitive future world, the answer to this question may be the main factor that decides the fraction of resources devoted to running human-like minds. So to help us envision such a competitive future, we should also ask: where will different kinds of code work better? (Yes, non-competitive futures may be possible, but harder to arrange than many imagine.)

To think about which kinds of code win where, we need a basic theory that explains their key fundamental differences. You might have thought that much has been written on this, but alas I can’t find much. I do sometimes come across people who think it obvious that human brain code can’t possibly compete well anywhere, though they rarely explain their reasoning much. As this claim isn’t obvious to me, I’ve been trying to think about this key question of which kinds of code win where. In the following, I’ll outline what I’ve come up with. But I still hope someone will point me to useful analyses that I’ve missed.

In the following, I will first summarize a few simple differences between human brain code and other code, then offer a deeper account of these differences, then suggest an empirical test of this account, and finally consider what these differences suggest for which kinds of code will be more cost-effective where. Continue reading "How Does Brain Code Differ?" »


## Tales of the Turing Church

My futurist friend Giulio Prisco has a new book: Tales of the Turing Church. In some ways, he is a reasonable skeptic:

I think all these things – molecular nanotechnology, radical life extension, the reanimation of cryonics patients, mind uploading, superintelligent AI and all that – will materialize one day, but not anytime soon. Probably (almost certainly if you ask me) after my time, and yours. … Biological immortality is unlikely to materialize anytime soon. … Mind uploading … is a better option for indefinite lifespans … I don’t buy the idea of a “post-scarcity” utopia. … I think technological resurrection will eventually be achieved, but … in … more like many thousands of years or more.

However, the core of Prisco’s book makes some very strong claims:

Future science and technology will permit playing with the building blocks of spacetime, matter, energy and life in ways that we could only call magic and supernatural today. Someday in the future, you and your loved ones will be resurrected by very advanced science and technology. Inconceivably advanced intelligences are out there among the stars. Even more God-like beings operate in the fabric of reality underneath spacetime, or beyond spacetime, and control the universe. Future science will allow us to find them, and become like them. Our descendants in the far future will join the community of God-like beings among the stars and beyond, and use transcendent technology to resurrect the dead and remake the universe. …

God exists, controls reality, will resurrect the dead and remake the universe. … Now you don’t have to fear death, and you can endure the temporary separation from your loved departed ones. … Future science and technology will validate and realize all the promises of religion. … God elevates love and compassion to the status of fundamental forces, key drivers for the evolution of the universe. … God is also watching you here and now, cares for you, and perhaps helps you now and then. … God has a perfectly good communication channel with us: our own inner voice.

Now I should note that he doesn’t endorse most specific religious dogma, just what religions have in common:

Many religions have really petty, extremely parochial aspects related to what and when one should eat or drink or what sex is allowed and with whom. I don’t care for this stuff at all. It isn’t even geography – it’s local zoning norms, often questionable, sometimes ugly. … [But] the common cores, the cosmological and mystical aspects of different religions, are similar or at least compatible.

Even so, Prisco is making very strong claims. And in 339 pages, he has plenty of space to argue for them. But Prisco instead mostly uses his space to show just how many people across history have made similar claims, including folks associated with religion, futurism, and physics. Beyond this social proof, he seems content to say that physics can’t prove him wrong: Continue reading "Tales of the Turing Church" »
