Tag Archives: Future

Aliens Need Not Wait To Be Active

In April 2017, Anders Sandberg, Stuart Armstrong, and Milan Cirkovic released this paper:

If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: This can produce a 10^30 multiplier of achievable computation. We hence suggest the “aestivation hypothesis”: The reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyses the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis. (more)

That is, they say that if you have a resource (like a raised weight, charged battery, or tank of gas), you can get a lot (~10^30 times!) more computing steps out of it if you don’t use it today, but instead wait until the cosmological background temperature is very low. So, they say, there may be lots of aliens out there, all quiet and waiting to be active later.
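To make the claimed multiplier concrete: Landauer’s principle says erasing one bit at temperature T costs at least k_B·T·ln 2 of free energy, so a fixed energy budget buys a number of erasures proportional to 1/T. Here is a minimal sketch; the far-future temperature used below is a hypothetical value chosen only to reproduce the 10^30 figure, not a number taken from the paper:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def bits_erasable_per_joule(T):
    """Landauer bound: erasing one bit at temperature T costs k_B * T * ln(2) joules."""
    return 1.0 / (k_B * T * math.log(2))

T_now = 2.7          # current cosmic background temperature, K
T_future = 2.7e-30   # hypothetical far-future temperature, picked to illustrate the 10^30 claim

multiplier = bits_erasable_per_joule(T_future) / bits_erasable_per_joule(T_now)
print(f"multiplier = {multiplier:.3g}")  # the ratio reduces to T_now / T_future
```

The whole argument rests on the ratio T_now / T_future; everything else cancels.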

Their paper was published in JBIS a few months later, their theory now has its own Wikipedia page, and they have attracted at least 15 news articles (1 2 3 4 5 6 7 8 9 10 11 12 13 14 15). Problem is, they get the physics of computation wrong. Or so say physics-of-computation pioneer Charles Bennett, quantum-info physicist Jess Riedel, and myself, in our new paper:

In their article, ‘That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi’s paradox’, Sandberg et al. try to explain the Fermi paradox (we see no aliens) by claiming that Landauer’s principle implies that a civilization can in principle perform far more (~10^30 times more) irreversible logical operations (e.g., error-correcting bit erasures) if it conserves its resources until the distant future when the cosmic background temperature is very low. So perhaps aliens are out there, but quietly waiting.

Sandberg et al. implicitly assume, however, that computer-generated entropy can only be disposed of by transferring it to the cosmological background. In fact, while this assumption may apply in the distant future, our universe today contains vast reservoirs and other physical systems in non-maximal entropy states, and computer-generated entropy can be transferred to them at the adiabatic conversion rate of one bit of negentropy to erase one bit of error. This can be done at any time, and is not improved by waiting for a low cosmic background temperature. Thus aliens need not wait to be active. As Sandberg et al. do not provide a concrete model of the effect they assert, we construct one and show where their informal argument goes wrong. (more)

That is, the key resource is negentropy, and if you have some of that you can use it at anytime to correct computing-generated bit errors at the constant ideal rate of one bit of negentropy per one bit of error corrected. There is no advantage in waiting until the distant future to do this.

Now you might try to collect negentropy by running an engine on the temperature difference between some local physical system that you control and the distant cosmological background. And yes, that process may go better if you wait until the background gets colder. (And that process can be very slow.) But the negentropy that you already have around you now, you can use that at anytime without any penalty for early withdrawal.

There’s also (as I discuss in Age of Em) an advantage in running your computers more slowly; the negentropy cost per gate operation is roughly inverse to the time you allow for that operation. So aliens might want to run slow. But even for this purpose they should want to start that activity as soon as possible. Defensive considerations also suggest that they’d need to maintain substantial activity to watch for and be ready to respond to attacks.
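One standard way to get that inverse relation is adiabatic switching, where the energy dissipated per charge or discharge of a capacitive gate falls as you stretch out the operation. A sketch, with R, C, and V values that are purely illustrative:

```python
def adiabatic_dissipation(t_op, R=1e3, C=1e-15, V=1.0):
    """Energy dissipated per adiabatic charge/discharge of a capacitor over time t_op.
    E_diss ~ (R*C / t_op) * C * V**2, valid in the slow regime t_op >> R*C."""
    return (R * C / t_op) * C * V**2

# doubling the time allowed per gate operation halves the dissipation
e_slow = adiabatic_dissipation(2e-9)
e_fast = adiabatic_dissipation(1e-9)
print(e_fast / e_slow)
```

Since dissipation scales as 1/t_op, halving speed halves the (negentropy) cost per operation, which is the tradeoff the paragraph above appeals to.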


How Lumpy AI Services?

Long ago people like Marx and Engels predicted that the familiar capitalist economy would naturally lead to the immiseration of workers, huge wealth inequality, and a strong concentration of firms. Each industry would be dominated by a main monopolist, and these monsters would merge into a few big firms that basically run, and ruin, everything. (This is somewhat analogous to common expectations that military conflicts naturally result in one empire ruling the world.)

Many intellectuals and ordinary people found such views quite plausible then, and still do; these are the concerns most often voiced to justify redistribution and regulation. Wealth inequality is said to be bad for social and political health, and big firms are said to be bad for the economy, workers, and consumers, especially if they are not loyal to our nation, or if they coordinate behind the scenes.

Note that many people seem much less concerned about an economy full of small firms populated by people of nearly equal wealth. Actions seem more visible in such a world, and better constrained by competition. With a few big privately-coordinating firms, in contrast, who knows what they could get up to, and they seem to have so many possible ways to screw us. Many people either want these big firms broken up, or heavily constrained by presumed-friendly regulators.

In the area of AI risk, many express great concern that the world may be taken over by a few big powerful AGI (artificial general intelligence) agents with opaque beliefs and values, who might arise suddenly via a fast local “foom” self-improvement process centered on one initially small system. I’ve argued in the past that such sudden local foom seems unlikely because innovation is rarely that lumpy.

In a new book-length technical report, Reframing Superintelligence: Comprehensive AI Services as General Intelligence, Eric Drexler makes a somewhat similar anti-lumpiness argument. But he talks about task lumpiness, not innovation lumpiness. Powerful AI is safer if it is broken into many specific services, often supplied by separate firms. The task that each service achieves has a narrow enough scope that there’s little risk of it taking over the world and killing everyone in order to achieve that task. In particular, the service of being competent at a task is separate from the service of learning how to become competent at that task. In Drexler’s words: Continue reading "How Lumpy AI Services?" »


Distant Future Tradeoffs

Over the last day on Twitter, I ran three similar polls. One asked:

Software design today faces many tradeoffs, e.g., getting more X costs less Y, or vice versa. By comparison, will distant future tradeoffs be mostly same ones, about as many but very different ones, far fewer (so usually all good features X,Y are feasible together), or far more?

Four answers were possible: mostly same tradeoffs, as many but mostly new, far fewer tradeoffs, and far more tradeoffs. The other two polls replaced “Software” with “Physical Device” and “Social Institution.”

I now see these four answers as picking out four future scenarios. A world with fewer tradeoffs is Utopian, where you can get more of everything you want without having to give up other things. In contrast, a world with many more tradeoffs is more Complex. A world where most of the tradeoffs are like those today is Familiar. And a world where the current tradeoffs are replaced by new ones is Radical.  Using these terms, here are the resulting percentages:

The polls got from 105 to 131 responses each, with an average entry percentage of 25%, so I’m willing to believe differences of 10% or more. The most obvious results here are that only a minority foresee a familiar future in any area, and answers vary greatly; there is little consensus on which scenarios are more likely.
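That 10% threshold is roughly what simple binomial sampling error would suggest. A sketch, assuming independent responses and using n = 118 as a stand-in for the reported poll sizes:

```python
import math

n = 118   # midpoint of the reported poll sizes (105 to 131)
p = 0.25  # the average answer share across the four options

# standard error of a single answer's share under simple binomial sampling
se = math.sqrt(p * (1 - p) / n)
print(f"standard error = {se:.3f}")
```

The standard error comes out near 0.04, so a gap of 10 percentage points between answers is about 2.5 standard errors, which roughly matches the stated willingness to believe differences of 10% or more.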

Beyond that, the strongest pattern I see is that respondents foresee more complexity, relative to a utopian lack of tradeoffs, at higher levels of organization. Physical devices are the most utopian, social institutions are the most complex, and software sits in the middle. The other possible result I see is that respondents foresee a less familiar social future. 

I also asked:

Which shapes the world more in the long run: the search for arrangements allowing better compromises regarding many complex tradeoffs, or fights between conflicting groups/values/perspectives?

In response, 43% said search for tradeoffs while 30% said value conflicts, and 27% said hard to tell. So these people see tradeoffs as mattering a lot.  

These respondents seriously disagree with science fiction, which usually describes relatively familiar social worlds in visibly changed physical contexts (and can’t be bothered to have an opinion on software). They instead say that the social world will change the most, becoming the most complex and/or radical. Oh brave new world, that has such institutions in it!


How Does Brain Code Differ?

The Question

We humans have been writing “code” for many decades now, and as “software eats the world” we will write a lot more. In addition, we can also think of the structures within each human brain as “code”, code that will also shape the future.

Today the code in our heads (and bodies) is stuck there, but eventually we will find ways to move this code to artificial hardware. At which point we can create the world of brain emulations that is the subject of my first book, Age of Em. From that point on, these two categories of code, and their descendant variations, will have near equal access to artificial hardware, and so will compete on relatively equal terms to take on many code roles. System designers will have to choose which kind of code to use to control each particular system.

When designers choose between different types of code, they must ask themselves: which kinds of code are more cost-effective in which kinds of applications? In a competitive future world, the answer to this question may be the main factor that decides the fraction of resources devoted to running human-like minds. So to help us envision such a competitive future, we should also ask: where will different kinds of code work better? (Yes, non-competitive futures may be possible, but harder to arrange than many imagine.)

To think about which kinds of code win where, we need a basic theory that explains their key fundamental differences. You might have thought that much has been written on this, but alas I can’t find much. I do sometimes come across people who think it obvious that human brain code can’t possibly compete well anywhere, though they rarely explain their reasoning much. As this claim isn’t obvious to me, I’ve been trying to think about this key question of which kinds of code win where. In the following, I’ll outline what I’ve come up with. But I still hope someone will point me to useful analyses that I’ve missed.

In the following, I will first summarize a few simple differences between human brain code and other code, then offer a deeper account of these differences, then suggest an empirical test of this account, and finally consider what these differences suggest for which kinds of code will be more cost-effective where. Continue reading "How Does Brain Code Differ?" »


Tales of the Turing Church

My futurist friend Giulio Prisco has a new book: Tales of the Turing Church. In some ways, he is a reasonable skeptic:

I think all these things – molecular nanotechnology, radical life extension, the reanimation of cryonics patients, mind uploading, superintelligent AI and all that – will materialize one day, but not anytime soon. Probably (almost certainly if you ask me) after my time, and yours. … Biological immortality is unlikely to materialize anytime soon. … Mind uploading … is a better option for indefinite lifespans … I don’t buy the idea of a “post-scarcity” utopia. … I think technological resurrection will eventually be achieved, but … in … more like many thousands of years or more.

However, the core of Prisco’s book makes some very strong claims:

Future science and technology will permit playing with the building blocks of spacetime, matter, energy and life in ways that we could only call magic and supernatural today. Someday in the future, you and your loved ones will be resurrected by very advanced science and technology. Inconceivably advanced intelligences are out there among the stars. Even more God-like beings operate in the fabric of reality underneath spacetime, or beyond spacetime, and control the universe. Future science will allow us to find them, and become like them. Our descendants in the far future will join the community of God-like beings among the stars and beyond, and use transcendent technology to resurrect the dead and remake the universe. …

God exists, controls reality, will resurrect the dead and remake the universe. … Now you don’t have to fear death, and you can endure the temporary separation from your loved departed ones. … Future science and technology will validate and realize all the promises of religion. … God elevates love and compassion to the status of fundamental forces, key drivers for the evolution of the universe. … God is also watching you here and now, cares for you, and perhaps helps you now and then. … God has a perfectly good communication channel with us: our own inner voice.

Now I should note that he doesn’t endorse most specific religious dogma, just what religions have in common:

Many religions have really petty, extremely parochial aspects related to what and when one should eat or drink or what sex is allowed and with whom. I don’t care for this stuff at all. It isn’t even geography – it’s local zoning norms, often questionable, sometimes ugly. … [But] the common cores, the cosmological and mystical aspects of different religions, are similar or at least compatible. 

Even so, Prisco is making very strong claims. And in 339 pages, he has plenty of space to argue for them. But Prisco instead mostly uses his space to show just how many people across history have made similar claims, including folks associated with religion, futurism, and physics. Beyond this social proof, he seems content to say that physics can’t prove him wrong: Continue reading "Tales of the Turing Church" »


Perpetual Motion Via Negative Matter?

One of the most important things we will ever learn about the universe is just how big it is, practically, for our purposes. In the last century we’ve learned that it is far larger than we knew, in a great many ways. At the moment we are pretty sure that it is about 13 billion years old, and that it seems much larger in spatial directions. We have decent estimates for both the total space-time volume we can ever see, and all that we can ever influence.

For each of these volumes, we also have decent estimates of the amount of ordinary matter they contain, how much entropy that now contains, and how much entropy it could create via nuclear reactions. We also have decent estimates of the amount of non-ordinary matter, and of the much larger amount of entropy that matter of all types could produce if collected into black holes.
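For a sense of scale on that last point, the standard Bekenstein–Hawking formula S/k_B = 4πGM²/(ħc) gives the entropy of a (non-rotating) black hole. A quick sketch for one solar mass, which comes out around 10^77 in units of k_B, versus the roughly 10^58 k_B commonly estimated for the Sun as an ordinary star:

```python
import math

G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar  = 1.055e-34   # reduced Planck constant, J s
c     = 2.998e8     # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

# Bekenstein-Hawking entropy in units of k_B: S/k_B = 4*pi*G*M^2 / (hbar*c)
S_bh = 4 * math.pi * G * M_sun**2 / (hbar * c)
print(f"S_bh ~ 10^{math.log10(S_bh):.0f} k_B")
```

So collapsing a star into a black hole multiplies its entropy by a factor on the order of 10^19, which is why black holes dominate these potential-entropy estimates.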

In addition, we have plausible estimates of how (VERY) long it will take to actually use all that potential entropy. If you recall, matter and volume is what we need to make stuff, and potential entropy, beyond current actual entropy, (also known as “negentropy”) is the key resource needed to drive this stuff in desired directions. This includes both biological life and artificial machinery.

Probably the thing we most care about doing with all that stuff in the universe is creating and sustaining minds like ours. We know that this can be done via bodies and brains like ours, but it seems that far more minds could be supported via artificial computer hardware. However, we are pretty uncertain about how much computing power it takes (when done right) to support a mind like ours, and also about how much matter, volume, and entropy it takes (when done right) to produce any given amount of computing power.

For example, in computing theory we don’t even know if P=NP. We think this claim is false, but if true it seems that we can produce vastly more useful computation with any given amount of computing power, which probably means sustaining a lot more minds. Though I know of no concrete estimate of how many more.

It might seem that at least our physics estimates of available potential entropy are less uncertain than this, but I was recently reminded that we actually aren’t even sure that this amount is finite. That is, it might be that our universe has no upper limit to entropy. In which case, one could keep running physical processes (like computers) that increase entropy forever, creating proverbial “perpetual motion machines”. Some say that such machines are in conflict with thermodynamics, but that is only true if there’s a maximum entropy.

Yes, there’s a sense in which a spatially infinite universe has infinite entropy, but that’s not useful for running any one machine. Yes, if it were possible to perpetually create “baby universes”, then one might perpetually run a machine that can fit each time into the entrance from one universe into its descendant universe. But that may be a pretty severe machine size limit, and we don’t actually know that baby universes are possible. No, what I have in mind here is the possibility of negative mass, which might allow unbounded entropy even in a finite region of ordinary space-time.

Within the basic equations of Newtonian physics lie the potential for an exotic kind of matter: negative mass. Just let the mass of some particles be negative, and you’ll see that gravitationally the negative masses push away from each other, but are drawn toward the positive masses, which are drawn toward each other. Other forces can exist too, and in terms of dynamics, it’s all perfectly consistent.

Now today we formally attribute the Casimir effect to spatial regions filled with negative mass/energy, and we sometimes formally treat the absence of a material as another material (think of bubbles in water), and these often formally have negative mass. But other than these, we’ve so far not seen any material up close that acts locally like it has negative mass, and this has been a fine reason to ignore the possibility.

However, we’ve known for a while now that over 95% of the universe seems to be made of unknown stuff that we’ve never seen interact with any of the stuff around us, except via long distance gravity interactions. And most of that stuff seems to be a “dark energy” which can be thought of as having a negative mass/energy density. So negative mass particles seem a reasonable candidate to consider for this strange stuff. And the reason I thought about this possibility recently is that I came across this article by Jamie Farnes, and associated commentary. Farnes suggests negative mass particles may fill voids between galaxies, and crowd around galaxies compacting them, simultaneously explaining galaxy rotation curves and accelerating cosmic expansion.

Apparently, Einstein considered invoking negative mass particles to explain (what he thought was) the observed lack of cosmic expansion, before he switched to a more abstract explanation, which he dropped after cosmic expansion was observed. Some say that Farnes’s attempt to integrate negative mass into general relativity and quantum particle physics fails, and I have no opinion on that. Here I’ll just focus on simpler physics considerations, and presume that there must be some reasonable way to extend the concept of negative mass particles in those directions.

One of the first things one usually learns about negative mass is what happens in the simple scenario wherein two particles with exactly equal and opposite masses start off exactly at rest relative to one another, and have any force between them. In this scenario, these two particles accelerate together in the same direction, staying at the same relative distance, forevermore. This produces arbitrarily large velocities in simple Newtonian physics, and arbitrarily larger absolute masses in relativistic physics. This seems a crazy result, and it probably put me off the negative mass idea when I first heard about it.
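That runaway pair is easy to check numerically. In Newtonian gravity each particle’s acceleration depends only on the other particle’s mass, so an exactly equal-and-opposite pair accelerates in lockstep while its separation stays fixed. A minimal one-dimensional sketch, with unit masses and G chosen purely for illustration:

```python
G = 1.0
m_pos, m_neg = 1.0, -1.0   # equal and opposite masses
x_pos, x_neg = 1.0, 0.0    # positive mass starts "ahead" of the negative one
v_pos, v_neg = 0.0, 0.0
dt = 1e-3

for _ in range(10_000):    # simple Euler integration out to t = 10
    r = x_pos - x_neg
    # Newtonian gravity: a_i = -G * m_other * (x_i - x_other) / |r|^3
    a_pos = -G * m_neg * (x_pos - x_neg) / abs(r)**3  # negative m_neg: pushed away
    a_neg = -G * m_pos * (x_neg - x_pos) / abs(r)**3  # pulled toward the positive mass
    v_pos += a_pos * dt; v_neg += a_neg * dt
    x_pos += v_pos * dt; x_neg += v_neg * dt

# separation stays fixed while both velocities keep growing
print(x_pos - x_neg, round(v_pos, 3))
```

The negative mass chases the positive one at constant separation, and the pair’s speed grows without bound, just as the paragraph above describes.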

But this turns out to be an extremely unusual scenario for negative mass particles. Farnes did many computer simulations with thousands of gravitationally interacting negative and positive mass particles of exactly equal mass magnitudes. These simulations consistently “reach dynamic equilibrium” and “no runaway particles were detected”. So as a matter of practice, runaway seems quite rare, at least via gravity.

A related worry is that if there were a substantial coupling associated with making pairs of positive and negative mass particles that together satisfy relative conservation laws, such pairs would be created often, leading to a rapid and apparently unending expansion in total particle number. But the whole idea of dark stuff is that it only couples very weakly to ordinary matter. So if we are to explain dark stuff via negative mass particles, we can and should postulate no strong couplings that allow easy creation of pairs of positive and negative mass particles.

However, even if the postulate of negative mass particles were consistent with all of our observations of a stable pretty-empty universe (and of course that’s still a big if), the runaway mass pair scenario does at least weakly suggest that entropy may have no upper bound when negative masses are included. The stability we observe only suggests that current equilibrium is “metastable” in the sense of not quickly changing.

Metastability is already known to hold for black holes; merging available matter into a few huge black holes could vastly increase entropy, but that only happens naturally at a very slow rate. By making it happen faster, our descendants might greatly increase their currently available potential entropy. Similarly, our descendants might gain even more potential entropy by inducing interactions between mass and negative mass that would naturally be very rare.

That is, we don’t even know if potential entropy is finite, even within a finite volume. Learning that will be very big news, for good or bad.


The Aristillus Series

There’s a contradiction at the heart of science fiction. Science fiction tends to celebrate the engineers and other techies who are its main fans. But there are two conflicting ways to do this. One is to fill a story with credible technical details, details that matter to the plot, and celebrate characters who manage this detail well. The other approach is to present tech as the main cause of an impressive future world, and of big pivotal events in that world.

The conflict comes from it being hard to give credible technical details about an impressive future world, as we don’t know much about future tech. One can give lots of detail about current tech, but people aren’t very impressed with the world they live in (though they should be). Or one can make up detail about future tech, but that detail isn’t very credible.

A clever way to mitigate this conflict is to introduce one dramatic new tech, and then leave all other tech the same. (Vinge gave a classic example.) Here, readers can be impressed by how big a difference one new tech could make, and yet still revel in heroes who win in part by mastering familiar tech detail. Also, people like me who like to think about the social implications of tech can enjoy a relatively manageable task: guess how one big new tech would change an otherwise familiar world.

I recently enjoyed the science fiction book pair The Aristillus Series: Powers of the Earth, and Causes of Separation, by Travis J I Corcoran (@MorlockP), funded in part via Kickstarter, because it in part followed this strategy. Also, it depicts betting markets as playing a small part in spreading info about war details. In addition, while most novels push some sort of unrealistic moral theme, the theme here is at least relatively congenial to me: nice libertarians seek independence from a mean over-regulated Earth:

Earth in 2064 is politically corrupt and in economic decline. The Long Depression has dragged on for 56 years, and the Bureau of Sustainable Research is making sure that no new technologies disrupt the planned economy. Ten years ago a band of malcontents, dreamers, and libertarian radicals used a privately developed anti-gravity drive to equip obsolete and rusting sea-going cargo ships – and flew them to the moon. There, using real world tunnel-boring-machines and earth-moving equipment, they’ve built their own retreat.

The one big new tech here is anti-gravity, made cheaply from ordinary materials and constructible by ordinary people with common tools. One team figures it out, and for a long time no other team has any idea how to do it, or any remotely similar tech, and no one tries to improve it; it just is.

Attaching antigrav devices to simple refitted ocean-going ships, our heroes travel to the moon, set up a colony, and create a smuggling ring to transport people and stuff to there. Aside from those magic antigravity devices, these books are chock full of technical mastery of familiar tech not much beyond our level, like tunnel diggers, guns, space suits, bikes, rovers, crypto signatures, and computer software. These are shown to have awkward gritty tradeoffs, like most real tech does.

Alas, Corcoran messes this up a bit by adding two more magic techs: one superintelligent AI, and a few dozen smarter-than-human dogs. Oh and the same small group is implausibly responsible for saving all three magic techs from destruction. As with antigravity, in each case one team figures it out, no other team has any remotely similar tech, and no one tries to improve them. But these don’t actually matter that much to the story, and I can hope they will be cut if/when this is made into a movie.

The story begins roughly a decade after the moon colony started, when it has one hundred thousand or a million residents. (I heard conflicting figures at different points.) Compared to Earth folk, colonists are shown as enjoying as much product variety, and a higher standard of living. This is attributed to their lower regulation.

While Earth powers dislike the colony, they are depicted at first as being only rarely able to find and stop smugglers. But a year later, when thousands of ships try to fly to the moon all at once from thousands of secret locations around the planet, Earth powers are depicted as being able to find and shoot down 90% of them. Even though this should be harder when thousands fly at once. This change is never explained.

Even given the advantage of a freer economy, I find it pretty implausible that a colony could be built this big and fast with this level of variety and wealth, all with no funding beyond what colonists can carry. The moon is a long way from Earth, and it is a much harsher environment. For example, while colonists are said to have their own chip industry to avoid regulation embedded in Earth chips, the real chip industry has huge economies of scale that make it quite hard to serve only one million customers.

After they acquire antigrav tech, Earth powers go to war with the moon. As the Earth’s economy is roughly ten thousand times larger than the moon’s, without a huge tech advantage it is a mystery why anyone thinks the moon has any chance whatsoever to win this war.

The biggest blunder, however, is that no one in the book imagines using antigrav tech on Earth. But if the cost to ship stuff to the moon using antigrav isn’t crazy high, then antigravity must make it far cheaper to ship stuff around on Earth. Antigrav could also make tall buildings cheaper, allowing much denser city centers. The profits to be gained from these applications seem far larger than from smuggling stuff to a small poor moon colony.

So even if we ignore the AI and smart dogs, this still isn’t a competent extrapolation of what happens if we add cheap antigravity to a world like ours. Which is too bad; that would be an interesting scenario to explore.

Added 5:30p: In the book, antigrav is only used to smuggle stuff to/from the moon, until it is used to send armies to the moon. But demand for smuggling should be far larger between places on Earth. In the book thousands of ordinary people are seen willing to make their own antigrav devices to migrate to the moon. But a larger number should be making such devices to smuggle stuff around on Earth.


Stubborn Attachments

Tyler Cowen’s new book, Stubborn Attachments, says many things. But his main claims are, roughly, 1) we should care much more about people who will live in the distant future, and 2) promoting long-run economic growth is a robust way to achieve that end. As a result, we should try much harder to promote long-run economic growth.

Now I don’t actually think his arguments are that persuasive to those inclined to disagree. On 1), the actions of most people suggest that they don’t actually care much about the distant future, and there exist quite consistent preferences (including moral preferences) to represent this position. (Also, I have to wonder how much Tyler cares, as in the 20 years I’ve known him I’ve often worked on distant future issues, and he’s shown almost no interest in such things.)

On 2), while Tyler mainly argues for econ growth by pointing to good trends over the last few centuries, many people see bad trends as outweighing the good, and many others see recent trends as temporary historical deviations. Tyler also doesn’t consider that future techs which speed population growth could cut the connection observed recently between total and per-capita growth; I describe such a scenario in my book Age of Em.

Tyler being Tyler, he is generally vague and gives himself many outs to avoid criticism. For example, he says that rights should take priority over growth, but he doesn’t specify those rights. He says he only advocates growing “wealth plus” which includes any good thing you could want, so don’t complain that growth will hurt a good thing. He notes that the priority on growth can justify the usual intuition excusing limited redistribution, but doesn’t mention that this won’t at all excuse not doing everything possible to promote growth. He says he isn’t committed to econ growth being possible forever, but only to a finite chance of eternal growth. Yet focusing all policy on trying to increase growth within some tiny-chance eternal growth scenario is overwhelmingly likely to seem a huge mistake later.

However, as I personally happen to agree with his main claims, at least the way I phrased them, I’d rather focus on their implications, which Tyler severely neglects. The following are the only “concrete” things he says about how exactly to promote long term econ growth:

For some more concrete recommendations, I’ll suggest the following: a) Policy should be more forward-looking and more concerned about the more distant future. b) Governments should place a much higher priority on investment than is currently the case, in both the private sector and the public sector. … c) Policy should be more concerned with economic growth, properly specified, and policy discussion should pay less heed to other values. … d) We should be more concerned with the fragility of our civilization. … e) We should be more charitable on the whole, but we are not obliged to give away all of our wealth. … f) We can embrace much of common sense morality with the knowledge that it is not inconsistent with a deeper ethical theory. … g) When it comes to most “small” policies affecting the present and the near-present only, we should be agnostic.

More “investment” and “growth”, that’s it?! We actually know of many more specific ways to encourage choices that promote long term growth, but they mostly come at substantial costs. I don’t know how much you actually support faster long-term growth until I hear which such policies you’ll support. Continue reading "Stubborn Attachments" »


On the Future by Rees

In his broad-reaching new book, On the Future, aging famous cosmologist Martin Rees says aging famous scientists too often overreach:

Scientists don’t improve with age—that they ‘burn out’. … There seem to be three destinies for us. First, and most common, is a diminishing focus on research. …

A second pathway, followed by some of the greatest scientists, is an unwise and overconfident diversification into other fields. Those who follow this route are still, in their own eyes, ‘doing science’—they want to understand the world and the cosmos, but they no longer get satisfaction from researching in the traditional piecemeal way: they over-reach themselves, sometimes to the embarrassment of their admirers. This syndrome has been aggravated by the tendency for the eminent and elderly to be shielded from criticism. …

But there is a third way—the most admirable. This is to continue to do what one is competent at, accepting that … one can probably at best aspire to be on a plateau rather than scaling new heights.

Rees says this in a book outside his initial areas of expertise, a book that has gained many high-profile, fawning, uncritical reviews, a book wherein he whizzes past dozens of topics just long enough to state his opinion, but not long enough to offer detailed arguments or analysis in support. He seems oblivious to this parallel, though perhaps he’d argue that the future is not “science” and so doesn’t reward specialized study. As the author of a book that tries to show that careful detailed analysis of the future is quite possible and worthwhile, I of course disagree.

As I’m far from prestigious enough to get away with a book like his, let me instead try to get away with a long, probably ignored blog post wherein I take issue with many of Rees’ claims. While I of course also agree with much else, I’ll focus on disagreements. I’ll first discuss his factual claims, then his policy/value claims. Quotes are indented; my responses are not.  Continue reading "On the Future by Rees" »


Vulnerable World Hypothesis

I’m a big fan of Nick Bostrom; he is way better than almost all other future analysts I’ve seen. He thinks carefully and writes well. A consistent theme of Bostrom’s over the years has been to point out future problems where more governance could help. His latest paper, The Vulnerable World Hypothesis, fits in this theme:

Consider a counterfactual history in which Szilard invents nuclear fission and realizes that a nuclear bomb could be made with a piece of glass, a metal object, and a battery arranged in a particular configuration. What happens next? … Maybe … ban all research in nuclear physics … [Or] eliminate all glass, metal, or sources of electrical current. … Societies might split into factions waging civil wars with nuclear weapons, … end only when … nobody is able any longer to put together a bomb … from stored materials or the scrap of city ruins. …

The ​vulnerable world hypothesis​ [VWH] … is that there is some level of technology at which civilization almost certainly gets destroyed unless … civilization sufficiently exits the … world order characterized by … limited capacity for preventive policing​, … limited capacity for global governance.​ … [and] diverse motivations​. … It is ​not​ a primary purpose of this paper to argue VWH is true. …

Four types of civilizational vulnerability. … in the “easy nukes” scenario, it becomes too easy for individuals or small groups to cause mass destruction. … a technology that strongly incentivizes powerful actors to use their powers to cause mass destruction. … counterfactual in which a preemptive counterforce [nuclear] strike is more feasible. … the problem of global warming [could] be far more dire … if the atmosphere had been susceptible to ignition by a nuclear detonation, and if this fact had been relatively easy to overlook …

two possible ways of achieving stabilization: Create the capacity for extremely effective preventive policing.​ … and create the capacity for strong global governance. … While some possible vulnerabilities can be stabilized with preventive policing alone, and some other vulnerabilities can be stabilized with global governance alone, there are some that would require both. …

It goes without saying there are great difficulties, and also very serious potential downsides, in seeking progress towards (a) and (b). In this paper, we will say little about the difficulties and almost nothing about the potential downsides—in part because these are already rather well known and widely appreciated.

I take issue a bit with this last statement. The vast literature on governance shows both many potential advantages of and problems with having more relative to less governance. It is good to try to extend this literature into futuristic considerations, by taking a wider longer term view. But that should include looking for both novel upsides and downsides. It is fine for Bostrom to seek not-yet-appreciated upsides, but we should also seek not-yet-appreciated downsides, such as those I’ve mentioned in two recent posts.

While Bostrom doesn’t in his paper claim that our world is in fact vulnerable, he released his paper at a time when many folks in the tech world have been claiming that changing tech is causing our world to in fact become more vulnerable over time to analogies of his “easy nukes” scenario. Such people warn that it is becoming easier for smaller groups and individuals to do more damage to the world via guns, bombs, poison, germs, planes, computer hacking, and financial crashes. And Bostrom’s book Superintelligence can be seen as such a warning. But I’m skeptical, and have yet to see anyone show a data series displaying such a trend for any of these harms.

More generally, I worry that “bad cases make bad law”. Legal experts say it is bad to focus on extreme cases when changing law, and similarly it may go badly to focus on very unlikely but extreme-outcome scenarios when reasoning about future-related policy. It may be very hard to weigh extreme but unlikely scenarios suggesting more governance against extreme but unlikely scenarios suggesting less governance. Perhaps the best lesson is that we should make it a priority to improve governance capacities, so we can better gain upsides without paying downsides. I’ve been working on this for decades.

I also worry that existing governance mechanisms do especially badly with extreme scenarios. The history of how the policy world responded badly to extreme nanotech scenarios is a case worth considering.

Added 8am:

Kevin Kelly in 2012:

The power of an individual to kill others has not increased over time. To restate that: An individual — a person working alone today — can’t kill more people than say someone living 200 or 2,000 years ago.

Anders Sandberg in 2018:

Added 19Nov: Vox quotes from this article.
