Tag Archives: Future

Stock Vs. Flow War

When our farmer ancestors warred, they often went about as far as they could to apply all available resources to their war efforts. This included converting plowshares into swords, ships into navies, farmers into soldiers, granaries into soldiers on the move, good will into allies, and cash into foreign purchases. When wars went long and badly, such resources were often quite depleted by the end. Yet warring farmers only rarely went extinct. Why?

The distinction between stock and flow is a basic one in engineering and finance. Stocks allow flows. A granary is a stock, and it can produce a flow of grain to eat, but that flow will end if the stock is not sufficiently replenished with every harvest. A person is a stock, which can produce work every week, but to make that last we need to create and train new people. Many kinds of stocks have limits on the flows they can produce. While you might be able to pull grain from a granary as fast as you like, you can only pull one hour of work from a worker per hour.
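
To make that last distinction concrete, here is a minimal Python sketch of a stock with and without a cap on its flow; the classes and all numbers are made up for illustration only.

# Minimal sketch of stocks with and without flow limits; all numbers are made up.
class Stock:
    def __init__(self, level, max_flow_per_period=None):
        self.level = level                                # how much is stored
        self.max_flow_per_period = max_flow_per_period    # cap on the draw-down rate; None means no cap

    def draw(self, requested):
        # Take as much as the stock and its flow limit allow in one period.
        allowed = requested if self.max_flow_per_period is None else min(requested, self.max_flow_per_period)
        taken = min(allowed, self.level)
        self.level -= taken
        return taken

    def replenish(self, amount):
        self.level += amount

granary = Stock(level=1000)                          # grain can be pulled out as fast as you like
worker = Stock(level=2000, max_flow_per_period=1)    # only one hour of work per hour
print(granary.draw(500))   # 500: limited only by what is stored
print(worker.draw(500))    # 1: the flow limit binds, however large the remaining stock of hours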

Natural limits on the flows that our stocks can produce have in the past limited the destructiveness of war. Even when war burned the crops, knocked down stone buildings, and killed most of the people, farmland usually bounced back in a few years, and human and animal populations could grow back in a few generations. Stones were restacked to make new buildings. The key long-term stocks of tech and culture were preserved, allowing for a quick rebuilding of previous professions, towns, and trade routes.

Future technologies are likely to have weaker limits on the conversion of stocks into flows. When we have more fishing boats we can more quickly deplete the stock of fish. Instead of water wheels that must wait for water to come down a stream, we make dams that give us water when we want. When we tap oil wells instead of killing whales for oil, the rate at which we can extract oil grows with the size and number of our wells. Eventually we may tap the sun itself not just by basking in its sunlight, but by uplifting its material and running more intense fusion reactors.

Our stronger abilities to turn stocks into flows can be great in peacetime, but they are problematic in wartime. Yes, the side with stronger abilities gains an advantage in war, but after a fierce war the stocks will be lower. Thus improving technology is making war more destructive, not just by blowing up more with each bomb, but by allowing more resources to be tapped more quickly to support war efforts.

This is another way of saying what I was trying to say in my last post: improving tech can make war more destructive, increasing the risk of extinction via war. When local nature was a key stock, diminishing returns in extracting resources from nature limited how much we could destroy during total war. In contrast, when resources can be extracted as quickly and easily as grain from a granary, war is more likely to take nearly all of the resources.

Future civilization should make resources more accessible, not just to extract more kinds of slow flows, but also to extract fast flows more cheaply. While this will make it easier to flexibly use such stocks in peacetime, it also suggests a faster depletion of stocks during total war. Only the stocks that cannot be depleted, like technology and culture, may remain. And once the sun is available as a rapidly depletable resource, it may not take many total wars to deplete it.

This seems to me our most likely future great filter, and thus extinction risk. War becomes increasingly destructive, erasing stocks that are not fully replenished between wars, and often taking us to the edge of a small fragile population that could be further reduced by other disasters. And if the dominant minds and cultures speed up substantially, as I expect, that might speed up the cycle of war, allowing less time to recover between total wars.

Beware General Visible Prey

Charles Stross recently on possible future great filters:

So IO9 ran a piece by George Dvorsky on ways we could wreck the solar system. And then Anders Sandberg responded in depth on the subject of existential risks, asking what conceivable threats have big enough spatial reach to threaten an interplanetary or star-faring civilization. … The implication of a [future great filter] is that it doesn’t specifically work against life, it works against interplanetary colonization. … much as Kessler syndrome could effectively block all access to low Earth orbit as a side-effect of carelessly launching too much space junk. Here are some example scenarios: …

Simplistic warfare: … Today’s boringly old-hat chemical rockets, even in the absence of nuclear warheads, are formidably destructive weapons. … War, or other resource conflicts, within a polity capable of rapid interplanetary or even slow interstellar flight, is a horrible prospect.

Irreducible complexity: I take issue with one of Anders’ assumptions, which is that a multi-planet civilization is … not just … distributed, but it will almost by necessity have fairly self-sufficient habitats that could act as seeds for a new civilization if they survive. … I doubt that we could make a self-sufficient habitat that was capable of maintaining its infrastructure and perpetuating and refreshing its human culture with a population any smaller than high-single-digit millions. … Building robust self-sufficient off-world habitats … is vastly more expensive than building an off-world outpost and shipping rations there, as we do with Antarctica. …

Griefers: … All it takes is one civilization of alien ass-hat griefers who send out just one Von Neumann Probe programmed to replicate, build N-D lasers, and zap any planet showing signs of technological civilization, and the result is a galaxy sterile of interplanetary civilizations until the end of the stelliferous era. (more)

These are indeed scenarios of concern. But I find it hard to see how, by themselves, they could add up to a big future filter.

Ford’s Rise of Robots

In the April issue of Reason magazine I review Martin Ford’s new book Rise of the Robots:

Basically, Ford sees a robotic catastrophe coming soon because he sees disturbing signs of the times: inequality, job loss, and so many impressive demos. It’s as if he can feel it in his bones: Dark things are coming! We know robots will eventually take most jobs, so this must be now. … [But] In the end, it seems that Martin Ford’s main issue really is that he dislikes the increase in inequality and wants more taxes to fund a basic income guarantee. All that stuff about robots is a distraction. (more)

I’ll admit Ford is hardly alone, and he ably summarizes what are quite common views. Even so, I’m skeptical.

Growth Could Slow

Human history has seen accelerating growth, via a sequence of faster growth modes. First humans grew faster than other primates, then farmers grew faster than foragers, and recently industry has grown faster than farming. Most likely, another even faster growth mode lies ahead. But it is worth remembering that this need not happen. For a very concrete historical analogue, the Cambrian Explosion of multi-cellular life seems to have resulted from an accelerating series of key transitions. But then around 520 million years ago, after life had explored most multi-cellular variations, change slowed way down:

In just a few tens of millions of years – a geological instant – almost every major animal group we know made its first appearance in the fossil record, and the ecology of the planet was transformed forever. …

Scientists have struggled to explain what sparked this sudden burst of innovation. Until recently, most efforts tried to find a single trigger, but over the past year or two, a different explanation has begun to emerge. The Cambrian explosion appears to have been life’s equivalent of the perfect storm. Instead of one trigger, there was a whole array of them amplifying one another to generate a hotbed of animal evolution the likes of which the world has never seen before or since. …

The first sign of multicellular animals is in rocks about 750 million years old, which contain fossilised biomolecules found today only in sponges. Then another 150 million apparently uneventful years passed before the appearance of the Ediacaran fauna. This enigmatic group of multicellular organisms of uncertain affinities to other lifeforms flourished in the oceans up to the beginning of the Cambrian. Then [110 million years later] all hell broke loose. … Studies of “molecular clocks” – which use the gradual accumulation of genetic changes to estimate when particular evolutionary branches diverged – suggest that animal complexity emerged before the Cambrian. …

Two huge ecological innovations make their debut in the Cambrian fossil record. … The first is the ability to burrow into the sea floor. … The second innovation was predation. … What else were these early creatures waiting for? One intriguing possibility is that they were waiting for fertiliser. Geological evidence suggests that rising sea levels during the Cambrian could have increased erosion, boosting levels of nutrients such as calcium, phosphate and potassium in the oceans. …

Atmospheric oxygen levels crept up gradually. … The crucial threshold seemed to be between 1 and 5 per cent of present oxygen levels. Geochemists’ best guess at when the ancient oceans reached this point is about 550 million years ago – just in time to kick off predation and its resulting ecological feedback. …

Precambrian oceans were full of single-celled algae and bacteria. When these small cells died, they would have started to sink, decomposing quickly as they went – and because decomposition consumes oxygen, this would have kept ocean waters anoxic. Filter-feeding sponges, which evolved sometime before the Ediacaran, then started clearing these cells out of the water column before they died and decomposed. The sponges themselves, being larger, were more likely to be buried in the sediment after death, allowing oxygen to remain in the water. Over time, this would have led ever more of the ocean to become oxygenated. (more)

So it remains possible that growth will slow down now, or after the next transition, even if a new series of accelerating transitions lies far ahead.

The Evolution-Is-Over Fallacy

David Brin and Jerome Barkow both responded to my last Cato Unbound comment by assuming that the evolution of aliens would end somewhere around our human level of development. While aliens would acquire new tech, there would be little further change in their preferences or basic psychology over the following millions or billions of years. In my latest comment, I mainly just repeat what I’d said before:

Even when each creature has [powerful tech and] far broader control [over its local environment], this won’t prevent selection from favoring creatures who better use their controls to survive and reproduce. No, what is required to stop selection is very broad and strong coordination. As I wrote:

Yes it is possible that a particular group of aliens will somehow take collective and complete control over all local evolution early in their history, and thereby forever retain their early styles. … Such collective control requires quite advanced coordination abilities. … Anything less than complete control of evolution would not end evolution; it would instead create a new environment for adaptation.

My guess is that even when this happens, it will only be after a great degree of adaptation to post-biological possibilities. So even then adaptation to advanced technology should be useful in predicting their behaviors.

I’ll call this mistake the “evolution is over” fallacy, and I nominate it as the most important fallacy about aliens, and our future. Evolutionary selection of preferences and psychology is not tied to DNA-based replication, or to making beings out of squishy proteins, or to a lack of intelligence. Selection is instead a robust long-run feature of decentralized competition. The universe is influenced more by whatever wins competitions for influence; where competition continues, selection also continues.

“Slow” Growth Is Cosmo-Fast

In my first response to Brin at Cato Unbound (and in one followup), I agreed with him that we shouldn’t let each group decide whether to yell to aliens. In my second response, I criticize Brin’s theory that the universe is silent because most alien civilizations fall into slowly-innovating “feudal” societies like those during the farmer era:

We have so far had three eras of growth: forager, farmer, and industry. … In all three eras, growth was primarily caused by innovation. …

A thousand doublings of the economy seems plenty to create a very advanced civilization. After all, that would give a factor of ten to the power of three hundred increase in economic capacity, and there are only roughly ten to the eighty atoms in the visible universe. Yes, at our current industry rates of growth, we’d produce that much growth in only fifteen thousand years, while at farmer rates of growth it would take a million years.

But a million years is still only a small blip of cosmological time. It is even plausible for a civilization to reach very advanced levels while growing at the much slower forager rate. While a civilization growing at forager rates would take a quarter billion years to grow a thousand factors of two, the universe is thirteen billion years old, and our planet is four billion. So there has been plenty of time for very slow growing aliens to become very advanced. (more)
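
To make the arithmetic in that passage explicit, here is a small sketch; the doubling times are rough stylized values backed out of the quoted numbers, not precise estimates.

import math

doublings = 1000
# 2^1000 is about 10^301, versus roughly 10^80 atoms in the visible universe.
print(round(doublings * math.log10(2)))   # ~301 orders of magnitude of economic capacity

# Rough doubling times per growth mode, in years (stylized values implied by the quote).
doubling_time_years = {"industry": 15, "farmer": 1000, "forager": 250_000}
for mode, dt in doubling_time_years.items():
    print(mode, f"{doublings * dt:,} years for {doublings} doublings")
# industry: 15,000 years; farmer: 1,000,000 years; forager: 250,000,000 years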

AI Boom Bet Offers

A month ago I mentioned that lots of folks are now saying “this time is different” – we’ll soon see a big increase in jobs lost to automation, even though we’ve heard such warnings every few decades for centuries. Recently Elon Musk joined in:

The risk of something seriously dangerous happening is in the five year timeframe … 10 years at most.

If new software will soon let computers take over many more jobs, that should greatly increase the demand for such software. And it should greatly increase the demand for computer hardware, which is a strong complement to software. So we should see a big increase in the quantity of computer hardware purchased. The US BEA has been tracking the fraction of the US economy devoted to computer and electronics hardware. That fraction was 2.3% in 1997, 1.7% in 2003, 1.58% in 2008, and 1.56% in 2012. I offer to bet that this number won’t rise above 5% by 2025. And I’ll give 20-1 odds! So far, I have no takers.

The US BLS tracks the US labor share of income, which has fallen from 64% to 58% in the last decade, a clear deviation from prior trends. I don’t think this fall is mainly due to automation, and I think it may continue to fall for those other reasons. Even so, I think this figure is rather unlikely to fall below 40% by 2025. So I bet Chris Hallquist at 12-1 odds against this (my $1200 to his $100).
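
For readers not used to odds notation, here is a short sketch of what these offers imply; the belief value q below is purely hypothetical.

def implied_probability(odds_against):
    # An X-to-1 offer against an event is a fair bet when the event has probability 1/(X+1).
    return 1.0 / (odds_against + 1)

print(implied_probability(20))   # ~0.048: the 20-1 hardware-share offer is worth taking if you think P > ~4.8%
print(implied_probability(12))   # ~0.077: the 12-1 labor-share bet breaks even at P = 1/13

# Expected value to the side risking $1200 against $100 at 12-1, if its true probability for the event is q.
q = 0.05   # purely hypothetical belief
print(q * -1200 + (1 - q) * 100)   # +$35 at q = 0.05; turns negative once q exceeds 1/13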

Yes it would be better to bet on software demand directly, and on world stats, not just US stats. But these stats seem hard to find.

Added 3p: US CS/Eng college majors were: 6.5% in ’70, 9.7% in ’80, 9.6% in ’90, 9.4% in ’00, 7.9% in ’10. I’ll give 8-1 odds against > 15% by 2025. US CS majors were: 2.4K in ’70, 15K in ’80, 25K in ’90, 44K in ’00, 59K in ’03, 43K in ’10 (out of 1716K total grads). I’ll give 10-1 against > 200K by 2025.

Added 9Dec: On Twitter @harryh accepted my 20-1 bet for $50. And Sam beats my offer.

Em Software Results

After requesting your help, I should tell you what it added up to. The following is an excerpt from my book draft, illustrated by this diagram:

[Figure: software intensity diagram]

In our world, the cost of computing hardware has been falling rapidly for decades. This fall has forced most computer projects to be short term, so that products can be used before they are made obsolete. The increasing quantity of software purchased has also led to larger software projects, which involve more engineers. This has shifted the emphasis toward more communication and negotiation, and also more modularity and standardization in software styles.

The cost of hiring human software engineers has not fallen much in decades. The increasing divergence between the cost of engineers and the cost of hardware has also led to a decreased emphasis on raw performance, and an increased emphasis on tools and habits that can quickly generate correct if inefficient performance. This has led to an increased emphasis on modularity, abstraction, and on high-level operating systems and languages. High-level tools insulate engineers more from the details of hardware, and from distracting tasks like type checking and garbage collection. As a result, software is less efficient and well-adapted to context, but more valuable overall. An increasing focus on niche products has also increased the emphasis on modularity and abstraction.

Em software engineers would be selected for very high productivity, and use the tools and styles preferred by the highest productivity engineers. There would be little interest in tools and methods specialized to be useful “for dummies.” Since em computers would tend to be more reversible and error-prone, em software would be more focused on those cases as well. Because the em economy would be larger, its software industry would be larger as well, supporting more specialization.

The transition to an em economy would greatly lower wages, thus inducing a big one-time shift back toward an emphasis on raw context-dependent performance, relative to abstraction and easier modifiability. The move away from niche products would add to this tendency, as would the ability to save copies of the engineer who just wrote the software, to help later with modifying it. On the other hand, a move toward larger software projects could favor more abstraction and modularity.

After the em transition, the cost of em hardware would fall at about the same speed as the cost of other computer hardware. Because of this, the tradeoff between performance and other considerations would change much less as the cost of hardware fell. This should greatly extend the useful lifetime of programming languages, tools, and habits matched to particular performance tradeoff choices.

After an initial period of large rapid gains, the software and hardware designs for implementing brain emulations would probably reach diminishing returns, after which there would only be minor improvements. In contrast, non-em software will probably improve about as fast as computer hardware improves, since algorithm gains in many areas of computer science have for many decades typically remained close to hardware gains. Thus after ems appear, em software engineering and other computer-based work would slowly get more tool-intensive, with a larger fraction of value added by tools. However, the intensity of use of non-computer-based tools (e.g., bulldozers), and the fraction of value added by such tools, would probably fall, since those tools probably improve less quickly than would em hardware.

For over a decade now, the speed of fast computer processors has increased at a much lower rate than the cost of computer hardware has fallen. We expect this trend to continue long into the future. In contrast, the em hardware cost will fall with the cost of computer hardware overall, because the emulation of brains is a very parallel task. Thus ems would see an increasing sluggishness of software that has a large serial component, i.e., which requires many steps to be taken one after the other, relative to more parallel software. This sluggishness would directly reduce the value of such software, and also make such software harder to write.
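
An Amdahl's-law-style sketch of that sluggishness, with made-up numbers: as em speedup grows while serial processor speed stays flat, the serial part of a program comes to dominate the em's subjective wait.

def subjective_wait(em_speedup, serial_fraction, parallel_speedup):
    # Subjective time an em spends waiting for a program, relative to a baseline human
    # waiting for the same program on one processor. The parallel part of the work can be
    # spread across many processors; the serial part cannot.
    wall_clock = serial_fraction + (1 - serial_fraction) / parallel_speedup
    return em_speedup * wall_clock

for serial_fraction in (0.01, 0.1, 0.5):
    print(serial_fraction, subjective_wait(em_speedup=1000, serial_fraction=serial_fraction, parallel_speedup=1000))
# 0.01 -> ~11x, 0.1 -> ~101x, 0.5 -> ~500x the baseline subjective wait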

Thus over time serial software will become less valuable, relative to ems and parallel software. Em software engineers would come to rely less on software tools with a big serial component, and would instead emphasize parallel software, and tools that support that emphasis. Tools like automated type checking and garbage collection would tend to be done in parallel, or not at all. And if it ends up being too hard to write parallel software, then the value of software more generally may be reduced relative to the value of having ems do tasks without software assistance.

For tasks where parallel software and tools suffice, and where the software doesn’t need to interact with slower physical systems, em software engineers could be productive even when sped up to the top cheap speed. This would often make it feasible to avoid the costs of coordinating across engineers, by having a single engineer spend an entire subjective career creating a large software system. For example, an engineer who spent a subjective century at one million times human speed would be done in less than one objective hour. When such a short delay is acceptable, parallel software could be written by a single engineer taking a subjective lifetime.
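
A quick check of that subjective-century figure:

speedup = 1_000_000              # one million times human speed
subjective_years = 100           # one subjective career
objective_hours = subjective_years * 365.25 * 24 / speedup
print(objective_hours)           # ~0.88 objective hours, i.e. less than one hour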

When software can be written quickly via very fast software engineers, product development could happen quickly, even when very large sums were spent. While today investors may spend most of their time tracking current software development projects, those who invest in em software projects of this sort might spend most of their time deciding when is the right time to initiate such a project. A software development race, with more than one team trying to get to market first, would only happen if the same sharp event triggered more than one development effort.

A single software engineer working for a lifetime on a project could still have troubles remembering software that he or she wrote decades before. Because of this, shorter-term copies of this engineer might help him or her to be more productive. For example, short-term em copies might search for and repair bugs, and end or retire once they have explained their work to the main copy. Short-term copies could also search among many possible designs for a module, and end or retire after reporting on their best design choice, to be re-implemented by the main copy. In addition, longer-term copies could be created to specialize in whole subsystems, and younger copies could be revived to continue the project when older copies reached the end of their productive lifetime. These approaches should allow single em software engineers to create far larger and more coherent software systems within a subjective lifetime.
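
The search role described for short-term copies has the shape of an ordinary parallel search. Here is a loose sketch of that pattern; the design space and the scoring rule are placeholders of my own, not anything from the book.

from concurrent.futures import ProcessPoolExecutor

def evaluate_design(candidate):
    # Stand-in for a short-term copy exploring one possible module design and scoring it.
    score = sum(candidate) % 97   # placeholder scoring rule
    return score, candidate

candidate_designs = [(i, 3 * i, 7 * i) for i in range(1000)]   # placeholder design space

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        # Each worker plays the role of a disposable copy; only its report survives to the main copy.
        best_score, best_design = max(pool.map(evaluate_design, candidate_designs))
    print(best_score, best_design)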

Fast software engineers who focus on taking a lifetime to build a large software project, perhaps with the help of copies of themselves, would likely develop more personal and elaborate software styles and tools, and rely less on tools and approaches that help them to coordinate with other engineers with differing styles and uncertain quality. Such lone fast engineers would require local caches of relevant software libraries. When in distantly separated locations, such caches could get out of synch. Local copies of library software authors, available to update their contributions, might help reduce this problem. Out of synch libraries would increase the tendency toward divergent personal software styles.

When different parts of a project require different skills, a lone software engineer might have different young copies trained with different skills. Similarly, young copies could be trained in the subject areas where some software is to be applied, so that they can better understand what variations will have value there.

However, when a project requires different skills and expertise that is best matched to different temperaments and minds, then it may be worth paying extra costs of communication to allow different ems to work together on a project. In this case, such engineers would likely promote communication via more abstraction, modularity, and higher level languages and module interfaces. Such approaches also become more attractive when outsiders must test and validate software, to certify its appropriateness to customers. Enormous software systems could be created with modest sized teams working at the top cheap speed, with the assistance of many spurs. There may not be much need for even larger software teams.

The competition for higher status among ems would tend to encourage faster speeds than would otherwise be efficient. This tendency of fast ems to be high status would tend to raise the status of software engineers.

Hope For A Lumpy Filter

The great filter is the sum total of all of the obstacles that stand in the way of a simple dead planet (or similar sized material) proceeding to give rise to a cosmologically visible civilization. As there are 2^80 stars in the observable universe, and 2^60 within a billion light years, a simple dead planet faces at least roughly 60 to 80 factors of two of obstacles to birthing a visible civilization within 13 billion years. If there is panspermia, i.e., a spreading of life at some earlier stage, the other obstacles must be even larger by the panspermia life-spreading factor.

We know of a great many possible candidate filters, both in our past and in our future. The total filter could be smooth, i.e. spread out relatively evenly among all of these candidates, or it could be lumpy, i.e., concentrated in only one or a few of these candidates. It turns out that we should hope for the filter to be lumpy.

For example, imagine that there are 15 plausible filter candidates, 10 in our past and 5 in our future. If the filter is maximally smooth, then given 60 total factors of two, each candidate would have four factors of two, leaving twenty in our future, for a net chance for us now of making it through the rest of the filter of only one in a million. On the other hand, if the filter is maximally lumpy, and all concentrated in only one random candidate, then we have a 2/3 chance of facing no filter at all in our future. Thus a lumpy filter gives us a much better chance of making it.
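
The arithmetic of that example, as a sketch:

total_bits = 60                                  # total factors of two in the filter
past_candidates, future_candidates = 10, 5
all_candidates = past_candidates + future_candidates

# Maximally smooth: the bits are spread evenly over all candidates.
bits_each = total_bits / all_candidates          # 4 factors of two per candidate
future_bits = bits_each * future_candidates      # 20 factors of two still ahead of us
print(2.0 ** -future_bits)                       # ~1e-6: about one in a million

# Maximally lumpy: all 60 bits sit in one randomly chosen candidate.
print(past_candidates / all_candidates)          # 2/3 chance the whole filter is already behind us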

For “try-try” filters, a system can keep trying over and over until it succeeds. If a set of hard try-try steps (ones that would each typically take far longer than the available window) must all succeed within the window of life on Earth, then the actual times to complete each step must be drawn from the same distribution, and so take similar times. The time remaining after the last step must also be drawn from a similar distribution.
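
A small Monte Carlo sketch of that claim, under the assumption of exponentially distributed hard steps (the step count, rates, and window below are illustrative only): conditioned on all steps finishing inside the window, the step durations and the leftover time at the end come out with similar averages.

import random

def sample_conditioned_steps(n_steps=3, window=1.0, mean_step_time=5.0, tries=1_000_000):
    # Draw step durations, keeping only the rare runs where all steps finish within the window.
    kept = []
    for _ in range(tries):
        waits = [random.expovariate(1.0 / mean_step_time) for _ in range(n_steps)]
        if sum(waits) < window:
            kept.append(waits + [window - sum(waits)])   # record the leftover time too
    return kept

samples = sample_conditioned_steps()
n = len(samples)
print(n, [round(sum(s[i] for s in samples) / n, 3) for i in range(4)])
# Each of the three step durations, and the leftover time, averages near window/4 = 0.25.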

A year ago I reported on a new study estimating that 1.75 to 3.25 billion years remains for life on Earth. This is a long time, and implies that there can’t be many prior try-try filter steps within the history of life on Earth. Only one or two, and none in the last half billion years. This suggests that the try-try part of the great filter is relatively lumpy, at least for the parts that have and will take place on Earth. Which according to the analysis above is good news.

Of course there can be other kinds of filter steps. For example, perhaps life has to hit on the right sort of genetic code right from the start; if life hits on the wrong code, life using that code will entrench itself too strongly to let the right sort of life take over. These sorts of filter steps need not be roughly evenly distributed in time, and so timing data doesn’t say much about how lumpy or uniform those steps are.

It is nice to have some good news. Though I should also remind you of the bad news that anthropic analysis suggests that selection effects make future filters more likely than you would have otherwise thought.

Great Filter TEDx

This Saturday I’ll speak on the great filter at TEDx Limassol in Cyprus. Though I first wrote about the subject in 1996, this is actually the first time I’ve been invited to speak on it. It only took 19 years. I’ll post links here to slides and video when available.

Added 22Sep: A preliminary version of the video can be found here starting at minute 34.

Added 12Dec: The video is finally up.
