Tag Archives: Future

Tales of the Turing Church

My futurist friend Giulio Prisco has a new book: Tales of the Turing Church. In some ways, he is a reasonable skeptic:

I think all these things – molecular nanotechnology, radical life extension, the reanimation of cryonics patients, mind uploading, superintelligent AI and all that – will materialize one day, but not anytime soon. Probably (almost certainly if you ask me) after my time, and yours. … Biological immortality is unlikely to materialize anytime soon. … Mind uploading … is a better option for indefinite lifespans … I don’t buy the idea of a “post-scarcity” utopia. … I think technological resurrection will eventually be achieved, but … in … more like many thousands of years or more.

However, the core of Prisco’s book makes some very strong claims:

Future science and technology will permit playing with the building blocks of spacetime, matter, energy and life in ways that we could only call magic and supernatural today. Someday in the future, you and your loved ones will be resurrected by very advanced science and technology. Inconceivably advanced intelligences are out there among the stars. Even more God-like beings operate in the fabric of reality underneath spacetime, or beyond spacetime, and control the universe. Future science will allow us to find them, and become like them. Our descendants in the far future will join the community of God-like beings among the stars and beyond, and use transcendent technology to resurrect the dead and remake the universe. …

God exists, controls reality, will resurrect the dead and remake the universe. … Now you don’t have to fear death, and you can endure the temporary separation from your loved departed ones. … Future science and technology will validate and realize all the promises of religion. … God elevates love and compassion to the status of fundamental forces, key drivers for the evolution of the universe. … God is also watching you here and now, cares for you, and perhaps helps you now and then. … God has a perfectly good communication channel with us: our own inner voice.

Now I should note that he doesn’t endorse most specific religious dogma, just what religions have in common:

Many religions have really petty, extremely parochial aspects related to what and when one should eat or drink or what sex is allowed and with whom. I don’t care for this stuff at all. It isn’t even geography – it’s local zoning norms, often questionable, sometimes ugly. … [But] the common cores, the cosmological and mystical aspects of different religions, are similar or at least compatible. 

Even so, Prisco is making very strong claims. And in 339 pages, he has plenty of space to argue for them. But Prisco instead mostly uses his space to show just how many people across history have made similar claims, including folks associated with religion, futurism, and physics. Beyond this social proof, he seems content to say that physics can’t prove him wrong.


Perpetual Motion Via Negative Matter?

One of the most important things we will ever learn about the universe is just how big it is, practically, for our purposes. In the last century we’ve learned that it is far larger than we knew, in a great many ways. At the moment we are pretty sure that it is about 13 billion years old, and that it seems much larger in spatial directions. We have decent estimates for both the total space-time volume we can ever see, and all that we can ever influence.

For each of these volumes, we also have decent estimates of the amount of ordinary matter they contain, how much entropy that matter now contains, and how much entropy it could create via nuclear reactions. We also have decent estimates of the amount of non-ordinary matter, and of the much larger amount of entropy that matter of all types could produce if collected into black holes.

In addition, we have plausible estimates of how (VERY) long it will take to actually use all that potential entropy. If you recall, matter and volume are what we need to make stuff, and potential entropy beyond current actual entropy (also known as “negentropy”) is the key resource needed to drive this stuff in desired directions. This includes both biological life and artificial machinery.

Probably the thing we most care about doing with all that stuff in the universe is creating and sustaining minds like ours. We know that this can be done via bodies and brains like ours, but it seems that far more minds could be supported via artificial computer hardware. However, we are pretty uncertain about how much computing power it takes (when done right) to support a mind like ours, and also about how much matter, volume, and entropy it takes (when done right) to produce any given amount of computing power.

For example, in computing theory we don’t even know if P=NP. We think this claim is false, but if true it seems that we can produce vastly more useful computation with any given amount of computing power, which probably means sustaining a lot more minds. Though I know of no concrete estimate of how many more.

It might seem that at least our physics estimates of available potential entropy are less uncertain than this, but I was recently reminded that we actually aren’t even sure that this amount is finite. That is, it might be that our universe has no upper limit to entropy. In which case, one could keep running physical processes (like computers) that increase entropy forever, creating proverbial “perpetual motion machines”. Some say that such machines are in conflict with thermodynamics, but that is only true if there’s a maximum entropy.

Yes, there’s a sense in which a spatially infinite universe has infinite entropy, but that’s not useful for running any one machine. Yes, if it were possible to perpetually create “baby universes”, then one might perpetually run a machine small enough to fit each time through the entrance from one universe into its descendant universe. But that may be a pretty severe machine size limit, and we don’t actually know that baby universes are possible. No, what I have in mind here is the possibility of negative mass, which might allow unbounded entropy even in a finite region of ordinary space-time.

Within the basic equations of Newtonian physics lies the potential for an exotic kind of matter: negative mass. Just let the mass of some particles be negative, and you’ll see that gravitationally the negative masses push away from each other, but are drawn toward the positive masses, which are drawn toward each other. Other forces can exist too, and in terms of dynamics, it’s all perfectly consistent.

Now today we formally attribute the Casimir effect to spatial regions filled with negative mass/energy, and we sometimes formally treat the absence of a material as another material (think of bubbles in water), and these often formally have negative mass. But other than these, we’ve so far not seen any material up close that acts locally like it has negative mass, and this has been a fine reason to ignore the possibility.

However, we’ve known for a while now that over 95% of the universe seems to be made of unknown stuff that we’ve never seen interact with any of the stuff around us, except via long distance gravity interactions. And most of that stuff seems to be a “dark energy” which can be thought of as having a negative mass/energy density. So negative mass particles seem a reasonable candidate to consider for this strange stuff. And the reason I thought about this possibility recently is that I came across this article by Jamie Farnes, and associated commentary. Farnes suggests negative mass particles may fill voids between galaxies, and crowd around galaxies compacting them, simultaneously explaining galaxy rotation curves and accelerating cosmic expansion.

Apparently, Einstein considered invoking negative mass particles to explain (what he thought was) the observed lack of cosmic expansion, before he switched to a more abstract explanation, which he dropped after cosmic expansion was observed. Some say that Farnes’s attempt to integrate negative mass into general relativity and quantum particle physics fails, and I have no opinion on that. Here I’ll just focus on simpler physics considerations, and presume that there must be some reasonable way to extend the concept of negative mass particles in those directions.

One of the first things one usually learns about negative mass is what happens in the simple scenario wherein two particles with exactly equal and opposite masses start off exactly at rest relative to one another, with any force acting between them. In this scenario, these two particles accelerate together in the same direction, staying at the same relative distance, forevermore. This produces arbitrarily large velocities in simple Newtonian physics, and arbitrarily large absolute masses in relativistic physics. This seems a crazy result, and it probably put me off the negative mass idea when I first heard about it.
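To make this runaway concrete, here is a minimal simulation sketch; it is my own illustration (in Python, with units where G = 1), not code from any paper discussed here. The key point is that a particle’s own mass cancels out of a = F/m, so only the other particle’s mass sets its gravitational acceleration; the positive particle is pushed away while the negative particle chases it, both at identical accelerations:

```python
# Two Newtonian point particles, masses +1 and -1, starting at rest.
# Units chosen so G = 1. Illustrative sketch only.
G = 1.0
m = [1.0, -1.0]      # positive mass at x=0, negative mass at x=1
x = [0.0, 1.0]
v = [0.0, 0.0]
dt = 0.01

for step in range(10_000):
    r = x[1] - x[0]
    # each particle's acceleration depends only on the *other* mass;
    # its own mass cancels out of a = F/m
    a = [G * m[1] * r / abs(r)**3,
         G * m[0] * -r / abs(r)**3]
    v = [v[i] + a[i] * dt for i in range(2)]
    x = [x[i] + v[i] * dt for i in range(2)]

print(f"separation: {x[1] - x[0]:.3f}")  # stays at 1.000
print(f"speed:      {abs(v[0]):.1f}")    # ~100 here, growing without bound
```

Note that total momentum, (+1)·v + (−1)·v = 0, and total kinetic energy stay exactly zero throughout, so no conservation law is violated even as the speed grows without bound.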

But this turns out to be an extremely unusual scenario for negative mass particles. Farnes did many computer simulations with thousands of gravitationally interacting negative and positive mass particles of exactly equal mass magnitudes. These simulations consistently “reach dynamic equilibrium” and “no runaway particles were detected”. So as a matter of practice, runaway seems quite rare, at least via gravity.

A related worry is that if there were a substantial coupling associated with making pairs of positive and negative mass particles that together satisfy the relevant conservation laws, such pairs would be created often, leading to a rapid and apparently unending expansion in total particle number. But the whole idea of dark stuff is that it only couples very weakly to ordinary matter. So if we are to explain dark stuff via negative mass particles, we can and should postulate no strong couplings that allow easy creation of pairs of positive and negative mass particles.

However, even if the postulate of negative mass particles were consistent with all of our observations of a stable pretty-empty universe (and of course that’s still a big if), the runaway mass pair scenario does at least weakly suggest that entropy may have no upper bound when negative masses are included. The stability we observe only suggests that current equilibrium is “metastable” in the sense of not quickly changing.

Metastability is already known to hold for black holes; merging available matter into a few huge black holes could vastly increase entropy, but that only happens naturally at a very slow rate. By making it happen faster, our descendants might greatly increase their currently available potential entropy. Similarly, our descendants might gain even more potential entropy by inducing interactions between mass and negative mass that would naturally be very rare.

That is, we don’t even know if potential entropy is finite, even within a finite volume. Learning that will be very big news, for good or bad.


The Aristillus Series

There’s a contradiction at the heart of science fiction. Science fiction tends to celebrate the engineers and other techies who are its main fans. But there are two conflicting ways to do this. One is to fill a story with credible technical details, details that matter to the plot, and celebrate characters who manage this detail well. The other approach is to present tech as the main cause of an impressive future world, and of big pivotal events in that world.

The conflict comes from it being hard to give credible technical details about an impressive future world, as we don’t know much about future tech. One can give lots of detail about current tech, but people aren’t very impressed with the world they live in (though they should be). Or one can make up detail about future tech, but that detail isn’t very credible.

A clever way to mitigate this conflict is to introduce one dramatic new tech, and then leave all other tech the same. (Vinge gave a classic example.) Here, readers can be impressed by how big a difference one new tech could make, and yet still revel in heroes who win in part by mastering familiar tech detail. Also, people like me who like to think about the social implications of tech can enjoy a relatively manageable task: guess how one big new tech would change an otherwise familiar world.

I recently enjoyed the science fiction book pair The Aristillus Series: Powers of the Earth, and Causes of Separation, by Travis J I Corcoran (@MorlockP), funded in part via Kickstarter, because it in part followed this strategy. Also, it depicts betting markets as playing a small part in spreading info about war details. In addition, while most novels push some sort of unrealistic moral theme, the theme here is at least relatively congenial to me: nice libertarians seek independence from a mean over-regulated Earth:

Earth in 2064 is politically corrupt and in economic decline. The Long Depression has dragged on for 56 years, and the Bureau of Sustainable Research is making sure that no new technologies disrupt the planned economy. Ten years ago a band of malcontents, dreamers, and libertarian radicals used a privately developed anti-gravity drive to equip obsolete and rusting sea-going cargo ships – and flew them to the moon. There, using real world tunnel-boring-machines and earth-moving equipment, they’ve built their own retreat.

The one big new tech here is anti-gravity, made cheaply from ordinary materials and constructible by ordinary people with common tools. One team figures it out, and for a long time no other team has any idea how to do it, or any remotely similar tech, and no one tries to improve it; it just is.

Attaching antigrav devices to simple refitted ocean-going ships, our heroes travel to the moon, set up a colony, and create a smuggling ring to transport people and stuff there. Aside from those magic antigravity devices, these books are chock full of technical mastery of familiar tech not much beyond our level, like tunnel diggers, guns, space suits, bikes, rovers, crypto signatures, and computer software. These are shown to have awkward gritty tradeoffs, like most real tech does.

Alas, Corcoran messes this up a bit by adding two more magic techs: one superintelligent AI, and a few dozen smarter-than-human dogs. Oh and the same small group is implausibly responsible for saving all three magic techs from destruction. As with antigravity, in each case one team figures it out, no other team has any remotely similar tech, and no one tries to improve them. But these don’t actually matter that much to the story, and I can hope they will be cut if/when this is made into a movie.

The story begins roughly a decade after the moon colony started, when it has one hundred thousand or a million residents. (I heard conflicting figures at different points.) Compared to Earth folk, colonists are shown as enjoying as much product variety, and a higher standard of living. This is attributed to their lower regulation.

While Earth powers dislike the colony, they are depicted at first as being only rarely able to find and stop smugglers. But a year later, when thousands of ships try to fly to the moon all at once from thousands of secret locations around the planet, Earth powers are depicted as being able to find and shoot down 90% of them. Even though this should be harder when thousands fly at once. This change is never explained.

Even given the advantage of a freer economy, I find it pretty implausible that a colony could be built this big and fast with this level of variety and wealth, all with no funding beyond what colonists can carry. The moon is a long way from Earth, and it is a much harsher environment. For example, while colonists are said to have their own chip industry to avoid regulation embedded in Earth chips, the real chip industry has huge economies of scale that make it quite hard to serve only one million customers.

After they acquire antigrav tech, Earth powers go to war with the moon. As the Earth’s economy is roughly ten thousand times larger than the moon’s, without a huge tech advantage it is a mystery why anyone thinks the moon has any chance whatsoever to win this war.

The biggest blunder, however, is that no one in the book imagines using antigrav tech on Earth. But if the cost to ship stuff to the moon using antigrav isn’t crazy high, then antigravity must make it far cheaper to ship stuff around on Earth. Antigrav could also make tall buildings cheaper, allowing much denser city centers. The profits to be gained from these applications seem far larger than from smuggling stuff to a small poor moon colony.

So even if we ignore the AI and smart dogs, this still isn’t a competent extrapolation of what happens if we add cheap antigravity to a world like ours. Which is too bad; that would be an interesting scenario to explore.

Added 5:30p: In the book, antigrav is only used to smuggle stuff to/from the moon, until it is used to send armies to the moon. But demand for smuggling should be far larger between places on Earth. In the book thousands of ordinary people are seen willing to make their own antigrav devices to migrate to the moon. But a larger number should be making such devices to smuggle stuff around on Earth.


Stubborn Attachments

Tyler Cowen’s new book, Stubborn Attachments, says many things. But his main claims are, roughly, 1) we should care much more about people who will live in the distant future, and 2) promoting long-run economic growth is a robust way to achieve that end. As a result, we should try much harder to promote long-run economic growth.

Now I don’t actually think his arguments are that persuasive to those inclined to disagree. On 1), the actions of most people suggest that they don’t actually care much about the distant future, and there exist quite consistent preferences (including moral preferences) to represent this position. (Also, I have to wonder how much Tyler cares, as in the 20 years I’ve known him I’ve often worked on distant future issues, and he’s shown almost no interest in such things.)

On 2), while Tyler mainly argues for econ growth by pointing to good trends over the last few centuries, many people see bad trends as outweighing the good, and many others see recent trends as temporary historical deviations. Tyler also doesn’t consider that future techs which speed population growth could cut the connection observed recently between total and per-capita growth; I describe such a scenario in my book Age of Em.

Tyler being Tyler, he is generally vague and gives himself many outs to avoid criticism. For example, he says that rights should take priority over growth, but he doesn’t specify those rights. He says he only advocates growing “wealth plus” which includes any good thing you could want, so don’t complain that growth will hurt a good thing. He notes that the priority on growth can justify the usual intuition excusing limited redistribution, but doesn’t mention that this won’t at all excuse not doing everything possible to promote growth. He says he isn’t committed to econ growth being possible forever, but only to a finite chance of eternal growth. Yet focusing all policy on trying to increase growth within some tiny-chance eternal growth scenario is overwhelmingly likely to seem a huge mistake later.

However, as I personally happen to agree with his main claims, at least the way I phrased them, I’d rather focus on their implications, which Tyler severely neglects. The following are the only “concrete” things he says about how exactly to promote long term econ growth:

For some more concrete recommendations, I’ll suggest the following: a) Policy should be more forward-looking and more concerned about the more distant future. b) Governments should place a much higher priority on investment than is currently the case, in both the private sector and the public sector. … c) Policy should be more concerned with economic growth, properly specified, and policy discussion should pay less heed to other values. … d) We should be more concerned with the fragility of our civilization. … e) We should be more charitable on the whole, but we are not obliged to give away all of our wealth. … f) We can embrace much of common sense morality with the knowledge that it is not inconsistent with a deeper ethical theory. … g) When it comes to most “small” policies affecting the present and the near-present only, we should be agnostic.

More “investment” and “growth”, that’s it?! We actually know of many more specific ways to encourage choices that promote long term growth, but they mostly come at substantial costs. I don’t know how much you actually support faster long-term growth until I hear which such policies you’ll support.


On the Future by Rees

In his broad-reaching new book, On the Future, aging famous cosmologist Martin Rees says aging famous scientists too often overreach:

Scientists don’t improve with age—that they ‘burn out’. … There seem to be three destinies for us. First, and most common, is a diminishing focus on research. …

A second pathway, followed by some of the greatest scientists, is an unwise and overconfident diversification into other fields. Those who follow this route are still, in their own eyes, ‘doing science’—they want to understand the world and the cosmos, but they no longer get satisfaction from researching in the traditional piecemeal way: they over-reach themselves, sometimes to the embarrassment of their admirers. This syndrome has been aggravated by the tendency for the eminent and elderly to be shielded from criticism. …

But there is a third way—the most admirable. This is to continue to do what one is competent at, accepting that … one can probably at best aspire to be on a plateau rather than scaling new heights.

Rees says this in a book outside his initial areas of expertise, a book that has gained many high profile fawning uncritical reviews, a book wherein he whizzes past dozens of topics just long enough to state his opinion, but not long enough to offer detailed arguments or analysis in support. He seems oblivious to this parallel, though perhaps he’d argue that the future is not “science” and so doesn’t reward specialized study. As the author of a book that tries to show that careful detailed analysis of the future is quite possible and worthwhile, I of course disagree.

As I’m far from prestigious enough to get away with a book like his, let me instead try to get away with a long probably ignored blog post wherein I take issue with many of Rees’ claims. While I of course also agree with much else, I’ll focus on disagreements. I’ll first discuss his factual claims, then his policy/value claims. Quotes are indented; my responses are not.


Vulnerable World Hypothesis

I’m a big fan of Nick Bostrom; he is way better than almost all other future analysts I’ve seen. He thinks carefully and writes well. A consistent theme of Bostrom’s over the years has been to point out future problems where more governance could help. His latest paper, The Vulnerable World Hypothesis, fits in this theme:

Consider a counterfactual history in which Szilard invents nuclear fission and realizes that a nuclear bomb could be made with a piece of glass, a metal object, and a battery arranged in a particular configuration. What happens next? … Maybe … ban all research in nuclear physics … [Or] eliminate all glass, metal, or sources of electrical current. … Societies might split into factions waging civil wars with nuclear weapons, … end only when … nobody is able any longer to put together a bomb … from stored materials or the scrap of city ruins. …

The ​vulnerable world hypothesis​ [VWH] … is that there is some level of technology at which civilization almost certainly gets destroyed unless … civilization sufficiently exits the … world order characterized by … limited capacity for preventive policing​, … limited capacity for global governance.​ … [and] diverse motivations​. … It is ​not​ a primary purpose of this paper to argue VWH is true. …

Four types of civilizational vulnerability. … in the “easy nukes” scenario, it becomes too easy for individuals or small groups to cause mass destruction. … a technology that strongly incentivizes powerful actors to use their powers to cause mass destruction. … counterfactual in which a preemptive counterforce [nuclear] strike is more feasible. … the problem of global warming [could] be far more dire … if the atmosphere had been susceptible to ignition by a nuclear detonation, and if this fact had been relatively easy to overlook …

two possible ways of achieving stabilization: Create the capacity for extremely effective preventive policing.​ … and create the capacity for strong global governance. … While some possible vulnerabilities can be stabilized with preventive policing alone, and some other vulnerabilities can be stabilized with global governance alone, there are some that would require both. …

It goes without saying there are great difficulties, and also very serious potential downsides, in seeking progress towards (a) and (b). In this paper, we will say little about the difficulties and almost nothing about the potential downsides—in part because these are already rather well known and widely appreciated.

I take issue a bit with this last statement. The vast literature on governance shows both many potential advantages of and problems with having more relative to less governance. It is good to try to extend this literature into futuristic considerations, by taking a wider longer term view. But that should include looking for both novel upsides and downsides. It is fine for Bostrom to seek not-yet-appreciated upsides, but we should also seek not-yet-appreciated downsides, such as those I’ve mentioned in two recent posts.

While Bostrom doesn’t in his paper claim that our world is in fact vulnerable, he released his paper at a time when many folks in the tech world have been claiming that changing tech is causing our world to in fact become more vulnerable over time to analogies of his “easy nukes” scenario. Such people warn that it is becoming easier for smaller groups and individuals to do more damage to the world via guns, bombs, poison, germs, planes, computer hacking, and financial crashes. And Bostrom’s book Superintelligence can be seen as such a warning. But I’m skeptical, and have yet to see anyone show a data series displaying such a trend for any of these harms.

More generally, I worry that “bad cases make bad law”. Legal experts say it is bad to focus on extreme cases when changing law, and similarly it may go badly to focus on very unlikely but extreme-outcome scenarios when reasoning about future-related policy. It may be very hard to weigh extreme but unlikely scenarios suggesting more governance against extreme but unlikely scenarios suggesting less governance. Perhaps the best lesson is that we should make it a priority to improve governance capacities, so we can better gain upsides without paying downsides. I’ve been working on this for decades.

I also worry that existing governance mechanisms do especially badly with extreme scenarios. The history of how the policy world responded badly to extreme nanotech scenarios is a case worth considering.

Added 8am:

Kevin Kelly in 2012:

The power of an individual to kill others has not increased over time. To restate that: An individual — a person working alone today — can’t kill more people than say someone living 200 or 2,000 years ago.

Anders Sandberg in 2018:

Added 19 Nov: Vox quotes from this article.


World Government Risks Collective Suicide

If your mood changes every month, and if you die in any month where your mood turns to suicide, then to live 83 years you need to have one thousand months in a row where your mood doesn’t turn to suicide. Your ability to do this is aided by the fact that your mind is internally divided; while in many months part of you wants to commit suicide, it is quite rare for a majority coalition of your mind to support such an action.

In the movie Lord of the Rings, Denethor, Steward of Gondor, is in a suicidal mood when enemies attack the city. If not for the heroics of Gandalf, that mood might have ended his city. In the movie Dr. Strangelove, the crazed General Ripper “believes the Soviets have been using fluoridation of the American water supplies to pollute the ‘precious bodily fluids’ of Americans” and orders planes to start a nuclear attack, which ends badly. In many mass suicides through history, powerful leaders have been able to make whole communities commit suicide.

In a nuclear MAD situation, a nation can last unbombed only as long as no one who can “push the button” falls into a suicidal mood. Or into one of a thousand other moods that in effect lead to misjudgments and refusals to listen to reason, and that eventually lead to suicide. This is a serious problem for any nuclear nation that wants to live long, relative to the number of people who can push the button times the timescale on which moods change. When there are powers large enough that their suicide could take down civilization, then the risk of power suicide becomes a risk of civilization suicide. Even if the risk is low in any one year, over the long run this becomes a serious risk.

This is a big problem for world or universal government. We today coordinate on the scale of firms, cities, nations, and international organizations. However, the fact that we also fail to coordinate to deal with many large problems on these scales shows that we face severe limits in our coordination abilities. We also face many problems that could be aided by coordination via world government, and future civilizations will be similarly tempted by the coordination powers of central governments.

But, alas, central power risks central suicide, either done directly on purpose or as an indirect consequence of other broken thinking. In contrast, in a sufficiently decentralized world, when one power commits suicide its place and resources tend to be taken by other powers who have not committed suicide. Competition and selection are a robust long-term solution to suicide, in a way that centralized governance is not.

This is my tentative best guess for the largest future filter that we face, and that other alien civilizations have faced. The temptation to form central governments and other governance mechanisms is strong: to solve immediate coordination problems, to help powerful interests gain advantages via the capture of such central powers, and to sate the ambitions of those who would lead such powers. Over long periods this will seem to have been a wise choice, until suicide ends it all and no one is left to say “I told you so.”

Divide the trillions of future years over which we want to last by the increasingly short periods over which moods and sanity change, and you see a serious problem, made worse by the lack of a sufficiently long view to make us care enough to solve it. For example, if the suicide mood of a universal government changed once a second, then it needs about 10^20 non-suicide moods in a row to last a trillion years.
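For concreteness, here is the arithmetic behind these counts; the per-mood risk number below is purely hypothetical:

```python
print(83 * 12)                     # 996: ~1000 monthly moods in an 83-year life

secs_per_year = 365.25 * 24 * 3600
moods = 1e12 * secs_per_year       # second-scale moods in a trillion years
print(f"{moods:.1e}")              # ~3.2e19, i.e. roughly 10^20

p = 1e-6                           # hypothetical suicide chance per mood
print((1 - p) ** 996)              # ~0.999: one lifetime is quite survivable
# but over ~10^20 moods the expected number of suicide moods is
# p * 10^20 = 10^14 >> 1, so the chance of surviving them all is ~0
```

The survival chance falls exponentially with the number of moods, so any fixed per-mood risk eventually dominates over a trillion years, unless it is far below one in 10^20.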


Social Media Lessons

Women consistently express more interest than men in stories about weather, health and safety, natural disasters and tabloid news. Men are more interested than women in stories about international affairs, Washington news and sports. (more)

Tabloid newspapers … tend to be simply and sensationally written and to give more prominence than broadsheets to celebrities, sports, crime stories, and even hoaxes. They also take political positions on news stories: ridiculing politicians, demanding resignations, and predicting election results. (more)

Two decades ago, we knew nearly as much about computers, the internet, and the human and social sciences as we do today. In principle, this should have let us foresee broad trends in computer/internet applications to our social lives. Yet we seem to have been surprised by many aspects of today’s “social media”. We should take this as a chance to learn; what additional knowledge or insight would one have to add to our views from two decades ago to make recent social media developments not so surprising?

I asked this question Monday night on twitter and no one pointed me to existing essays on the topic; the topic seems neglected. So I’ve been pondering this for the last day. Here is what I’ve come up with.

Some people did use computers/internet for socializing twenty years ago, and those applications do have some similarities to applications today. But we also see noteworthy differences. Back then, a small passionate minority of mostly young nerdy status-aspiring men sat at desks in rare off hours to send each other text, via email and topic-organized discussion groups, as on Usenet. They tended to talk about grand big topics, like science and international politics, and were often combative and rude to each other. They avoided centralized systems to participate in many decentralized versions, using separate identities; it was hard to see how popular any one person was across all these contexts.

In today’s social media, in contrast, most everyone is involved, text is more often displaced by audio, pictures, and video, and we typically use our phones, everywhere and at all times of day. We more often forward what others have said rather than saying things ourselves, the things we forward are more opinionated and less well vetted, and are more about politics, conflict, culture, and personalities. Our social media talk is also more in these directions, is more noticeably self-promotion, and is more organized around our personal connections in more centralized systems. We have more publicly visible measures of our personal popularity and attention, and we frequently get personal affirmations of our value and connection to specific others. As we talk directly more via text than voice, and date more via apps than asking associates in person, our social interactions are more documented and separable, and thus protect us more from certain kinds of social embarrassment.

Some of these changes should have been predictable from lower costs of computing and communication. Another way to understand these changes is that the pool of participants changed, from nerdy young men to everyone. But the best organizing principle I can offer is: social media today is more lowbrow than the highbrow versions once envisioned. While over the 1800s culture separated more into low versus high brow, over the last century this has reversed, with low displacing high, such as in more informal clothes, pop music displacing classical, and movies displacing plays and opera. Social media is part of this trend, a trend that tech advocates, who sought higher social status for themselves and their tech, didn’t want to see.

TV news and tabloids have long been lower status than newspapers. Text has long been higher status than pictures, audio, and video. More carefully vetted news is higher status, and neutral news is higher status than opinionated rants. News about science and politics and the world is higher status than news about local culture and celebrities, which is higher status than personal gossip. Classic human norms against bragging and self-promotion reduce the status of those activities and of visible indicators of popularity and attention.

The mostly young male nerds who filled social media two decades ago, and who tried to look forward, envisioned highbrow versions made for people like themselves. Such people like to achieve status by sparring in debates on the topics that fill high status traditional media. As they don’t like to admit they do this for status, they didn’t imagine much self-promotion or detailed tracking of individual popularity and status. And as they resented loss of privacy and strong concentrations of corporate power, they imagined decentralized systems with effectively anonymous participants.

But in fact ordinary people don’t care as much about privacy and corporate concentration, they don’t as much mind self-promotion and status tracking, they are more interested in gossip and tabloid news than high status news, they care more about loyalty than neutrality, and they care more about gaining status via personal connections than via grand-topic debate sparring. They like wrestling-like bravado and conflict, are less interested in accurate vetting of news sources, like to see frequent personal affirmations of their value and connection to specific others, and fear being seen as lower status if such things do not continue at a sufficient rate.

This high to lowbrow account suggests a key question for the future of social media: how low can we go? That is, what new low status but commonly desired social activities and features can new social media offer? One candidate that occurs to me is: salacious gossip on friends and associates. I’m not exactly sure how it can be implemented, but most people would like to share salacious rumors about associates, perhaps documented via surveillance data, in a way that allows them to gain relevant social credit from it while still protecting them from being sued for libel/slander when rumors are false (which they will often be), and at least modestly protecting them from being verifiably discovered by their rumor’s target. That is, even if a target suspects them as the source, they usually aren’t sure and can’t prove it to others. I tentatively predict that eventually someone will make a lot of money by providing such a service.

Another solid if less dramatic prediction is that as social media spreads out across the world, it will move toward the features desired by typical world citizens, relative to features desired by current social media users.

Added 17 Nov: I wish I had seen this good Arnold Kling analysis before I wrote the above.


Long Views Are Coming

One useful way to think about the future is to ask what key future dates are coming, and then to think about in what order they may come, in what order we want them, and how we might influence that order. Such key dates include extinction, theory of everything found, innovation runs out, exponential growth slows down, and most bio humans unemployed. Many key dates are firsts: alien life or civilization found, world government founded, off-Earth self-sufficient colony, big nuclear war, immortal born, time machine made, cheap emulations, and robots that can cheaply replace most all human workers. In this post, I want to highlight another key date, one that is arguably as important as any of the above: the day when the dominant actors take a long view.

So far history can be seen as a fierce competition by various kinds of units (including organisms, genes, and cultures) to control the distant future. Yet while this has resulted in very subtle and sophisticated behavior, almost all this behavior is focused on the short term. We see this in machine learning systems; even when they are selected to achieve end-of-game outcomes, they much prefer to do this via current behaviors that react to current stimuli. It seems to just be much harder to successfully plan on longer timescales.

Animal predators and prey developed brains to plan over short sections of a chase or fight. Human foragers didn’t plan much longer than that, and it took a lot of cultural selection to get human farmers to plan on the scale of a year, e.g., to save grain for winter eating and spring seeds. Today human organizations can consistently manage modest plans on the scale of a few years, but we fail badly when we try much larger or longer plans.

Arguably, competition and evolution will continue to select for units capable of taking longer views. And so if competition continues for long enough, eventually our world should contain units that do care about the distant future, and are capable of planning effectively over long timescales. And eventually these units should have enough of a competitive advantage to dominate.

And this seems a very good thing! Arguably, the biggest thing that goes wrong in the world today is that we fail to take a long view. Because we fail to much consider the long run in our choices, we put a vast and great future at risk, such as by tolerating avoidable existential risks. This will end once the dominant units take a long view. At that point there may be fights on which direction the future should take, and coordination failures may lead to bad outcomes, but at least the future will not be neglected.

The future not being neglected seems such a wonderfully good outcome that I’m tempted to call the “Long View Day” when this starts one of the most important future dates. And working to hasten that day could be one of the most important things we can do to help the future. So I hereby call to action those who (say they) care about the distant future to help in this task.

A great feature of this task is that it doesn’t require great coordination; it is a “race to the top”. That is, it is in the interest of each cultural unit (nation, language, ethnicity, religion, city, firm, family, etc.) to figure out how to take effective long term views. So you can help the world by allying with a particular unit and helping it learn to take an effective long term view. You don’t have to choose between “selfishly” helping your unit, or helping the world as a whole.

One way to try to promote longer term views is to promote longer human lifespans. It’s not that clear to me this works, however, as even immortals can prioritize the short run. And extending lifespans is very hard. But it is a fine goal in any case.

A bad way to encourage long views is to just encourage the adoption of plans that advocates now claim are effective ways to help in the long run. After all, it seems that one of the main obstacles so far to taking long views is the typical low quality of long-term plans offered. Instead, we must work to make long term planning processes more reliable.

My guess is that a key problem is worse incentives and accountability for those who make long term predictions, and who propose and implement longer term plans. If your five year plan goes wrong, that could wreck your career, but you might make a nice long comfy career out of a fifty year plan that will later go wrong. So we need to devise and test new ways to create better incentives for long term predictions and plans.

You won’t be surprised to hear me say I think prediction markets have promise as a way to create better incentives and accountability. But we haven’t experimented that much with long-term prediction markets, and they have some potential special issues, so there’s a lot of work to do to explore this approach.
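As one concrete illustration of the mechanics involved, here is a sketch of a logarithmic market scoring rule (LMSR), a standard automated market maker for prediction markets; the numbers are illustrative only, and nothing here yet addresses the special long-term issues (such as rewarding traders whose capital is tied up for decades):

```python
import math

def lmsr_cost(q, b):
    # Market maker cost function: C(q) = b * ln(sum_i exp(q_i / b))
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, b, i):
    # Instantaneous price (probability estimate) of outcome i
    z = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / z

b = 100.0                # liquidity parameter: higher = deeper market
q = [0.0, 0.0]           # shares outstanding for [event, no-event]
print(lmsr_price(q, b, 0))                      # 0.50 before any trades

q_after = [50.0, 0.0]                           # a trader buys 50 event shares
print(lmsr_cost(q_after, b) - lmsr_cost(q, b))  # ~28.1: what that trade costs
print(lmsr_price(q_after, b, 0))                # ~0.62: the price moves up
```

A long-term version would need, at minimum, to pay interest on locked-up stakes, to survive decades of institutional change, and to keep final judging credible; those are among the open design problems.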

Once we find ways to make more reliable long term plans, we will still face the problem that organizations are typically under the control of humans, who seem to consistently act on short term views. In my Age of Em scenario, this could be solved by having slower ems control long term choices, as they would naturally have longer term views. Absent ems, we may want to experiment with different cultural contexts for familiar ordinary humans, to see which can induce such humans to prioritize the long term.

If we can’t find contexts that make ordinary humans take long term views, we may want to instead create organizations with longer term views. One approach would be to release enough of them from tight human controls, and subject them to selection pressures that reward better long term views. For example, evolutionary finance studies what investment organizations free to reinvest all their assets would look like if they were selected for their ability to grow assets well.

Some will object to the creation of powerful entities whose preferences disagree with those of familiar humans alive at the time. And I admit that gives me pause. But if taken strictly that attitude seems to require that the future always remain neglected, if ordinary humans discount the future. I fear that may be too high a price to pay.


Long Legacies And Fights In A Competitive Universe

My last post discussed how to influence the distant future, using a framework focused on a random uncaring universe. This is, for example, the usual framework of most who see themselves as future-oriented “effective altruists”. They see most people and institutions as not caring much about the distant future, and they themselves as unusual exceptions in three ways: 1) their unusual concern for the distant future, 2) their unusual degree of general utilitarian altruistic concern, and 3) their attention to careful reasoning on effectiveness.

If few care much or effectively about the distant future, then efforts to influence that distant future don’t much structure our world, and so one can assume that the world is structured pretty randomly compared to one’s desires and efforts to influence the distant future. For example, one need not be much concerned about the possibility that others have conflicting plans, or that they will actively try to undermine one’s plans. In that case the analysis style of my last post seems appropriate.

But it would be puzzling if such a framework were so appropriate. After all, the current world we see around us is the result of billions of years of fierce competition, a competition that can be seen as about controlling the future. In biological evolution, a fierce competition has selected species and organisms for their ability to make future organisms resemble them. More recently, within cultural evolution, cultural units (nations, languages, ethnicities, religions, regions, cities, firms, families, etc.) have been selected for their ability to make future cultural units resemble them. For example, empires have been selected for their ability to conquer neighboring regions, inducing local residents to resemble them more than they do conquered empires.

In a world of fierce competitors struggling to influence the future, it makes less sense for any one focal alliance of organism, genetic, and cultural units (“alliance” for short in the rest of this post) to assume a random uncaring universe. It instead makes more sense to ask who has been winning this contest lately, what strategies have been helping them, and what advantages this one alliance might have or could find soon to help in this competition. Competitors would search for any small edge to help them pull even a bit ahead of others, they’d look for ways to undermine rivals’ strategies, and they’d expect rivals to try to undermine their own strategies. As most alliances lose such competitions, one might be happy to find a strategy that allows one to merely stay even for a while. Yes, successful strategies sometimes have elements of altruism, but usually as ways to assert prestige or to achieve win-win coordination deals.

Furthermore, in a world of fiercely competing alliances, one might expect to have more success at future influence via joining and allying strongly with existing alliances, rather than by standing apart from them with largely independent efforts. In math there is often an equivalence between “maximize A given a constraint on B” and “maximize B given a constraint on A”, in the sense that both formulations give the same answers. In a related fashion, similar efforts to influence the future might be framed in either of two rather different ways:

  1. I’m fundamentally an altruist, trying to make the world better, though at times I choose to ally and compromise with particular available alliances.
  2. I’m fundamentally a loyal member/associate of my alliance, but I think that good ways to help it are to a) prevent the end of civilization, b) promote innovation and growth within my alliance, which indirectly helps the world grow, and c) have my alliance be seen as helping the world in a way which raises its status and reputation.

This second framing seems to have some big advantages. People who follow it may win the cooperation, support, and trust of many members of a large and powerful alliance. And such ties and supports may make it easier to become and stay motivated to continue such efforts. As I said in my last post, people seem much more motivated to join fights than to simply help the world overall. Our evolved inclinations to join alliances probably create this stronger motivation.

Of course if in fact most all substantial alliances today are actually severely neglecting the distant future, then yes it can make more sense to mostly ignore them when planning to influence the distant future, except for minor connections of convenience. But we need to ask: how strong is the evidence that in fact existing alliances greatly neglect the long run today? Yes, they typically fail to adopt policies that many advocates say would help in the long run, such as global warming mitigation. But others disagree on the value of such policies, and failures to act may also be due to failures to coordinate, rather than to a lack of concern about the long run.

Perhaps the strongest evidence of future neglect is that typical financial rates of return have long remained well above growth rates, strongly suggesting a direct discounting of future outcomes due to their distance in time. For example, these high rates of return are part of standard arguments that it will be cheaper to accommodate global warming later, rather than to prevent it today. Evolutionary finance gives us theories of what investing organizations would do when selected to take a long view, and it doesn’t match what we see very well. Wouldn’t an alliance with a long view take advantage of high rates of return to directly buy future influence on the cheap? Yes, individual humans today have to worry about limited lifespans and difficulties controlling future agents who spend their money. But these should be much less of an issue for larger cultural units. Why don’t today’s alliances save more?
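To see the size of the neglected opportunity, consider a rough sketch; the rates below are hypothetical, though in the historical ballpark:

```python
# If rates of return r exceed growth rates g, patient savings grow
# relative to the economy as a whole.
r, g = 0.05, 0.02       # hypothetical real return and real growth rates
years = 200
gain = ((1 + r) / (1 + g)) ** years
print(f"{gain:.0f}x")   # ~330x: growth in a saved dollar's share of the economy
```

On these numbers, an alliance that merely saved and waited would multiply its relative influence several hundredfold in two centuries; that almost no alliance does this is the puzzle to be explained.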

Important related evidence comes from data on our largest longest-term known projects. Eight percent of global production is now spent on projects that cost over one billion dollars each. These projects tend to take many years, have consistent cost and time over-runs and benefit under-runs, and usually are net cost-benefit losers. I first heard about this from Freeman Dyson, in the “Fast is Beautiful” chapter of Infinite in All Directions. In Dyson’s experience, big slow projects are consistent losers, while fast experimentation often makes for big wins. Consider also the many large slow and failed attempts to aid poor nations.

Other related evidence includes: the time when a firm builds a new HQ tends to be a good time to sell its stock; futurists typically do badly at predicting important events even a few decades into the future; and the “rags to riches to rags in three generations” pattern, whereby individuals who find ways to grow wealth don’t pass such habits on to their grandchildren.

A somewhat clear exception where alliances seem to pay short term costs to promote long run gains is in religious and ideological proselytizing. Cultural units do seem to go out of their way to indoctrinate the young, to preach to those who might convert, and to entrench prior converts into not leaving. Arguably, farming era alliances also attended to the long run when they promoted fertility and war.

So what theories do we have to explain this data? I can see three:

1) Genes Still Rule – We have good theory on why organisms that reproduce via sex discount the future. When your kids only share half of your genes, if you consider spending on yourself now versus on your kid one generation later, you discount future returns at roughly a factor of two per generation, which isn’t bad as an approximation to actual financial rates of return (see the quick check after this list). So one simple theory is that even though cultural evolution happens much faster than genetic evolution, genes still remain in firm control of cultural evolution. Culture is a more effective way for genes to achieve their purposes, but genes still set time discounts, not culture.

2) Bad Human Reasoning – While humans are impressive actors when they can use trial and error to hone behaviors, their ability to reason abstractly but reliably to construct useful long term plans is terrible. Because of agency failures, cognitive biases, incentives to show off, excess far views, overconfidence, or something else, alliances learned long ago not to trust to human long term plans, or to accumulations of resources that humans could steal. Alliances have traditionally invested in proselytizing, fertility, prestige, and war because those gains are harder for agents to mismanage or steal via theft and big bad plans.

3) Cultures Learn Slowly – Cultures haven’t yet found good general purpose mechanisms for making long term plans. In particular, they don’t trust organized groups of humans to make and execute long term plans for them, or to hold assets for them. Cultures have instead experimented with many more specific ways to promote long term outcomes, and have only found successful versions in some areas. So they seem to act with longer term views in a few areas, but mostly have not yet managed to find ways to escape the domination of genes.
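As a quick check on the approximation mentioned in theory 1), with the generation length here being my assumption:

```python
# A factor-of-two discount per generation, expressed as an annual rate.
gen_years = 30                 # assumed generation length
rate = 2 ** (1 / gen_years) - 1
print(f"{rate:.1%}")           # ~2.3%/yr, or ~2.8%/yr for 25-year generations
```

That is within a factor of two or so of typical long-run real rates of return, close enough for the rough “genes set the discount” story.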

I lean toward this third, compromise theory. In my next post, I’ll discuss a dramatic prediction from all this, one that can greatly influence our long-term priorities. Can you guess what I will say?
