Tag Archives: Future

Bizarre Accusations

Imagine that you planned a long hike through a remote area, and suggested that it might help to have an experienced hunter-gatherer along as a guide. Should listeners presume that you intend to imprison and enslave such guides to serve you? Or is it more plausible that you propose to hire such people as guides?

To me, hiring seems the obvious interpretation. But, to accuse me of advancing a racist slavery agenda, Audra Mitchell and Aadita Chaudhury make the opposite interpretation in their 2020 International Relations article “Worlding beyond ‘the’ ‘end’ of ‘the world’: white apocalyptic visions and BIPOC futurisms”.

In a chapter “Catastrophe, Social Collapse, and Human Extinction” in the 2008 book Global Catastrophic Risks I suggested that we might protect against human extinction by populating underground refuges with people skilled at surviving in a world without civilization:

A very small human population would mostly have to retrace the growth path of our human ancestors; one hundred people cannot support an industrial society today, and perhaps not even a farming society. They might have to start with hunting and gathering, until they could reach a scale where simple farming was feasible. And only when their farming population was large and dense enough could they consider returning to industry.

So it might make sense to stock a refuge with real hunter-gatherers and subsistence farmers, together with the tools they find useful. Of course such people would need to be disciplined enough to wait peacefully in the refuge until the time to emerge was right. Perhaps such people could be rotated periodically from a well protected region where they practiced simple lifestyles, so they could keep their skills fresh. And perhaps we should test our refuge concepts, isolating real people near them for long periods to see how well particular sorts of refuges actually perform at returning their inhabitants to a simple sustainable lifestyle.

On this basis, Mitchell and Chaudhury call me a “white futurist” and “American settler economist” seeking to preserve existing Euro-centric power structures:

Indeed, many contributors to ‘end of the world’ discourses offer strategies for the reconstruction and ‘improvement’ of existing power structures after a global catastrophe. For example, American settler economist Robin Hanson calculates that if 100 humans survived a global catastrophic disaster that killed all others, they could eventually move back through the ‘stages’ of ‘human’ development, returning to the ‘hunter-gatherer stage’ within 20,000 years and then ‘progressing’ from there to a condition equivalent to contemporary society (defined in Euro-centric terms). …

some white futurists express concerns about the ‘de-volution’ of ‘humanity’ from its perceived pinnacle in Euro-centric societies. For example, American settler economist Hanson describes the emergence of ‘humanity’ in terms of four ‘progressions’

And solely on the basis of my book chapter quote above, Mitchell and Chaudhury bizarrely claim that I “quite literally” suggest imprisoning and enslaving people of color “to enable the future re-generation of whiteness”:

To achieve such ideal futures, many writers in the ‘end of the world’ genre treat [black, indigenous, people of color] as instruments or objects of sacrifice. In a stunning display of white possessive logic, Hanson suggests that, in the face of global crisis, it

‘might make sense to stock a refuge with real hunter-gatherers and subsistence farmers, together with the tools they find useful. Of course, such people would need to be disciplined enough to wait peacefully in the refuge until the time to emerge was right.

In this imaginary, Hanson quite literally suggests the (re-/continuing)imprisonment, (re-/continuing)enslavement and biopolitical (re-/continuing) instrumentalization of living BIPOC in order to enable the future re-generation of whiteness. This echoes the dystopian nightmare world described in …

And this in an academic journal article that supposedly passed peer review! (I was not one of the “peers” consulted.)

To be very clear, I proposed to hire skilled foragers and subsistence farmers to serve in such roles, compensating them as needed to gain their consent. I didn’t much care about their race, nor about the race of the world that would result from their repopulating the world. And presumably someone with substantial racial motivations would in fact care more about that last part; how exactly does repopulating the world with people of color promote “whiteness”?


MacAskill on Value Lock-In

Will MacAskill has a new book out today, What We Owe The Future, most of which I agree with, even if that doesn’t exactly break new ground. Yes, the future might be very big, and that matters a lot, so we should be willing to do a lot to prevent extinction, collapse, or stagnation. I hope his book induces more careful future analysis, such as I tried in Age of Em. (FYI, MacAskill suggested that book’s title to me.) I also endorse his call for more policy and institutional experimentation. But, as is common in book reviews, I now focus on where I disagree.

Aside from the future being important, MacAskill’s main concern in his book is “value lock-in”, by which he means a future point in time when the values that control actions stop changing. But he actually mixes up two very different processes by which this result might arise. First, an immortal power with stable values might “take over the world”, and prevent deviations from its dictates. Second, in a stable universe, decentralized competition between evolving entities might pick out some most “fit” values to be most common.

MacAskill’s most dramatic predictions are about this first “take over” process. He claims that the next century or so is the most important time in all of human history:

We hold the entire future in our hands. … By choosing wisely, we can be pivotal in putting humanity on the right course. … The values that humanity adopts in the next few centuries might shape the entire trajectory of the future. … Whether the future is governed by values that are authoritarian or egalitarian, benevolent or sadistic, exploratory or rigid, might well be determined by what happens this century.

His reason: we will soon create AGI, or ems, who, being immortal, have forever stable values. Some org will likely use AGI to “take over the world”, and freeze in their values forever:

Advanced artificial intelligence could enable those in power to lock in their values indefinitely. … Since [AGI] software can be copied with high fidelity, an AGI can survive changes in the hardware instantiating it. AGI agents are potentially immortal. These two features of AGI – potentially rapid technological progress and in-principle immortality – combine to make value lock-in a real possibility. …

Using AGI, there are a number of ways that people could extend their values much farther into the future than ever before. First, people may be able to create AGI agents with goals closely aligned with their own which would act on their behalf. … [Second,] the goals of an AGI could be hard-coded: someone could carefully specify what future they want to see and ensure that the AGI aims to achieve it. … Third, people could potentially “upload”. …

International organizations or private actors may be able to leverage AGI to attain a level of power not seen since the days of the East India Company, which in effect ruled large areas of India. …

A single set of values could emerge. …The ruling ideology could in principle persist as long as civilization does. AGI systems could replicate themselves as many times as they wanted, just as easily as we can replicate software today. They would be immortal, freed from the biological process of aging, able to create back-ups of themselves and copy themselves onto new machines. … And there would no longer be competing value systems that could dislodge the status quo. …

Bostrom’s book Superintelligence. The scenario most closely associated with that book is one in which a single AI agent … quickly developing abilities far greater than the abilities of all of humanity combined. … It would therefore be incentivized to take over the world. … Recent work has looked at a broader range of scenarios. The move from subhuman intelligence to superintelligence need not be ultrafast or discontinuous to pose a risk. And it need not be a single AI that takes over; it could be many. …

Values could become even more persistent in the future if a single value system were to become globally dominant. If so, then the absence of conflict and competition would remove one reason for change in values over time. Conquest is the most dramatic pathway … and it may well be the most likely.

Now mere immortality seems far from sufficient to create either value stability or a takeover. On takeover: not only is a decentralized world of competing immortals easy to imagine, but in fact until recently individual bacteria, who very much compete, were thought to be immortal.

On values, immortality also seems far from sufficient to induce stable values. Human organizations like firms, clubs, cities, and nations seem to be roughly immortal, and yet their values often greatly change. Individual humans change their values over their lifetimes. Computer software is immortal, and yet its values often change, and it consistently rots. Yes, as I mentioned in my last post, some imagine that AGIs have a special value modularity that can ensure value stability. But we have many good reasons to doubt that scenario.

Thus MacAskill must be positing that a power who somehow manages to maintain stable values takes over and imposes its will everywhere forever. Yet the only scenario he points to that seems remotely up to this task is Bostrom’s foom scenario. MacAskill claims that other scenarios are also relevant, but doesn’t even try to show how they could produce this result. For reasons I’ve given many times before, I’m skeptical of foom-like scenarios.

Furthermore, let me note that even if one power came to dominate Earth’s civilization for a very long time, it would still have to face competition from other grabby aliens in roughly a billion years. If so, forever just isn’t at issue here.

While MacAskill doesn’t endorse any regulations to deal with this stable-AGI-takes-over scenario, he does endorse regulations to deal with the other path to value stability: evolution. He wants civilization to create enough of a central power that it could stop change for a while, and also limit competition between values.

The theory of cultural evolution explains why many moral changes are contingent. … the predominant culture tends to entrench itself. … results in a world increasingly dominated by cultures with traits that encourage and enable entrenchment and thus persistence. …

If we don’t design our institutions to govern this transition well – preserving a plurality of values and the possibility of desirable moral progress. …

A second way for a culture to become more powerful is immigration [into it]. … A third way in which a cultural trait can gain influence is if it gives one group greater ability to survive or thrive in a novel environment. … A final way in which one culture can outcompete another is via population growth. … If the world converged on a single value system, there would be much less pressure on those values to change over time.

We should try to ensure that we have made as much moral progress as possible before any point of lock-in. … As an ideal, we could aim for what we could call the long reflection: a stable state of the world in which we are safe from calamity and can reflect on and debate the nature of the good life, working out what the more flourishing society would be. … It would therefore be worth spending many centuries to ensure that we’ve really figured things out before taking irreversible actions like locking in values or spreading across the stars. …

We would need to keep our options open as much as possible … a reason to prevent smaller-scale lock-ins … would favor political experimentation – increasing cultural and political diversity, if possible. …

That one society has greater fertility than another or exhibits faster economic growth does not imply that society is morally superior. In contrast, the most important mechanisms for improving our moral views are reason, reflection, and empathy, and the persuasion of others based on those mechanisms. … Certain forms of free speech would therefore be crucial to enable better ideas to spread. …

International norms or laws preventing any single country from becoming too populous, just as anti-trust regulation prevents any single company from dominating a market. … The lock-in paradox. We need to lock-in some institutions and ideas in order to prevent a more thorough-going lock-in of values. … If we wish to avoid the lock-in of bad moral views, an entirely laissez-faire approach would not be possible; over time, the forces of cultural evolution would dictate how the future goes, and the ideologies that lead to the greatest military power and that try to eliminate their competition would suppress all others.

I’ve recently described my doubts that expert deliberation has been a large force in value change so far. So I’m skeptical that it will be a large force in the future. And the central powers (or global mobs) sufficient to promote a long reflection, or to limit nations competing, seem to risk creating value stability via the central dominance path discussed above. MacAskill doesn’t even consider this kind of risk from his favored regulations.

While competition may produce a value convergence in the long run, my guess is that convergence will happen a lot faster if we empower central orgs or mobs to regulate competition. I think that a great many folks prefer the latter scenario because they believe we already know which values are best, and fear that those values would not win an evolutionary competition. So they want to lock in current values via regs to limit competition and value change.

To his credit, MacAskill is less confident that currently popular values are in fact the best values. And his favored solution of more deliberation probably wouldn’t hurt. I just don’t think he realizes how dangerous are central powers able to regulate to promote deliberation and limit competition. And he seems way too confident about the chance of anything like foom soon.


Beware Upward Reference Classes

Sometimes when I see associates getting attention, I wonder, “do they really deserve more attention than me?” I less often look at those who get less attention than me, and ask whether I deserve more. Because they just don’t show up in my field of view as often; attention makes you more noticeable.

If I were to formalize my doubts, I might ask, “Among tenured econ professors, how much does luck and org politics influence who gets more funding, prestige, and attention?” And I might find many reasons to answer “lots”, and so suggest that such things be handed out more equally or randomly. Among tenured econ professors, that is. And if an economist with a lower degree, or a professor from another discipline, asked why they aren’t included in my comparison and suggested redistribution, I might answer, “Oh I’m only talking about econ researchers here.”

Someone with a college econ degree might well ask if those with higher credentials like M.S., Ph.D., or a professor position really deserve the extra money, influence, and attention that they get. And if someone with only a high school degree were to ask why they aren’t included in this comparison, the econ degree person might say “oh, I’m only talking about economists here”, presuming that you can’t be considered an economist if you have no econ degree of any sort.

The pattern here is: “envy up, scorn down”. When considering fairness, we tend to define our comparison group upward, as everyone who has nearly as many qualifications as we do or more, and then we ask skeptically if those in this group with more qualifications really deserve the extra gains associated with their extra qualifications. But we tend to look downward with scorn, assuming that our qualifications are essential, and thus should be baked into the definition of our reference class. That is, we prefer upward envy reference classes to justify our envying those above us, while rejecting others envying us from below.

Life on Earth has steadily increased in its abilities over time, allowing life to spread into more places and niches. We have good reasons to think that this trend may long continue, eventually allowing our descendants to spread through the universe, until they meet up with other advanced life, resulting in a universe dense with advanced life.

However, many have suggested that this view of the universe makes us today seem suspiciously early among what they see as the relevant comparison group. And thus they suggest we need a Bayesian update toward this view of the universe being less likely. But what exactly is a good comparison group? For example, if you said “We’d be very early among all creatures with access to quantum computers”, I think we’d all get that this is not so puzzling, as the first quantum computers only appeared a few years ago.

We would also appear very early among all creatures who could knowingly ask the question “How many creatures will ever appear with feature X”, if the concept X applies to us but has only been recently introduced. We’d also be pretty early among all creatures who can express any question in language, if language was only invented in the last million years. It isn’t much better to talk about all creatures with self-awareness, if you say only primates and a few other animals count as having that, and they’ve only been around for a few million more years.

Thus in general in a universe where abilities improve over time, creatures that consider upward defined reference classes will tend to find themselves early. Often very early, if they insist that their class members have some very recently acquired abilities. But once you see this tendency to pick upward reference classes, the answers you get to such questions need no longer suggest updates against the hypothesis of long increasing abilities.
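
To see how strong this effect can be, here is a minimal toy simulation (my own illustrative numbers and growth rate, not drawn from any source). Observer-moments grow exponentially over epochs, ability X first appears late, and an observer living just after that point asks how early they are within the upward class of X-havers, versus among all observers ever:

```python
# Toy model (illustrative assumptions): observers per epoch grow 5% per epoch for
# 100 epochs; ability X first appears at epoch 90; "I" live at epoch 91.
growth, epochs = 1.05, 100
pop = [growth ** t for t in range(epochs)]   # observers living in each epoch

t_X, me = 90, 91                             # X appears at 90; my epoch is 91
upward_class = sum(pop[t_X:])                # all observers who ever have X
before_me_in_class = sum(pop[t_X:me])        # X-havers in epochs before mine
before_me_overall = sum(pop[:me])            # all observers in epochs before mine

print(f"Within the upward class, {before_me_in_class / upward_class:.0%} came before me")
print(f"Among all observers ever, {before_me_overall / sum(pop):.0%} came before me")
# The upward class makes me look far earlier (~8%) than the neutral class (~64%).
```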

Furthermore, in any universe that will eventually fill up, creatures who find themselves well before that point in time can estimate that they are very early relative to even very neutral reference classes.

It seems to me that something similar is going on when people claim that this coming century will be uniquely important, the most important one ever, as computers are the most powerful tech we have ever seen, and as the next century is plausibly when we will make most of the big choices re how to use computers.  If we generally make the most important choices about each new tech soon after finding it, and if increasingly powerful new techs keep appearing, then this sort of situation should be common, not unique, in history.

So this next century will only be the most important one (in this way) if computers are the last tech to appear that is more powerful than prior techs. But if we expect that even more important techs will continue to be found, then we shouldn’t expect this one to be the most important tech ever. No, I can’t describe these more important yet-to-be-found future techs. But I do believe they exist.


Space Econ HowTo

In Age of Em, I tried to show how far one could get using standard econ analysis to predict the social consequences of a particular envisioned future tech. The answer: a lot further than futurists usually go. Thus we could do a lot more useful futurism.

My approach to futurism should work more generally, and I’ve hoped to inspire others to emulate it. And space is an obvious application. We understand space tech pretty well, and people have been speculating about it for quite a long time. So I’m disappointed to not yet see better social analysis of space futures.

In this post I will therefore try to outline the kind of work that I think should be done, and that seems quite feasible. Oh I’m not going to actually do most of that work here, just outline it. This is just one blog post, after all. (Though I’m open to teaming with others on such a project.)

Here is the basic approach:

  1. Describe how a space society generally differs from others using economics-adjacent concepts. E.g., “Space econ is more X-like”.
  2. For each X, describe in general how X-like economies differ from others, using both historical patterns and basic econ theory.
  3. Merge the implications of X-analyses from the different X into a single composite picture of space.

Here are some candidate Xs, i.e., ways that space econs tend to differ from other econs. Note that we don’t need these various Xs to be logically independent of one another. But the more dependencies, the more work we will have to do in step 3 to sort those out.

First, space is further away than is most stuff. Which makes activity there less dense. So we first want to ask: how does economic and social activity tend to differ as it becomes further away from, and less dense than, the rest of the economy? E.g., in terms of distance, travel and communication cost and time, and having a different mix of resources, risks, and products? If lower density induces less local product and service variety, then how do less varied economies differ?

Space also seems different in being a harsher environment. On Earth today, some places are more like the Edens where humans first evolved, and so are less harsh for humans, while other places are more harsh. Such as high in mountains, on or under the sea, or in extreme latitudes. How does econ activity tend to differ in harsher environments? Harsh environments tend to be correlated with less natural biological activity; how does econ activity vary with that?

Space differs also in its basic attractions, relative to other places. One of those attractions is raw inputs, such as energy, atoms, and volume. Another attraction is that space contains more novelty, which attracts scientific and other adventurers. A third attraction is that space has often been a focal place to stage demonstrations of power and ability. Such as in the famous Cold War space race.

A fourth attraction is that growth in space seems to open up more potential for further growth in similar directions. In contrast perhaps to, for example, colonizing tops of mountains when there are only a limited number of such mountains available. How does the potential for further growth of a similar sort influence activity in an area? A fifth attraction is that doing things in space seems a complement to our large legacy of fiction set in space. For each of these attractions, we can ask: in general how does activity driven by such attractions differ from other activity?

Regarding “how does activity differ?”, here are some features Y that one might ask about. How capital intensive is activity? How automated? How long are supply chains? What disasters hit how hard with what frequency? What are typical mixes of genders, ages, and education levels? In what size firms, with how many layers of management, is commercial activity done? How long do firms last, and how fast do they grow? How many different kinds of jobs are there, and how long are job tenures? How much commitment do firms demand from employees and how easy is it to move to a competing firm in a similar role? How easy is it to move where you live or shop?

In these kinds of societies does growth tend to happen slowly, continuously, in an uncoordinated manner? Or are there instead big gains to actors coordinating to all grow together in a big lump at related places and times? If so, who usually coordinates such lumps, and how do they get paid for it?

These are just a few examples of a long list of questions that economists and other social scientists often ask about different kinds of social activity. I’m not suggesting that one try hard to address how Y differs regarding X-like areas, for every possible combination of X and Y. I’m instead suggesting that one be opportunistic, searching in that big space for easy wins. For where we have empirical data, or simple theory, that gives tentative answers. As I did in Age of Em.
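
As a minimal sketch of that opportunistic search (my own illustrative scaffolding; the Xs, Ys, and answers below are placeholders, not findings), one could organize the work as an X-by-Y grid and keep only the cells where cheap evidence already exists:

```python
# Hypothetical scaffolding for the X-by-Y search described above; entries are
# placeholders. Each cell records a tentative directional answer plus its evidence.
from dataclasses import dataclass

@dataclass
class Cell:
    x: str         # way space econ differs, e.g. "low density"
    y: str         # feature of activity, e.g. "capital intensity"
    answer: str    # tentative directional guess
    evidence: str  # "historical data", "simple theory", or "none yet"

cells = [
    Cell("low density", "capital intensity", "higher", "historical data"),
    Cell("harsh environment", "automation", "higher", "simple theory"),
    Cell("novelty attraction", "age and gender mix", "unknown", "none yet"),
]

# Step 3: merge only the cells with cheap evidence into one composite picture.
composite = [c for c in cells if c.evidence != "none yet"]
for c in composite:
    print(f"Because space econ is more {c.x}-like, expect {c.y} to be {c.answer}.")
```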

While the above can help us guess how a space economy will differ, we might also want to guess how fast it will grow. So we’d like a past time series and perhaps supporting theory to help predict how fast travel and other costs will fall, and how fast activity expands with falling costs.


Why Not Wait On AI Risk?

Years ago when the AI risk conversation was just starting, I was a relative skeptic, but I was part of the conversation. Since then, the conversation has become much larger, but I seem no longer part of it; it seems years since others in this convo engaged me on it.

Clearly most who write on this do not sit close to my views, though I may sit closer to most who’ve considered getting into this topic, but instead found better things to do. (Far more resources are available to support advocates than skeptics.) So yes, I may be missing something that they all get. Furthermore, I’ve admittedly only read a small fraction of the huge amount since written in this area. Even so, I feel I should periodically try again to explain my reasoning, and ask others to please help show me what I’m missing.

The future AI scenario that treats “AI” most like prior wide tech categories (e.g., “energy” or “transport”) goes as follows. AI systems are available from many competing suppliers at similar prices, and their similar abilities increase gradually over time. Abilities don’t increase faster than customers can usefully apply them. Problems are mostly dealt with as they appear, instead of anticipated far in advance. Such systems slowly displace humans on specific tasks, and are on average roughly as task specialized as humans are now. AI firms distinguish themselves via the different tasks their systems do.

The places and groups who adopt such systems first are those flexible and rich enough to afford them, and having other complementary capital. Those who invest in AI capital on average gain from their investments. Those who invested in displaced capital may lose, though over the last two decades workers at more automated jobs have not seen any average effect on their wages or number of workers. AI today makes only a rather minor contribution to our economy (<5%), and it has quite a long way to go before it can make a large contribution. We today have only vague ideas of what AIs that made a much larger contribution would look like.

Today most of the ways that humans help and harm each other are via our relations. Such as: customer-supplier, employer-employee, citizen-politician, defendant-plaintiff, friend-friend, parent-child, lover-lover, victim-criminal-police-prosecutor-judge, army-army, slave-owner, and competitors. So as AIs replace humans in these roles, the main ways that AIs help and hurt humans are likely to also be via these roles.

Our usual story is that such hurt is limited by competition. For example, each army is limited by all the other armies that might oppose it. And your employer and landlord are limited in exploiting you by your option to switch to other employers and landlords. So unless AI makes such competition much less effective at limiting harms, it is hard to see how AI makes role-mediated harms worse. Sure smart AIs might be smarter than humans, but they will have other AI competitors and humans will have AI advisors. Humans don’t seem much worse off in the last few centuries due to firms and governments who are far more intelligent than individual humans taking over many roles.

AI risk folks are especially concerned with losing control over AIs. But consider, for example, an AI hired by a taxi firm to do its scheduling. If such an AI stopped scheduling passengers to be picked up where they waited and delivered to where they wanted to go, the firm would notice quickly, and could then fire and replace this AI. But what if an AI who ran such a firm became unresponsive to its investors? Or if an AI who ran an army became unresponsive to its oversight government? In both cases, while such investors or governments might be able to cut off some outside supplies of resources, the AI might do substantial damage before such cutoffs bled it dry.

However, our world today is well acquainted with the prospect of “coups” wherein firm or army management becomes unresponsive to its relevant owners. Not only do our usual methods usually seem sufficient to the task, we don’t see much of an externality re these problems. You try to keep your firm under control, and I try to keep mine, but I’m not especially threatened by your losing control of yours. We care a bit more about others losing control of their cars, planes, or nuclear power plants, as those might hurt bystanders. But we care much less once such others show us sufficient liability, and liability insurance, to cover our losses in these cases.

I don’t see why I should be much more worried about your losing control of your firm, or army, to an AI than to a human or group of humans. And liability insurance also seems a sufficient answer to your possibly losing control of an AI driving your car or plane. Furthermore, I don’t see why it’s worth putting much effort into planning how to control AIs far in advance of seeing much detail about how AIs actually do concrete tasks where loss of control matters. Knowing such detail has usually been the key to controlling past systems, and money invested now, instead of spent on analysis now, gives us far more money to spend on analysis later.
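
As a rough illustration of that last point (my own made-up numbers, not the post’s), compare a research dollar spent on analysis now with one invested and spent once AI details are visible:

```python
# Illustrative only: a budget invested now at a 5% real return for 30 years buys
# roughly four times as much analysis later, when loss-of-control details are known.
budget_now, real_return, years = 1_000_000, 0.05, 30
budget_later = budget_now * (1 + real_return) ** years
print(f"${budget_now:,} now -> about ${budget_later:,.0f} of analysis in {years} years")
```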

All of the above has been based on assuming that AI will be similar to past techs in how it diffuses and advances. Some say that AI might be different, just because, hey, anything might be different. Others, like my ex-co-blogger Eliezer Yudkowsky, and Nick Bostrom in his book Superintelligence, say more about why they expect advances at the scope of AGI to be far more lumpy than we’ve seen for most techs.

Yudkowsky paints a “foom” picture of a world full of familiar weak stupid slowly improving computers, until suddenly and unexpectedly a single super-smart un-controlled AGI with very powerful general abilities appears and is able to decisively overwhelm all other powers on Earth. Alternatively, he claims (quite implausibly I think) that all AGIs naturally coordinate to merge into a single system to defeat competition-based checks and balances.

These folks seem to envision a few key discrete breakthrough insights that allow the first team that finds them to suddenly catapult their AI into abilities far beyond all other then-current systems. These would be big breakthroughs relative to the broad category of “mental tasks”, and thus even bigger than if we found big breakthroughs relative to the less broad tech categories of “energy”, “transport”, or “shelter”. Yes of course change is often lumpy if we look at small tech scopes, but lumpy local changes aggregate into smoother change over wider scopes.

As I’ve previously explained at length, that seems to me to postulate a quite unusual lumpiness relative to the history we’ve seen for innovation in general, and more particularly for tools, computers, AI, and even machine learning. And this seems to postulate much more of a lumpy conceptual essence to “betterness” than I find plausible. Recent machine learning systems today seem relatively close to each other in their abilities, are gradually improving, and none seem remotely inclined to mount a coup.

I don’t mind groups with small relative budgets exploring scenarios with proportionally small chances, but I lament such a large fraction of those willing to take the long term future seriously using this as their default AI scenario. And while I get why people like Yudkowsky focus on scenarios in which they fervently believe, I am honestly puzzled why so many AI risk experts seem to repudiate his extreme scenarios, and yet still see AI risk as a terribly important project to pursue right now. If AI isn’t unusually lumpy, then why are early efforts at AI control design especially valuable?

So far I’ve mentioned two widely expressed AI concerns. First, AIs may hurt human workers by displacing them, and second, AIs may start coups wherein they wrest control of some resources from their owners. A third widely expressed concern is that the world today may be stable, and contain value, only due to somewhat random and fragile configurations of culture, habits, beliefs, attitudes, institutions, values, etc. If so, our world may break if this stuff drifts out of a safe and stable range for such configurations. AI might be or facilitate such a change, and by helping to accelerate change, AI might accelerate the rate of configuration drift.

Similar concerns have often been expressed about allowing too many foreigners to immigrate into a society, or allowing the next youthful generation too much freedom to question and change inherited traditions. Or allowing many other specific transformative techs, like genetic engineering, fusion energy, social media, or space. Or other big social changes, like gay marriage.

Many have deep and reasonable fears regarding big long-term changes. And some seek to design AI so that it won’t allow excessive change. But this issue seems to me much more about change in general than about AI in particular. People focused on these concerns should be looking to stop or greatly limit and slow change in general, and not focus so much on AI. Big change can also happen without AI.

So what am I missing? Why would AI advances be so vastly more lumpy than prior tech advances as to justify very early control efforts? Or if not, why are AI risk efforts a priority now?


Will Design Escape Selection?

In the past, many people and orgs have had plans and designs, many of which made noticeable differences to the details of history. But regarding most of history, our best explanations of overall trends have been in terms of competition and selection, including between organisms, species, cultures, nations, empires, towns, firms, and political factions.

However, when it comes to the future, especially hopeful futures, people tend to think more in terms of design than selection. For example, H.G. Wells was willing to rely on selection to predict a future dystopia in The Time Machine, but his utopia in Things to Come was the result of conscious planning replacing prior destructive competition. Hopeful futurists have long painted pictures of shiny designed techs, planned cities, and wise cooperative institutions of charity and governance.

Today, competition and selection continue on in many forms, including political competition for the control of governance institutions. But instead of seeing governance, law, and regulation as driven largely by competition between units of governance (e.g., parties, cities, or nations), many now prefer to see them in design terms: good people coordinating to choose how we want to live together, and to limit competition in many ways. They see competition between units of governance as largely passé, and getting more so as we establish stronger global communities and governance.

My future analysis efforts have relied mostly on competition and selection. Such as in Age of Em, post-em AI, Burning the Cosmic Commons, and Grabby Aliens. And in my predictions of long views and abstract values. Their competitive elements, and what that competition produces, are often described by others as dystopian. And the most common long-term futurist vision I come across these days is of a “singleton” artificial general intelligence (A.G.I.) for whom competition and selection become irrelevant. In that vision (of which I am skeptical), there is only one A.G.I., which has no internal conflicts, grows in power and wisdom via internal reflection and redesign, and then becomes all powerful and immortal, changing the universe to match its value vision.

Many recent historical trends (e.g., slavery, democracy, religion, fertility, leisure, war, travel, art, promiscuity) can be explained in terms of rising wealth inducing a reversion to forager values and attitudes. And I see these design-oriented attitudes toward governance and the future as part of this pro-forager trend. Foragers didn’t overtly compete with each other, but instead made important decisions by consensus, and largely by appeal to community-wide altruistic goals. The farming world forced humans to more embrace competition, and become more like our pre-human ancestors, but we were never that comfortable with it.

The designs that foragers created, however, were too small to reveal the key obstacle to this vision of civilization-wide collective design to over-rule competition: rot (see 1 2 3 4). Not only is it quite hard in practice to coordinate to overturn the natural outcomes of competition and selection, but the sorts of complex structures that we are tempted to use to achieve that purpose consistently rot, and decay with time. If humanity succeeds in creating world governance strong enough to manage competition, those governance structures are likely to prevent interstellar colonization, as that strongly threatens their ability to prevent competition. And such structures would slowly rot over time, eventually dragging civilization down with them.

If competition and selection manage to continue, our descendants may become grabby aliens, and join the other gods at the end of time. In that case one of the biggest unanswered questions is: what will be the key units of future selection? How will those units manage to coordinate, to the extent that they do, while still avoiding the rotting of their coordination mechanisms? And how can we now best promote the rise of the best versions of such competing units?


Unblinding Our Admin Futures

Our job as futurists is to forecast the future. Not exactly of course, but at least to cut the uncertainty. And one of the simplest ways to do that is to take relatively stable and robust past long term trends and project them into the future. Especially if those trends still have a long way that they could continue before they hit fundamental limits. For example, futurists have tried to apply this method to increasing incomes, leisure, variety, density, non-violence, automation, and ease of communication and transport.

It seems to me that one especially promising candidate for this method is also plausibly the fundamental cause of the industrial revolution: bureaucracy. For centuries we humans have been slowly learning how to manage larger more complex networks and organizations, via more formal roles, rules, and processes. (That is, we have more “admin”.) As a result, our orgs have been getting larger and wider in scope, governments have been doing more, and government functions have moved up to larger scale units (cities to states to nations, etc.).

For example, a twitter poll just found respondents saying 10-1 that the org they know best has been getting more, as opposed to less, bureaucratic over the last decade. And our laws have been getting consistently more complex.

If formal roles, rules, and processes increase over the next century as much as they have over the last century, that should make our future quite different from today. But how exactly? Yes, we’ll use computers more in admin, but that still leaves a lot unsaid. You might think science fiction would be all over this, describing our more admin future in great detail. Yet in fact, science fiction rarely describes much bureaucracy.

In fact, neither does fantasy, the other genre closest to science fiction. Actually, most stories avoid org complexity. For example, most movies and TV shows focus on leisure, instead of work. And when bureaucracy is included, it is usually as a soul-crushing or arbitrary-obstacle villain. It seems that we’d rather look away than acknowledge bureaucracy as a key source of our wealth and value, a pillar and engine of our civilization.

To try to see past this admin blindspot, let us try to find an area of life that today has relatively few formal rules and procedures, and then imagine adding a lot more of them there. This doesn’t necessarily mean that this area of life becomes more restricted and limited compared to today. But it does mean that whatever processes and restrictions there are become more formal and complex.

Public conversation comes to my mind as a potential example here. The rise of social media has created a whole lot more of it, and over the last few years many (including me) have been criticized for saying things the wrong way in public. The claim is often made that it is not the content of what they said that was the problem, it was the way that they said it. That is, many people talk as if we accept many complex rules of public conversation, rules that are often violated.

Thus I’m inclined to imagine a future where we have a lot more formal rules and processes regarding public conversations. These might not be seen as a limit on free speech, in that they only limit how you can say things, not what you can say. These rules might be complex enough to push us to pay for specialist advisors who help us navigate the new rules. Perhaps automation will make such advisors cheaper. And people of that era might prefer the relatively neutral and fair application of these complex rules to the more opportunistic and partisan ways that informal norms were enforced back in the day.

Now I’m not very confident that this is an area of life where we will get a lot more bureaucracy. But I am confident that there will be many such areas, and that we are so far greatly failing to imagine our more bureaucratic future. So please, I encourage you all to help us imagine what our more admin future may look like.

Added 11a: I’m about to attend an event whose dress code is “resort casual”. Whatever that means. I can imagine such dress rules getting a lot more explicit and complex.


To Innovate, Unify or Fragment?

In the world around us, innovation seems to increase with the size of an integrated region of activity. For example, human and computer languages with more users acquire more words and tools at a faster rate. Tech ecosystems, such as those collected around Microsoft, Apple, or Google operating systems, innovate faster when they have more participating suppliers and users. And there is more innovation per capita in larger cities, firms, and economies. (All else equal, of course.)

We have decent theories to explain all this: larger communities try more things, and each trial has more previous things to combine and build on. The obvious implication is that innovation will increase as our world gets larger, more integrated, and adopts more widely-shared standards and tech ecosystems. More unification will induce more innovation.

Simple theory also predicts that species evolve faster when they have larger populations. And this seems to have applied across human history. But if this were generally true across species, then we should expect most biological innovation to happen in the largest species, which would live in the largest most integrated environmental niches. Like big common ocean areas. And most other species to have descended from these big ones.
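
To spell out that simple theory with the textbook approximation (my illustration, with made-up parameter values): a diploid population of size N produces about 2Nμ new beneficial mutations per generation at a locus, and each fixes with probability of roughly 2s, so the adaptive substitution rate scales linearly with N:

```python
# Standard approximation, illustrative parameters: rate of adaptive substitutions
# ~ (2*N*mu new beneficial mutations per generation) * (2*s fixation probability).
def adaptive_rate(N, mu=1e-9, s=0.01):
    return 2 * N * mu * 2 * s

for N in (1e4, 1e6, 1e8):
    print(f"N = {N:.0e}: ~{adaptive_rate(N):.1e} adaptive substitutions per generation")
# Linear in N: naively, the biggest populations in the biggest niches should
# accumulate improvements fastest.
```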

But in fact, more biological innovation happens where the species are the smallest, which happens where mobility is higher and environments are more fragmented and changing. For example, over the last half billion years, we’ve seen a lot more innovation on land than in the sea, more on the coasts than on the interiors of land or sea, and more closer to rivers. All more mobile and fragmented places. How can that be?

Maybe big things tend to be older, and old things rot. Maybe the simple theory mentioned above focuses on many small innovations, but doesn’t apply as well to the few biggest innovations, that require coordinating many supporting innovations. Or maybe phenomena like sexual selection, as illustrated by the peacock’s tail, show how conformity and related collective traps can bedevil species, as well as larger more unified tech ecosystems. It seems to require selection between species to overcome such traps; individual species can’t fix them on their own.

If so, why hasn’t the human species fallen into such traps yet? Maybe the current fertility decline is evidence of such a trap, or maybe such problems just take a long time to arise. Humans fragmenting into competing cultures may have saved us for a while. Individual cultures do seem to have often fallen into such traps. Relatively isolated empires consistently rise and then fall. So maybe cultural competition is mostly what has saved us from cultures falling into traps.

While one might guess that collective traps are a rare problem for species and cultures, the consistent collapse of human empires and our huge dataset on bio innovation suggest that such problems are in fact quite common. So common that we really need larger scale competition, such as between cultures or species, to weed them out. To innovate, the key to growth, we need to fragment, not unify.

Which seems a big red loud warning sign about our current trend toward an integrated world culture, prey to integrated world collective traps, such as via world mobs. They might take some time to reveal themselves, but then be quite hard to eradicate. This seems to me the most likely future great filter step that we face.

Added 10Jan: There are papers on how to design a population structure to maximize the rate of biological evolution.


We Don’t Have To Die

You are mostly the mind (software) that runs on the brain (hardware) in your head; your brain and body are tools supporting your mind. If our civilization doesn’t collapse but instead advances, we will eventually be able to move your mind into artificial hardware, making a “brain emulation”. With an artificial brain and body, you could live an immortal life, a life as vivid and meaningful as your life today, where you never need feel pain, disease, grime, and your body always looks and feels young and beautiful. That person might not be exactly you, but they could (at first) be as similar to you as the 2001 version of you was to you today. I describe this future world of brain emulations in great detail in my book The Age of Em.

Alas, this scenario can’t work if your brain is burned or eaten by worms soon. But the info that specifies you is now only a tiny fraction of all the info in your brain and is redundantly encoded. So if we freeze all the chemical processes in your brain, either via plastination or liquid nitrogen, quite likely enough info can be found there to make a brain emulation of you. So “all” that stands between you and this future immortality is freezing your brain and then storing it until future tech improves.

If you are with me so far, you now get the appeal of “cryonics”, which over the last 54 years has frozen ~500 people when the usual medical tech gave up on them. ~3000 are now signed up for this service, and the [2nd] most popular provider charges $28K, though you should budget twice that for total expenses. (The 1st most popular charges $80K.) If you value such a life at a standard $7M, this price is worth it even if this process has only a 0.8% chance of working. It’s worth more if an immortal life is worth more, and more if your loved ones come along with you.
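
A minimal back-of-the-envelope check of those numbers, using only the figures stated above:

```python
# Breakeven check using the post's figures: the $28K price, doubled for total
# expenses, against a $7M "standard" value of a life.
life_value = 7_000_000
total_cost = 2 * 28_000
print(f"breakeven success probability: {total_cost / life_value:.1%}")  # -> 0.8%
```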

So is this chance of working over 0.8%? Some failure modes seem to me unlikely: civilization collapses, frozen brains don’t save enough info, or you die in a way that prevents freezing. And if billions of people used this service, there’d be a question of whether the future is willing, able, and allowed to revive you. But with only a few thousand others frozen, that’s just not a big issue. All these risks together have well below a 50% chance, in my opinion.

The biggest risk you face then is organizational failure. And since you don’t have to pay them if they aren’t actually able to freeze you at the right time, your main risk re your payment is re storage. Instead of storing you until future tech can revive you, they might instead mismanage you, or go bankrupt, allowing you to thaw. This already happened at one cryonics org.

If frozen today, I judge your chance of successful revival to be at least 5%, making this service worth the cost even if you value such an immortal future life at only 1/6 of a standard life. And life insurance makes it easier to arrange the payment. But more important, this is a service where the reliability and costs greatly improve with more customers. With a million customers, instead of a thousand, I estimate cost would fall, and reliability would increase, each by a factor of ten.

Also, with more customers cryonics providers could afford to develop plastination, already demonstrated in research, into a practical service. This lets people be stored at room temperature, and thus ends most storage risk. Yes, with more customers, each might need to also pay to have future folks revive them, and to have something to live on once revived. But long time delays make that cheap, and so with enough customers total costs could fall to less than that of a typical funeral today. Making this a good bet for most everyone.

When the choice is between a nice funeral for Aunt Sally or having Aunt Sally not actually die, who will choose the funeral? And by buying cryonics for yourself, you also help move us toward the low cost cryonics world that would be much better for everyone. Most people prefer to extend existing lives over creating new ones.

Thus we reach the title claim of this post: if we coordinated to have many customers, it would be cheap for most everyone to not die. That is: most everyone who dies today doesn’t actually need to die! This is possible now. Ancient Egypt, relative rationalists among the ancients, paid to mummify millions, a substantial fraction of their population, and also a similar number of animals, in hope of later revival. But we now actually can mummify to allow revival, yet we have only done that to 500 people, over a period when over 4 billion people have died.

Why so few cryonics customers? When I’ve taught health economics, over 10% of students judge the chances of cryonics working to be high enough to justify a purchase. Yet none ever buy. In a recent poll, 31.5% of my followers said they planned to sign up, but few have. So the obstacle isn’t supporting beliefs, it is the courage to act on such beliefs. It looks quite weird to act on a belief in cryonics. So weird that spouses often divorce those who do. (But not spouses who spend a similar amount to send their ashes into space, which looks much less weird.) We like to think we tolerate diversity, and we do for unimportant stuff, but for important stuff we in fact strongly penalize diversity.

Sure it would help if our official medical experts endorsed the idea, but they are just as scared of non-conformity, and also stuck on a broken concept of “science” which demands someone actually be revived before they can declare cryonics feasible. Non-medical scientists like that would insist we can’t say our sun will burn out until it actually does, or that rockets could take humans to Mars until a human actually stands on Mars. The fact that their main job is to prevent death and they could in fact prevent most death doesn’t weigh much on them relative to showing allegiance to a broken science concept.

Severe conformity pressures also seem the best explanation for the bizarre range of objections offered to cryonics, objections that are not offered re other ways to cut death rates. The most common objection offered is just that it seems “unnatural”. My beloved colleague Tyler said reducing your death rate this way is selfish, you might be tortured if you stay alive, and in an infinite multiverse you can never die. Others suggest that freezing destroys your soul, that it would hurt the environment, that living longer would slow innovation, that you might be sad to live in a world different from that of your childhood, or that it is immoral to buy products that not absolutely everyone can afford.

While I wrote a pretty similar post a year ago, I wrote this as my Christmas present to Alex Tabarrok, who requested this topic.

Added 17Dec: The chance the future would torture a revived you is related to the chance we would torture an ancient revived today:

Answers were similar re a random older person alive today. And people today are actually tortured far less often than this suggests, as we organize society to restrain random individual torture inclinations. We should expect the future to also organize to prevent random torture, including of revived cryonics patients.

Also, if there were millions of such revived people, they could coordinate to revive each other and to protect each other from torture. Torture really does seem a pretty minor issue here.


Coming Commitment Conflicts

If competition, variation, and selection long continues, our worlds will become dominated by artificial creatures who take a long view of their future, and who see themselves as directly and abstractly valuing having more distant descendants. Is there anything more we robustly predict about them?

Our evolving descendants will form packages wherein each part of the package promotes reproduction of other package parts. So a big question is: how will they choose their packages? While some package choices will become very entrenched, like the different organs in our bodies, other choices may be freer to change at the last minute, like political coalitions in democracies. How will our descendants choose such coalition partners?

One obvious strategy is to make deals with coalition partners to promote each other’s long term reproduction. Some degree of commitment is probably optimal, and many technologies of commitment will likely be available. But note: it is probably possible to over-commit, by committing too wide a range of choices over too long a time period with too many partners, and to under-commit, committing too few choices over too short a time period with too few partners. Changed situations call for changed coalitions. Thus our descendants will have to think carefully about how strongly and long to commit on what with whom.

But is it even possible to enforce deals to promote the reproduction of a package? Sure, the amount of long-term reproduction of a set of features or a package subset seems a clearly measurable outcome, but how could such a team neutrally decide which actions best promote that overall package? Wouldn’t the detailed analyses that each package part offers on such a topic tend to be biased to favor those parts? If so, how could they find a neutral analysis to rely on?

My work on futarchy lets me say: this is a solvable problem. Because we know that futarchy would solve this. A coalition could neutrally but expertly decide what actions would promote their overall reproduction by choosing a specific ex-post-numeric-measure of their overall reproduction, and then creating decision markets to advise on each particular decision where concrete identifiable options can be found.

There may be other ways to do this, and some ways may even be better than decision markets. But it clearly is possible for future coalitions to neutrally and expertly decide what shared actions would promote their overall reproduction. So as long as they can make such actions visible to something like decisions markets, coalitions can reliably promote their joint reproduction.
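
As a minimal sketch of how that could work (hypothetical names and numbers; a decision market in the futarchy sense, not any particular implementation), the coalition would run one conditional market per candidate action on its chosen reproduction measure, take the action with the highest market estimate, and void trades conditioned on actions not taken:

```python
# Hypothetical sketch of a decision (conditional) market: traders bet on the
# coalition's ex-post reproduction measure, conditional on each candidate action;
# trades conditioned on the action not taken are voided (refunded).
market_estimates = {               # market price ~ E[reproduction measure | action]
    "ally_with_coalition_A": 1.8,  # e.g. expected copies of the package in a century
    "ally_with_coalition_B": 2.4,
}

chosen = max(market_estimates, key=market_estimates.get)
print(f"Take action: {chosen}")
# Only trades conditional on `chosen` settle against the later measured outcome;
# the rest are called off, which keeps the advice neutral across package parts.
```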

Thus we can foresee an important future activity: forming and reforming reproduction coalitions.
