Monthly Archives: November 2018

Mars

A publicist recently emailed me: 

We are inviting select science and technology related press to view an early screening of Ron Howard and Brian Grazer’s MARS Season 2. The series premieres on November 12, however, we could email a screener to you then follow up with top interviews from the season. We’d just ask that you hold coverage until the week of Nov 7.

MARS is scripted, however, during each episode, there are cut-aways to documentary style discussion by real scientists and thinkers who describe the reality of our endeavor to the red planet. The scripted aspect rigorously follows science and the latest in space travel technology.

Though I hadn’t heard of the show, I was flattered enough to accept this invitation. I have now watched both seasons, and today am allowed to give you my reactions. 

The branding by National Geographic, and the interleaving of fictional story with documentary interviews, both suggest a realistic story. Their “making of” episode also brags of realism. But while it is surely more realistic than most science fiction (alas, a low bar), it seemed to me substantially less realistic, and less entertaining, than the obvious comparison, the movie The Martian. The supposedly “rigorous” documentary parts don’t actually go into technical details (except in their extra “making of” episode); they just have big “Mars” names talking abstractly about emotional issues related to Mars colonization.  

As you might expect, the story contains way too many implausibly close calls. And others have pointed out technical inaccuracies. But let me focus on the economics.

First, they say near the end of the second season’s story that they have completed 22% of an orbiting mirror array, designed to melt the polar ice caps. From Wikipedia:

An estimated 120 MW-years of electrical energy would be required in order to produce mirrors large enough to vaporize the ice caps. … If all of this CO2 were put into the atmosphere, it would only double the current atmospheric pressure from 6 mbar to 12 mbar, amounting to about 1.2% of Earth’s mean sea level pressure. The amount of warming that could be produced today by putting even 100 mbar of CO2 into the atmosphere is small, roughly of order 10 K. (more)
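As a quick sanity check on those quoted pressure figures (a minimal sketch; the only number I add is Earth’s standard mean sea-level pressure of 1013.25 mbar):

```python
# Sanity check of the quoted pressure figures. The one assumed input
# is Earth's standard mean sea-level pressure of 1013.25 mbar.
mars_now = 6.0                # current Martian surface pressure, mbar
mars_doubled = 2 * mars_now   # pressure if all polar-cap CO2 were released
earth = 1013.25               # Earth mean sea-level pressure, mbar

print(f"{mars_doubled:.0f} mbar is {mars_doubled / earth:.1%} of Earth's pressure")
# -> 12 mbar is 1.2% of Earth's pressure
```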

From a recent NASA report:

There is not enough CO2 remaining on Mars to provide significant greenhouse warming were the gas to be put into the atmosphere; in addition, most of the CO2 gas is not accessible and could not be readily mobilized. As a result, terraforming Mars is not possible using present-day technology. (more)

These mirrors are supposedly made on Mars out of materials dug up there, and then launched into orbit. Yet we only seem to see a few dozen people living on Mars, they’ve only been there ten years, and we never meet anyone actually working on making and launching mirrors. Such a project would be enormous, requiring vast resources and personnel. I can’t see how this small group could have fielded so many mirrors so fast, nor can I see the cost being worth such modest and slow increases in pressure and temperature, especially during the early colonization period.

There is almost no discussion of the basic economics of this crazy expensive colonization effort. The first launches are paid for by an International Mars Science Foundation (IMSF), initially run by a very rich guy said to have put 90% of his wealth into it. Is this all charity, or does he get a return if things go well? Later we see mostly nations around a governing table, and public opinion seems very important, as if nations were paying, mainly to gain prestige. But the scale of all this seems huge compared to other things nations do together for prestige. 

The second season starts with the arrival on Mars of a for-profit firm, Lukrum, run by greedy men on Mars and Earth, while good-hearted women now run the IMSF on Mars and Earth. Lukrum consistently breaks agreements, grabs anything it can, takes unjustified risks with everyone’s lives, and otherwise acts badly. Yet, strangely, IMSF as a customer is the only plausible source of future revenue for Lukrum. So how do they expect to get a return on their huge investment if they treat their only possible customer badly? Apparently their plan is to just lobby the governments behind IMSF to have IMSF pay them off. As if lobbying was typically a great general investment strategy (it isn’t). 

Thus the entire second season is mostly a morality play on the evils of greedy firms. The documentary parts make it clear that this is to be taken as a lesson for today on global warming and the environment; for-profit firms are just not to be trusted and must be firmly under the control of scientists or governments who cannot possibly be lobbied by the for-profit firms. Scientists and governments can be trusted, unless they are influenced by for-profit firms. The only reason to include firms in any venture is if they’ve used their money to buy political power that you can’t ignore, or if a project needs more resources than dumb voters are willing to pay for. (Obviously, they think, the best solution is to nationalize everything, but often dumb voters won’t approve that either.)

All this in a story that brags about its scientific accuracy, and that breaks for interviews with “experts.” But these are “experts” in Mars and environmental activism, not economics or political economy.

For the record, as an economist let me say that a plausible reason to include for-profit firms on Mars, and elsewhere, is that they often have better incentives to actually satisfy customers. Yes, that’s a problem on Mars, because other than governments seeking prestige, there are not likely to be enough customers on Mars to satisfy anytime soon, as almost anything desired is much cheaper to make here on Earth. This includes not just exotic places to visit or move to, but protection against human extinction.

Yes, things can go badly when corruptible governments subcontract to for-profit firms who lobby them. But that’s hardly a good general reason to dislike for-profit firms. Governments who can be corrupted by lobbying are also generally corruptible and inept in many other ways. Having such governments spend vast sums on prestige projects to impress ignorant voters and foreigners is not generally a good way to get useful stuff done. 

By the way, I’ve also watched the first season of The First, another TV series on Mars colonization. So far the show doesn’t seem much interested in Mars or its related politics, econ, or tech, compared to the personal relationship dramas of its main characters. It has not at all explained why anyone is funding this Mars mission. I like its theme music though.

Avoiding Blame By Preventing Life

If morality is basically a package of norms, and if norms are systems for making people behave, then each individual’s main moral priority becomes: to avoid blame. While the norm system may be designed to on average produce good outcomes, when that system breaks, each individual has only weak incentives to fix it. They mainly seek to avoid blame according to the current broken system. In this post I’ll discuss an especially disturbing example, via a series of four hypothetical scenarios.

1. First, imagine we had a tech that could turn ordinary humans into productive zombies. Such zombies can still do most jobs effectively, but they no longer have feelings or an inner life, and from the outside they also seem dead inside, lacking passion, humor, and liveliness. Imagine that someone proposed to use this tech on a substantial fraction of the human population. That is, they propose to zombify those who do jobs that others see as boring, routine, and low status, like collecting garbage, cleaning bedpans, or sweeping floors. As in this scenario living people would be turned into dead zombies, this proposal would probably be widely seen as genocide, and soundly rejected.

2. Second, imagine someone else proposes the following variation: when a new child of a parent seems likely enough to grow up to take such a low status job, this zombie tech is applied very early to the fetus. So no non-zombie humans are killed; they are just prevented from existing. Zombie kids are still able to learn, and eventually learn to do those low status jobs. Thus technically this is not genocide, though it could be seen as the extermination of a class. And many parents would suffer from losing their chance to raise lively humans. Whoever proposed all this is probably considered evil, and their proposal rejected.

3. Third, imagine combining this proposal with another tech that can reliably induce identical twins. This will allow the creation of extra zombie kids. That is, each birth to low status parents is now of identical twins, one of which is an ordinary kid, and the other a zombie kid. If parents don’t want to raise zombie kids, some other organization will take over that task. So now the parents get to have all their usual lively kids, and the world gains a bunch of extra zombie kids who grow up to do low status jobs. Some may support this proposal, but surely many others will find it creepy. I expect that it would be pretty hard to create a political consensus to support this proposal.

While in the first scenario people were killed, and in the second scenario parents were deprived, this third scenario is designed to take away these problems. But this third proposal still has two remaining problems. First, if we have a choice between creating an empty zombie and a living, feeling person who finds their life worth living, the second option seems to result in a better world. Which argues against zombies. Second, if zombies seem like monsters, supporters of this proposal might be blamed for creating monsters. And as the zombies look a lot like humans, many will see you as a bad person if you seem inclined to or capable of treating them badly. It looks bad to be willing to create a lower class, and to treat them like a disrespected lower class, if that lower class looks a lot like humans. So by supporting this third proposal, you risk being blamed.

4. My fourth and last scenario is designed to split apart these two problems with the third scenario, to make you choose which problem you care more about. Imagine that robots are going to take over most all human jobs, but that we have a choice about which kind of robot they are. We could choose human-like robots, who act lively with passion and humor, and who inside have feelings and an inner life. Or we could choose machine-like robots, who are empty inside and also look empty on the outside, without passion, humor, etc.

If you are focused on creating a better world, you’ll probably prefer the human-like robots, as that choice results in more creatures who find their lives worth living. But if you are focused on avoiding blame, you’ll probably prefer the machine-like robots, as few will blame you for that choice. The creatures you create then look so little like humans that few will blame you for creating such creatures, or for treating them badly.

I recently ran a 24 hour poll on Twitter about this choice, a poll to which 700 people responded. Of those who made a choice, 77% picked the machine-like robots.

Maybe my Twitter followers are unusual, but I doubt that a majority in a more representative poll would pick the human-like option. Instead, I think most people prefer the option that avoids personal blame, even if it makes for a worse world.

Long Views Are Coming

One useful way to think about the future is to ask what key future dates are coming, and then to think about in what order they may come, in what order we want them, and how we might influence that order. Such key dates include extinction, theory of everything found, innovation runs out, exponential growth slows down, and most bio humans unemployed. Many key dates are firsts: alien life or civilization found, world government founded, off-Earth self-sufficient colony, big nuclear war, immortal born, time machine made, cheap emulations, and robots that can cheaply replace most all human workers. In this post, I want to highlight another key date, one that is arguably as important as any of the above: the day when the dominant actors take a long view.

So far history can be seen as a fierce competition by various kinds of units (including organisms, genes, and cultures) to control the distant future. Yet while this has resulted in very subtle and sophisticated behavior, almost all this behavior is focused on the short term. We see this in machine learning systems; even when they are selected to achieve end-of-game outcomes, they much prefer to do this via current behaviors that react to current stimuli. It seems to just be much harder to successfully plan on longer timescales.

Animal predators and prey developed brains to plan over short sections of a chase or fight. Human foragers didn’t plan much longer than that, and it took a lot of cultural selection to get human farmers to plan on the scale of a year, e.g., to save grain for winter eating and spring seeds. Today human organizations can consistently manage modest plans on the scale of a few years, but we fail badly when we try much larger or longer plans.

Arguably, competition and evolution will continue to select for units capable of taking longer views. And so if competition continues for long enough, eventually our world should contain units that do care about the distant future, and are capable of planning effectively over long timescales. And eventually these units should have enough of a competitive advantage to dominate.

And this seems a very good thing! Arguably, the biggest thing that goes wrong in the world today is that we fail to take a long view. Because we fail to much consider the long run in our choices, we put a vast and great future at risk, such as by tolerating avoidable existential risks. This will end once the dominant units take a long view. At that point there may be fights over which direction the future should take, and coordination failures may lead to bad outcomes, but at least the future will not be neglected.

The future not being neglected seems such a wonderfully good outcome that I’m tempted to call the day when this starts “Long View Day”, one of the most important future dates. And working to hasten that day could be one of the most important things we can do to help the future. So I hereby call to action those who (say they) care about the distant future to help in this task.

A great feature of this task is that it doesn’t require great coordination; it is a “race to the top”. That is, it is in the interest of each cultural unit (nation, language, ethnicity, religion, city, firm, family, etc.) to figure out how to take effective long term views. So you can help the world by allying with a particular unit and helping it learn to take an effective long term view. You don’t have to choose between “selfishly” helping your unit, or helping the world as a whole.

One way to try to promote longer term views is to promote longer human lifespans. It’s not that clear to me this works, however, as even immortals can prioritize the short run. And extending lifespans is very hard. But it is a fine goal in any case.

A bad way to encourage long views is to just encourage the adoption of plans that advocates now claim are effective ways to help in the long run. After all, it seems that one of the main obstacles so far to taking long views is the typical low quality of long-term plans offered. Instead, we must work to make long term planning processes more reliable.

My guess is that a key problem is worse incentives and accountability for those who make long term predictions, and who propose and implement longer term plans. If your five year plan goes wrong, that could wreck your career, but you might make a nice long comfy career out of a fifty year plan that will later go wrong. So we need to devise and test new ways to create better incentives for long term predictions and plans.

You won’t be surprised to hear me say I think prediction markets have promise as a way to create better incentives and accountability. But we haven’t experimented that much with long-term prediction markets, and they have some potential special issues, so there’s a lot of work to do to explore this approach.
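To illustrate one such special issue (a toy sketch; the edge, horizon, and opportunity rate here are all assumed numbers, not anything from this post): capital locked into a long-term market bet forgoes ordinary investment returns, so the reward for correcting a distant-future price shrinks with the horizon.

```python
# Toy sketch of an incentive problem for long-term prediction markets:
# a dollar tied up in a bet can't earn the ordinary market return, so
# the present value of even a solid informational edge decays with time.
def present_value_of_edge(edge, years, opportunity_rate=0.05):
    """Expected profit per dollar bet, discounted for the locked capital."""
    return edge / (1 + opportunity_rate) ** years

for years in (1, 10, 50):
    pv = present_value_of_edge(edge=0.10, years=years)
    print(f"10% edge paying out in {years:2d} yrs is worth {pv:.3f} today")
# -> 1 yr: 0.095, 10 yrs: 0.061, 50 yrs: 0.009
```

On these assumed numbers, a trader’s incentive to fix a fifty-year price is an order of magnitude weaker than for a one-year price, which is one reason long-term market designs need more experimentation.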

Once we find ways to make more reliable long term plans, we will still face the problem that organizations are typically under the control of humans, who seem to consistently act on short term views. In my Age of Em scenario, this could be solved by having slower ems control long term choices, as they would naturally have longer term views. Absent ems, we may want to experiment with different cultural contexts for familiar ordinary humans, to see which can induce such humans to prioritize the long term.

If we can’t find contexts that make ordinary humans take long term views, we may want to instead create organizations with longer term views. One approach would be to release enough of them from tight human controls, and subject them to selection pressures that reward better long term views. For example, evolutionary finance studies what investment organizations free to reinvest all their assets would look like if they were selected for their ability to grow assets well.

Some will object to the creation of powerful entities whose preferences disagree with those of familiar humans alive at the time. And I admit that gives me pause. But if taken strictly that attitude seems to require that the future always remain neglected, if ordinary humans discount the future. I fear that may be too high a price to pay.

Long Legacies And Fights In A Competitive Universe

My last post discussed how to influence the distant future, using a framework focused on a random uncaring universe. This is, for example, the usual framework of most who see themselves as future-oriented “effective altruists”. They see most people and institutions as not caring much about the distant future, and see themselves as unusual exceptions in three ways: 1) their unusual concern for the distant future, 2) their unusual degree of general utilitarian altruistic concern, and 3) their attention to careful reasoning on effectiveness.

If few care much or effectively about the distant future, then efforts to influence that distant future don’t much structure our world, and so one can assume that the world is structured pretty randomly compared to one’s desires and efforts to influence the distant future. For example, one need not be much concerned about the possibility that others have conflicting plans, or that they will actively try to undermine one’s plans. In that case the analysis style of my last post seems appropriate.

But it would be puzzling if such a framework were so appropriate. After all, the current world we see around us is the result of billions of years of fierce competition, a competition that can be seen as about controlling the future. In biological evolution, a fierce competition has selected species and organisms for their ability to make future organisms resemble them. More recently, within cultural evolution, cultural units (nations, languages, ethnicities, religions, regions, cities, firms, families, etc.) have been selected for their ability to make future cultural units resemble them. For example, empires have been selected for their ability to conquer neighboring regions, inducing local residents to resemble them more than they do conquered empires.

In a world of fierce competitors struggling to influence the future, it makes less sense for any one focal alliance of organism, genetic, and cultural units (“alliance” for short in the rest of this post) to assume a random uncaring universe. It instead makes more sense to ask who has been winning this contest lately, what strategies have been helping them, and what advantages this one alliance might have or could find soon to help in this competition. Competitors would search for any small edge to help them pull even a bit ahead of others, they’d look for ways to undermine rivals’ strategies, and they’d expect rivals to try to undermine their own strategies. As most alliances lose such competitions, one might be happy to find a strategy that allows one to merely stay even for a while. Yes, successful strategies sometimes have elements of altruism, but usually as ways to assert prestige or to achieve win-win coordination deals.

Furthermore, in a world of fiercely competing alliances, one might expect to have more success at future influence via joining and allying strongly with existing alliances, rather than by standing apart from them with largely independent efforts. In math there is often an equivalence between “maximize A given a constraint on B” and “maximize B given a constraint on A”, in the sense that both formulations give the same answers. In a related fashion, similar efforts to influence the future might be framed in either of two rather different ways:

  1. I’m fundamentally an altruist, trying to make the world better, though at times I choose to ally and compromise with particular available alliances.
  2. I’m fundamentally a loyal member/associate of my alliance, but I think that good ways to help it are to a) prevent the end of civilization, b) promote innovation and growth within my alliance, which indirectly helps the world grow, and c) have my alliance be seen as helping the world in a way which raises its status and reputation.

This second framing seems to have some big advantages. People who follow it may win the cooperation, support, and trust of many members of a large and powerful alliance. And such ties and supports may make it easier to become and stay motivated to continue such efforts. As I said in my last post, people seem much more motivated to join fights than to simply help the world overall. Our evolved inclinations to join alliances probably create this stronger motivation.

Of course if in fact most all substantial alliances today are actually severely neglecting the distant future, then yes it can make more sense to mostly ignore them when planning to influence the distant future, except for minor connections of convenience. But we need to ask: how strong is the evidence that in fact existing alliances greatly neglect the long run today? Yes, they typically fail to adopt policies that many advocates say would help in the long run, such as global warming mitigation. But others disagree on the value of such policies, and failures to act may also be due to failures to coordinate, rather than to a lack of concern about the long run.

Perhaps the strongest evidence of future neglect is that typical financial rates of return have long remained well above growth rates, strongly suggesting a direct discounting of future outcomes due to their distance in time. For example, these high rates of return are part of standard arguments that it will be cheaper to accommodate global warming later, rather than to prevent it today. Evolutionary finance gives us theories of what investing organizations would do when selected to take a long view, and it doesn’t match what we see very well. Wouldn’t an alliance with a long view take advantage of high rates of return to directly buy future influence on the cheap? Yes, individual humans today have to worry about limited lifespans and difficulties controlling future agents who spend their money. But these should be much less of an issue for larger cultural units. Why don’t today’s alliances save more?
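To see the force of this question, consider a toy calculation (all numbers here are assumed for illustration): if invested wealth compounds at a rate above economic growth, a patient alliance’s relative share of world resources snowballs.

```python
# Toy illustration with assumed rates: invested wealth compounds at
# r = 5%/yr while the economy grows at g = 3%/yr, so a patient saver's
# share of world resources grows by (1+r)/(1+g) each year.
r, g = 0.05, 0.03
for years in (25, 50, 100):
    relative = ((1 + r) / (1 + g)) ** years
    print(f"after {years:3d} yrs: {relative:.2f}x larger share")
# after  25 yrs: 1.62x larger share
# after  50 yrs: 2.62x larger share
# after 100 yrs: 6.84x larger share
```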

Important related evidence comes from data on our largest, longest-term known projects. Eight percent of global production is now spent on projects that cost over one billion dollars each. These projects tend to take many years, have consistent cost and time over-runs and benefit under-runs, and usually are net cost-benefit losers. I first heard about this from Freeman Dyson, in the “Fast is Beautiful” chapter of Infinite in All Directions. In Dyson’s experience, big slow projects are consistent losers, while fast experimentation often makes for big wins. Consider also the many large slow and failed attempts to aid poor nations.

Other related evidence includes the fact that the time when a firm builds a new HQ tends to be a good time to sell its stock, that futurists typically do badly at predicting important events even a few decades into the future, and the “rags to riches to rags in three generations” pattern, whereby individuals who find ways to grow wealth don’t pass such habits on to their grandchildren.

A somewhat clear exception where alliances seem to pay short term costs to promote long run gains is in religious and ideological proselytizing. Cultural units do seem to go out of their way to indoctrinate the young, to preach to those who might convert, and to entrench prior converts into not leaving. Arguably, farming era alliances also attended to the long run when they promoted fertility and war.

So what theories do we have to explain this data? I can see three:

1) Genes Still Rule – We have good theory on why organisms that reproduce via sex discount the future. When your kids only share half of your genes, if you consider spending on yourself now versus on your kid one generation later, you discount future returns at roughly a factor of two per generation, which isn’t bad as an approximation to actual financial rates of return. So one simple theory is that even though cultural evolution happens much faster than genetic evolution, genes still remain in firm control of cultural evolution. Culture is a more effective way for genes to achieve their purposes, but genes still set time discounts, not culture.
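For a rough check of that approximation (the generation lengths are my assumption), a factor-of-two discount per generation implies a modest annual discount rate:

```python
# A factor-of-two discount per generation, converted to an implied
# annual rate; the 25- and 30-year generation lengths are assumptions.
for gen_years in (25, 30):
    annual = 2 ** (1 / gen_years) - 1
    print(f"{gen_years}-yr generations -> {annual:.1%}/yr")
# 25-yr generations -> 2.8%/yr
# 30-yr generations -> 2.3%/yr
```

A few percent per year is indeed in the broad range of long-run real financial returns, which is what makes this simple theory worth taking seriously.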

2) Bad Human Reasoning – While humans are impressive actors when they can use trial and error to hone behaviors, their ability to reason abstractly but reliably to construct useful long term plans is terrible. Because of agency failures, cognitive biases, incentives to show off, excess far views, overconfidence, or something else, alliances learned long ago not to trust to human long term plans, or to accumulations of resources that humans could steal. Alliances have traditionally invested in proselytizing, fertility, prestige, and war because those gains are harder for agents to mismanage or steal via theft and big bad plans.

3) Cultures Learn Slowly – Cultures haven’t yet found good general purpose mechanisms for making long term plans. In particular, they don’t trust organized groups of humans to make and execute long term plans for them, or to hold assets for them. Cultures have instead experimented with many more specific ways to promote long term outcomes, and have only found successful versions in some areas. So they seem to act with longer term views in a few areas, but mostly have not yet managed to find ways to escape the domination of genes.

I lean toward this third, compromise theory. In my next post, I’ll discuss a dramatic prediction from all this, one that can greatly influence our long-term priorities. Can you guess what I will say?
