Social Media Lessons

Women consistently express more interest than men in stories about weather, health and safety, natural disasters and tabloid news. Men are more interested than women in stories about international affairs, Washington news and sports. (more)

Tabloid newspapers … tend to be simply and sensationally written and to give more prominence than broadsheets to celebrities, sports, crime stories, and even hoaxes. They also take political positions on news stories: ridiculing politicians, demanding resignations, and predicting election results. (more)

Two decades ago, we knew nearly as much about computers, the internet, and the human and social sciences as we do today. In principle, this should have let us foresee broad trends in computer/internet applications to our social lives. Yet we seem to have been surprised by many aspects of today’s “social media”. We should take this as a chance to learn; what additional knowledge or insight would one have to add to our views from two decades ago to make recent social media developments not so surprising?

I asked this question Monday night on Twitter, and no one pointed me to existing essays on it; the topic seems neglected. So I’ve been pondering this for the last day. Here is what I’ve come up with.

Some people did use computers/internet for socializing twenty years ago, and those applications do have some similarities to applications today. But we also see noteworthy differences. Back then, a small passionate minority of mostly young nerdy status-aspiring men sat at desks in rare off hours to send each other text, via email and topic-organized discussion groups, as on Usenet. They tended to talk about grand big topics, like science and international politics, and were often combative and rude to each other. They avoided centralized systems, instead participating in many decentralized versions under separate identities; it was hard to see how popular any one person was across all these contexts.

In today’s social media, in contrast, most everyone is involved, text is more often displaced by audio, pictures, and video, and we typically use our phones, everywhere and at all times of day. We more often forward what others have said rather than saying things ourselves, and the things we forward are more opinionated, less well vetted, and more about politics, conflict, culture, and personalities. Our social media talk is also more in these directions, is more noticeably self-promotional, and is more organized around our personal connections in more centralized systems. We have more publicly visible measures of our personal popularity and attention, and we frequently get personal affirmations of our value and connection to specific others. As we talk directly more via text than voice, and date more via apps than by asking associates in person, our social interactions are more documented and separable, and thus protect us more from certain kinds of social embarrassment.

Some of these changes should have been predictable from lower costs of computing and communication. Another way to understand these changes is that the pool of participants changed, from nerdy young men to everyone. But the best organizing principle I can offer is: social media today is more lowbrow than the highbrow versions once envisioned. While over the 1800s culture separated more into low versus high brow, over the last century this has reversed, with low displacing high, as in more informal clothes, pop music displacing classical, and movies displacing plays and opera. Social media is part of this trend, a trend that tech advocates, who sought higher social status for themselves and their tech, didn’t want to see.

TV news and tabloids have long been lower status than newspapers. Text has long been higher status than pictures, audio, and video. More carefully vetted news is higher status, and neutral news is higher status than opinionated rants. News about science and politics and the world is higher status than news about local culture and celebrities, which is higher status than personal gossip. Classic human norms against bragging and self-promotion reduce the status of those activities and of visible indicators of popularity and attention.

The mostly young male nerds who filled social media two decades ago, and who tried to look forward, envisioned highbrow versions made for people like themselves. Such people like to achieve status by sparring in debates on the topics that fill high status traditional media. As they don’t like to admit they do this for status, they didn’t imagine much self-promotion or detailed tracking of individual popularity and status. And as they resented loss of privacy and strong concentrations of corporate power, they imagined decentralized systems with effectively anonymous participants.

But in fact ordinary people don’t care as much about privacy and corporate concentration, they don’t as much mind self-promotion and status tracking, they are more interested in gossip and tabloid news than high status news, they care more about loyalty than neutrality, and they care more about gaining status via personal connections than via grand-topic debate sparring. They like wrestling-like bravado and conflict, are less interested in accurate vetting of news sources, like to see frequent personal affirmations of their value and connection to specific others, and fear being seen as lower status if such things do not continue at a sufficient rate.

This high to lowbrow account suggests a key question for the future of social media: how low can we go? That is, what new low status but commonly desired social activities and features can new social media offer? One candidate that occurs to me is: salacious gossip on friends and associates. I’m not exactly sure how it can be implemented, but most people would like to share salacious rumors about associates, perhaps documented via surveillance data, in a way that allows them to gain relevant social credit from it while still protecting them from being sued for libel/slander when rumors are false (which they will often be), and at least modestly protecting them from being verifiably discovered by their rumor’s target. That is, even if a target suspects them as the source, they usually aren’t sure and can’t prove it to others. I tentatively predict that eventually someone will make a lot of money by providing such a service.

Another solid if less dramatic prediction is that as social media spreads out across the world, it will move toward the features desired by typical world citizens, relative to features desired by current social media users.

Added 17 Nov: I wish I had seen this good Arnold Kling analysis before I wrote the above.


Can You Outsmart An Economist?

Steven Landsburg’s new book, Can You Outsmart An Economist?, discusses many interesting questions. For example, in one nice and real case, median wages for all workers only rose 3% from 1980-2005, yet they rose 15% or more for each race/sex subgroup, because the relative group sizes changed.
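
To see how the arithmetic can work out this way, here is a minimal simulation sketch in Python. The wage distributions and group shares are made up by me purely to illustrate the composition effect, not to reproduce the book’s data: each group’s median wage rises 15%, yet the overall median rises only a few percent because the lower-wage group becomes a larger share of the workforce.

```python
import numpy as np

rng = np.random.default_rng(0)

def wages(n_high, n_low, med_high, med_low):
    # Made-up wage distributions centered on each group's median.
    return (rng.normal(med_high, 5.0, n_high),
            rng.normal(med_low, 3.0, n_low))

# "1980": the higher-wage group is 70% of the workforce (illustrative only).
h80, l80 = wages(70_000, 30_000, 30.0, 15.0)
# "2005": each group's median is up 15%, but the lower-wage group
# has grown from 30% to 45% of the workforce.
h05, l05 = wages(55_000, 45_000, 30.0 * 1.15, 15.0 * 1.15)

for year, h, l in [("1980", h80, l80), ("2005", h05, l05)]:
    print(year,
          f"high {np.median(h):.1f}",
          f"low {np.median(l):.1f}",
          f"overall {np.median(np.concatenate([h, l])):.1f}")
# Each group's median rises 15%, yet the overall median barely moves.
```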

Taking the book title as a challenge, however, I have to point out the one place where I disagreed with the book. Landsburg says:

In a recent five-year period on the Maryland stretch of I-95, a black motorist was three times as likely as a white motorist to be stopped and searched for drugs. Black motorists were found to be carrying drugs at pretty much exactly the same rate as whites. (A staggeringly high one-third of stopped blacks and the same staggeringly high one-third of stopped whites were caught with drugs in their cars.) This was widely reported in the news media as clear-cut evidence of racial discrimination. … If you believe that people respond to incentives, then you must believe that if blacks were stopped at the same lower rate that whites were, more of them would have carried drugs. …

If [police] were single-mindedly out to maximize arrests, they’d start by focusing their attention on the group that’s most inclined to carry drugs—in this case, blacks. … If blacks are still carrying more drugs than whites, the police shift even more of their focus to blacks, leading the gap to close a bit more. This continues until whites and blacks are carrying drugs in equal proportions. … If you want to maximize deterrence, you’ll concentrate more on stopping whites, because there are more whites in the population to deter, … which would deter more whites from carrying drugs—and then the average white motorist would carry fewer drugs than the average black.

I’m with him until that last sentence. I think he is assuming that each person’s choice to carry drugs or not is made independently, that each choice is deterred independently via a perceived chance of being stopped, that potential carriers know only the average chance that someone in their group is stopped, and that police can’t usefully vary the stopping chance within groups.

If a perceived stopping chance could be chosen independently for each individual, then to maximize deterrence overall that chance would be set somewhat differently for each individual, according to their differing details. But the constraint that everyone in a group must share the same perceived stopping chance will prevent this detailed matching, making it a bit harder to deter drug carrying in that group. This is a reason that, all else equal, police motivated by deterrence may try a little less hard to deter larger groups, who are harder to deter because they have more internal variation.
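
Here is a toy illustration of that constraint, with thresholds I made up and a deliberately crude deterrence rule (a person is deterred exactly when the perceived stop rate reaches their personal threshold, and we ask what it costs to deter everyone):

```python
# Toy model, made-up numbers: a person is deterred iff the perceived
# per-capita stop rate is at least their personal threshold.
small_group = [0.20, 0.25]              # homogeneous thresholds
large_group = [0.05, 0.20, 0.40, 0.60]  # more internal variation

def stops_to_deter_all(thresholds, tailored):
    """Total expected stops needed to deter everyone in the group."""
    if tailored:  # a different perceived rate for each individual
        return sum(thresholds)
    # One shared group-wide rate must match the highest threshold,
    # and everyone in the group gets stopped at that rate.
    return max(thresholds) * len(thresholds)

for name, t in [("small", small_group), ("large", large_group)]:
    print(name, "tailored:", stops_to_deter_all(t, True),
          "shared:", stops_to_deter_all(t, False))
# The shared-rate constraint wastes relatively more stops in the
# group with more internal variation in thresholds.
```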

Landsburg instead argues that you’ll put more effort into deterring the larger group, apparently just because there is a larger overall benefit from deterring a larger group. Yes, of course, deterring a group twice as large could produce twice the deterrent benefit in terms of its effect on the overall drug-carrying crime rate. But that comes at twice the cost in terms of twice as many traffic stops. I don’t see how there is a larger benefit relative to cost from focusing deterrence efforts on larger groups.
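
One way to formalize this point (my notation, under the simple assumption that deterrence within a group depends only on the per-capita stop rate): if a group of size $N$ faces per-capita stop rate $p$, the cost is $Np$ stops and the benefit is $N\,d(p)$ fewer carriers, for some deterrence function $d$. Then

$$\frac{\text{benefit}}{\text{cost}} = \frac{N\,d(p)}{N\,p} = \frac{d(p)}{p},$$

which does not depend on $N$, so group size by itself gives no reason to concentrate stops on the larger group.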


How To Fund Prestige Science

How can we best promote scientific research? (I’ll use “science” broadly in this post.) In the usual formulation of the problem, we have money and status that we could distribute, and researchers have time and ability that they might apply. They know more than we do, but we aren’t sure who is how good, and they may care more about money and status than about achieving useful research. So we can’t just give things to anyone who claims they would use it to do useful science. What can we do? We actually have many options.


Non-Conformist Influence

Here is a simple model that suggests that non-conformists can have more influence than conformists.

Regarding a one dimensional choice $x$, let each person $i$ take a public position $x_i$, and let the perceived mean social consensus be $m = \sum_i w_i x_i$, where $w_i$ is the weight that person $i$ gets in the consensus. In choosing their public position $x_i$, person $i$ cares about getting close to both their personal ideal point $a_i$ and to the consensus $m$, via the utility function

$$U_i(x_i) = -c_i (x_i - a_i)^2 - (1 - c_i)(x_i - m)^2.$$

Here $c_i$ is person $i$’s non-conformity, i.e., their willingness to have their public position reflect their personal ideal point, relative to the social consensus. When each person simultaneously chooses their $x_i$ while knowing all of the $a_i, w_i, c_i$, the (Nash) equilibrium consensus is

$$m = \left[\sum_i \frac{w_i c_i a_i}{c_i + (1-c_i)(1-w_i)}\right] \left[1 - \sum_j \frac{w_j (1-c_j)(1-w_j)}{c_j + (1-c_j)(1-w_j)}\right]^{-1}.$$

If each $w_i \ll 1$, then the relative weight that each person gets in the consensus is roughly proportional to $w_i c_i$. So how much their ideal point $a_i$ counts is roughly proportional to their non-conformity $c_i$ times their weight $w_i$. So all else equal, non-conformists have more influence over the consensus.
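
As a sanity check on the algebra, here is a minimal numerical sketch (my own code and made-up parameters, with the weights normalized to sum to one): it evaluates the closed-form consensus above, confirms it by iterating best responses, and shows that the weight on each $a_i$ is roughly proportional to $w_i c_i$ when each $w_i$ is small.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                              # many people, so each w_i << 1
w = rng.random(n); w /= w.sum()      # consensus weights, summing to one
c = rng.random(n)                    # non-conformity levels in (0, 1)
a = rng.normal(size=n)               # personal ideal points

# Closed-form Nash consensus from the formula above.
D = c + (1 - c) * (1 - w)
m_formula = (w * c * a / D).sum() / (1 - (w * (1 - c) * (1 - w) / D).sum())

# Check by iterating best responses x_i = [c_i a_i + (1-c_i)(1-w_i) m] / D_i.
x = a.copy()
for _ in range(1000):
    m = w @ x
    x = (c * a + (1 - c) * (1 - w) * m) / D
print(m_formula, w @ x)              # these two should agree closely

# With small w_i, the consensus is close to a w_i c_i weighted mean of the a_i.
print((w * c * a).sum() / (w * c).sum())
```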

Now it is possible that others will reduce the weight $w_i$ that they give to non-conformists with high $c_i$ in the consensus. But this is hard when $c_i$ is hard to observe, and as long as this reduction is not fully (or more than fully) proportional to their increased non-conformity, non-conformists continue to have more influence.

It is also possible that extremists, who pick $x_i$ that deviate more from those of others, will be directly down-weighted. (This happens, for example, with the weights $w_i = k/|x_i - x_m|$ that produce a median $x_m$.) This makes more sense in the more plausible situation where $x_i, w_i$ are observable but $a_i, c_i$ are not. In this case, it is the moderate non-conformists, who happen to agree more with others, who have the most influence.

Note that there is already a sense in which, holding constant their weight $w_i$, an extremist has a disproportionate influence on the mean: a 10 percent change in the quantity $x_i - m$ changes the consensus mean $m$ twice as much when that quantity $x_i - m$ is twice as large.


Mars

A publicist recently emailed me: 

We are inviting select science and technology related press to view an early screening of Ron Howard and Brian Grazer’s MARS Season 2. The series premieres on November 12, however, we could email a screener to you then follow up with top interviews from the season. We’d just ask that you hold coverage until the week of Nov 7.

MARS is scripted, however, during each episodes, there are cut-aways to documentary style discussion by real scientists and thinkers who describe the reality of our endeavor to the red planet. The scripted aspect rigorously follows science and the latest in space travel technology.

Though I hadn’t heard of the show, I was flattered enough to accept this invitation. I have now watched both seasons, and today am allowed to give you my reactions. 

The branding by National Geographic, and the interleaving of fictional story with documentary interviews, both suggest a realistic story. Their “making of” episode also brags of realism. But while it is surely more realistic than most science fiction (alas, a low bar), it seemed to me substantially less realistic, and less entertaining, than the obvious comparison, the movie The Martian. The supposedly “rigorous” documentary parts don’t actually go into technical details (except in their extra “making of” episode); they just have big “Mars” names talking abstractly about emotional issues related to Mars colonization.  

As you might expect, the story contains way too many implausibly close calls. And others have pointed out technical inaccuracies. But let me focus on the economics.

First, they say near the end of the second season’s story that they have completed 22% of an orbiting mirror array, designed to melt the polar ice caps. From Wikipedia:

An estimated 120 MW-years of electrical energy would be required in order to produce mirrors large enough to vaporize the ice caps. … If all of this CO2 were put into the atmosphere, it would only double the current atmospheric pressure from 6 mbar to 12 mbar, amounting to about 1.2% of Earth’s mean sea level pressure. The amount of warming that could be produced today by putting even 100 mbar of CO2 into the atmosphere is small, roughly of order 10 K. (more)

From a recent NASA report:

There is not enough CO2 remaining on Mars to provide significant greenhouse warming were the gas to be put into the atmosphere; in addition, most of the CO2 gas is not accessible and could not be readily mobilized. As a result, terraforming Mars is not possible using present-day technology. (more)

These mirrors are supposedly made on Mars out of materials dug up there, and then launched into orbit. Yet we only seem to see a few dozen people living on Mars, they’ve only been there ten years, and we never meet anyone actually working on making and launching mirrors. Yet such a project would be enormous, requiring vast resources and personnel. I can’t see how this small group could have fielded so many mirrors so fast, nor can I see the cost being worth such modest and slow increases in pressure and temperature, especially during the early colonization period.  

There is almost no discussion of the basic economics of this crazy expensive colonization effort. The first launches are paid for by an International Mars Science Foundation (IMSF), initially run by a very rich guy said to have put 90% of his wealth into it. Is this all charity, or does he get a return if things go well? Later we see mostly nations around a governing table, and public opinion seems very important, as if nations were paying, mainly to gain prestige. But the scale of all this seems huge compared to other things nations do together for prestige. 

The second season starts with the arrival on Mars of a for-profit firm, Lukrum, run by greedy men on Mars and Earth, while good-hearted women now run the IMSF on Mars and Earth. Lukrum consistently breaks agreements, grabs anything it can, takes unjustified risks with everyone’s lives, and otherwise acts badly. Yet, strangely, IMSF as a customer is the only plausible source of future revenue for Lukrum. So how do they expect to get a return on their huge investment if they treat their only possible customer badly? Apparently their plan is to just lobby the governments behind IMSF to have IMSF pay them off. As if lobbying was typically a great general investment strategy (it isn’t). 

Thus the entire second season is mostly a morality play on the evils of greedy firms. The documentary parts make it clear that this is to be taken as a lesson for today on global warming and the environment; for-profit firms are just not to be trusted and must be firmly under the control of scientists or governments who cannot possibly be lobbied by the for-profit firms. Scientists and governments can be trusted, unless they are influenced by for-profit firms. The only reason to include firms in any venture is if they’ve used their money to buy political power that you can’t ignore, or if a project needs more resources than dumb voters are willing to pay for. (Obviously, they think, the best solution is to nationalize everything, but often dumb voters won’t approve that either.)

All this in a story that brags about its scientific accuracy, and that breaks for interviews with “experts.” But these are “experts” in Mars and environmental activism, not economics or political economy.

For the record, as an economist let me say that a plausible reason to include for-profit firms on Mars, and elsewhere, is that they often have better incentives to actually satisfy customers. Yes, that’s a problem on Mars, because other than governments seeking prestige, there are not likely to be enough customers on Mars to satisfy anytime soon, as almost anything desired is much cheaper to make here on Earth. This includes not just exotic places to visit or move, but protection against human extinction.

Yes, things can go badly when corruptible governments subcontract to for-profit firms who lobby them. But that’s hardly a good general reason to dislike for-profit firms. Governments who can be corrupted by lobbying are also generally corruptible and inept in many other ways. Having such governments spend vast sums on prestige projects to impress ignorant voters and foreigners is not generally a good way to get useful stuff done. 

By the way, I’ve also watched the first season of The First, another TV series on Mars colonization. So far the show doesn’t seem much interested in Mars or its related politics, econ, or tech, compared to the personal relationship dramas of its main characters. They have not at all explained why anyone is funding this Mars mission. I like its theme music though.


Avoiding Blame By Preventing Life

If morality is basically a package of norms, and if norms are systems for making people behave, then each individual’s main moral priority becomes: to avoid blame. While the norm system may be designed to on average produce good outcomes, when that system breaks then each individual has only weak incentives to fix it. They mainly seek to avoid blame according to the current broken system. In this post I’ll discuss an especially disturbing example, via a series of four hypothetical scenarios.

1. First, imagine we had a tech that could turn ordinary humans into productive zombies. Such zombies can still do most jobs effectively, but they no longer have feelings or an inner life, and from the outside they also seem dead inside, lacking passion, humor, and liveliness. Imagine that someone proposed to use this tech on a substantial fraction of the human population. That is, they propose to zombify those who do jobs that others see as boring, routine, and low status, like collecting garbage, cleaning bedpans, or sweeping floors. As in this scenario living people would be turned into dead zombies, this proposal would probably be widely seen as genocide, and soundly rejected.

2. Second, imagine someone else proposes the following variation: when a new child of a parent seems likely enough to grow up to take such a low status job, this zombie tech is applied very early to the fetus. So no non-zombie humans are killed, they are just prevented from existing. Zombie kids are able to learn, and eventually learn to do those low status jobs. Thus technically this is not genocide, though it could be seen as the extermination of a class. And many parents would suffer from losing their chance to raise lively humans. Whoever proposed all this is probably considered evil, and their proposal rejected.

3. Third, imagine combining this proposal with another tech that can reliably induce identical twins. This will allow the creation of extra zombie kids. That is, each birth to low status parents is now of identical twins, one of which is an ordinary kid, and the other is a zombie kid. If parents don’t want to raise zombie kids, some other organization will take over that task. So now the parents get to have all their usual lively kids, and the world gains a bunch of extra zombie kids who grow up to do low status jobs. Some may support this proposal, but surely many others will find it creepy. I expect that it would be pretty hard to create a political consensus to support this proposal.

While in the first scenario people were killed, and in the second scenario parents were deprived, this third scenario is designed to take away these problems. But this third proposal still has two remaining problems. First, if we have a choice between creating an empty zombie and a living feeling person who finds their life worth living, this second option seems to result in a better world. Which argues against zombies. Second, if zombies seem like monsters, supporters of this proposal might be blamed for creating monsters. And as the zombies look a lot like humans, many will see you as a bad person if you seem inclined to or capable of treating them badly. It looks bad to be willing to create a lower class, and to treat them like a disrespected lower class, if that lower class looks a lot like humans. So by supporting this third proposal, you risk being blamed.

4. My fourth and last scenario is designed to split apart these two problems with the third scenario, to make you choose which problem you care more about. Imagine that robots are going to take over most all human jobs, but that we have a choice about which kind of robot they are. We could choose human-like robots, who act lively with passion and humor, and who inside have feelings and an inner life. Or we could choose machine-like robots, who are empty inside and also look empty on the outside, without passion, humor, etc.

If you are focused on creating a better world, you’ll probably prefer the human-like robots, as that choice results in more creatures who find their lives worth living. But if you are focused on avoiding blame, you’ll probably prefer the machine-like robots, as few will blame you for that choice. In that case the creatures you create look so little like humans that few will blame you for creating such creatures, or for treating them badly.

I recently ran a 24 hour poll on Twitter about this choice, a poll to which 700 people responded. Of those who made a choice, 77% picked the machine-like robots.

Maybe my Twitter followers are unusual, but I doubt that a majority of a more representative poll would pick the human-like option. Instead, I think most people prefer the option that avoids personal blame, even if it makes for a worse world.


Long Views Are Coming

One useful way to think about the future is to ask what key future dates are coming, and then to think about in what order they may come, in what order we want them, and how we might influence that order. Such key dates include extinction, theory of everything found, innovation runs out, exponential growth slows down, and most bio humans unemployed. Many key dates are firsts: alien life or civilization found, world government founded, off-Earth self-sufficient colony, big nuclear war, immortal born, time machine made, cheap emulations, and robots that can cheaply replace most all human workers. In this post, I want to highlight another key date, one that is arguably as important as any of the above: the day when the dominant actors take a long view.

So far history can be seen as a fierce competition by various kinds of units (including organisms, genes, and cultures) to control the distant future. Yet while this has resulted in very subtle and sophisticated behavior, almost all this behavior is focused on the short term. We see this in machine learning systems; even when they are selected to achieve end-of-game outcomes, they much prefer to do this via current behaviors that react to current stimuli. It seems to just be much harder to successfully plan on longer timescales.

Animal predators and prey developed brains to plan over short sections of a chase or fight. Human foragers didn’t plan much longer than that, and it took a lot of cultural selection to get human farmers to plan on the scale of a year, e.g., to save grain for winter eating and spring seeds. Today human organizations can consistently manage modest plans on the scale of a few years, but we fail badly when we try much larger or longer plans.

Arguably, competition and evolution will continue to select for units capable of taking longer views. And so if competition continues for long enough, eventually our world should contain units that do care about the distant future, and are capable of planning effectively over long timescales. And eventually these units should have enough of a competitive advantage to dominate.

And this seems a very good thing! Arguably, the biggest thing that goes wrong in the world today is that we fail to take a long view. Because we fail to much consider the long run in our choices, we put a vast great future at risk, such as by tolerating avoidable existential risks. This will end once the dominant units take a long view. At that point there may be fights on which direction the future should take, and coordination failures may lead to bad outcomes, but at least the future will not be neglected.

The future not being neglected seems such a wonderfully good outcome that I’m tempted to call “Long View Day”, the day when this starts, one of the most important future dates. And working to hasten that day could be one of the most important things we can do to help the future. So I hereby call to action those who (say they) care about the distant future to help in this task.

A great feature of this task is that it doesn’t require great coordination; it is a “race to the top”. That is, it is in the interest of each cultural unit (nation, language, ethnicity, religion, city, firm, family, etc.) to figure out how to take effective long term views. So you can help the world by allying with a particular unit and helping it learn to take an effective long term view. You don’t have to choose between “selfishly” helping your unit, or helping the world as a whole.

One way to try to promote longer term views is to promote longer human lifespans. It’s not that clear to me this works, however, as even immortals can prioritize the short run. And extending lifespans is very hard. But it is a fine goal in any case.

A bad way to encourage long views is to just encourage the adoption of plans that advocates now claim are effective ways to help in the long run. After all, it seems that one of the main obstacles so far to taking long views is the typical low quality of long-term plans offered. Instead, we must work to make long term planning processes more reliable.

My guess is that a key problem is worse incentives and accountability for those who make long term predictions, and who propose and implement longer term plans. If your five year plan goes wrong, that could wreck your career, but you might make a nice long comfy career out of a fifty year plan that will later go wrong. So we need to devise and test new ways to create better incentives for long term predictions and plans.

You won’t be surprised to hear me say I think prediction markets have promise as a way to create better incentives and accountability. But we haven’t experimented that much with long-term prediction markets, and they have some potential special issues, so there’s a lot of work to do to explore this approach.
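
One such special issue (my illustration, not a claim from this post): if stakes just sit in escrow until a question resolves decades from now, the present value of being right is small. For example, at a 5% discount rate a $1,000 payout fifty years out is worth only about

$$\frac{\$1{,}000}{(1.05)^{50}} \approx \$87,$$

so long-horizon markets plausibly need stakes that earn a competitive return while escrowed, or some other design fix, to give traders real incentives.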

Once we find ways to make more reliable long term plans, we will still face the problem that organizations are typically under the control of humans, who seem to consistently act on short term views. In my Age of Em scenario, this could be solved by having slower ems control long term choices, as they would naturally have longer term views. Absent ems, we may want to experiment with different cultural contexts for familiar ordinary humans, to see which can induce such humans to prioritize the long term.

If we can’t find contexts that make ordinary humans take long term views, we may want to instead create organizations with longer term views. One approach would be to release enough of them from tight human controls, and subject them to selection pressures that reward better long term views. For example, evolutionary finance studies what investment organizations free to reinvest all their assets would look like if they were selected for their ability to grow assets well.

Some will object to the creation of powerful entities whose preferences disagree with those of familiar humans alive at the time. And I admit that gives me pause. But if taken strictly that attitude seems to require that the future always remain neglected, if ordinary humans discount the future. I fear that may be too high a price to pay.


Long Legacies And Fights In A Competitive Universe

My last post discussed how to influence the distant future, using a framework focused on a random uncaring universe. This is, for example, the usual framework of most who see themselves as future-oriented “effective altruists”. They see most people and institutions as not caring much about the distant future, and they themselves as unusual exceptions in three ways: 1) their unusual concern for the distant future, 2) their unusual degree of general utilitarian altruistic concern, and 3) their attention to careful reasoning on effectiveness.

If few care much or effectively about the distant future, then efforts to influence that distant future don’t much structure our world, and so one can assume that the world is structured pretty randomly compared to one’s desires and efforts to influence the distant future. For example, one need not be much concerned about the possibility that others have conflicting plans, or that they will actively try to undermine one’s plans. In that case the analysis style of my last post seems appropriate.

But it would be puzzling if such a framework were so appropriate. After all, the current world we see around us is the result of billions of years of fierce competition, a competition that can be seen as about controlling the future. In biological evolution, a fierce competition has selected species and organisms for their ability to make future organisms resemble them. More recently, within cultural evolution, cultural units (nations, languages, ethnicities, religions, regions, cities, firms, families, etc.) have been selected for their ability to make future cultural units resemble them. For example, empires have been selected for their ability to conquer neighboring regions, inducing local residents to resemble them more than they do conquered empires.

In a world of fierce competitors struggling to influence the future, it makes less sense for any one focal alliance of organism, genetic, and cultural units (“alliance” for short in the rest of this post) to assume a random uncaring universe. It instead makes more sense to ask who has been winning this contest lately, what strategies have been helping them, and what advantages this one alliance might have or could find soon to help in this competition. Competitors would search for any small edge to help them pull even a bit ahead of others, they’d look for ways to undermine rivals’ strategies, and they’d expect rivals to try to undermine their own strategies. As most alliances lose such competitions, one might be happy to find a strategy that allows one to merely stay even for a while. Yes, successful strategies sometimes have elements of altruism, but usually as ways to assert prestige or to achieve win-win coordination deals.

Furthermore, in a world of fiercely competing alliances, one might expect to have more success at future influence via joining and allying strongly with existing alliances, rather than by standing apart from them with largely independent efforts. In math there is often an equivalence between “maximize A given a constraint on B” and “maximize B given a constraint on A”, in the sense that both formulations give the same answers. In a related fashion, similar efforts to influence the future might be framed in either of two rather different ways:

  1. I’m fundamentally an altruist, trying to make the world better, though at times I choose to ally and compromise with particular available alliances.
  2. I’m fundamentally a loyal member/associate of my alliance, but I think that good ways to help it are to a) prevent the end of civilization, b) promote innovation and growth within my alliance, which indirectly helps the world grow, and c) have my alliance be seen as helping the world in a way which raises its status and reputation.

This second framing seems to have some big advantages. People who follow it may win the cooperation, support, and trust of many members of a large and powerful alliance. And such ties and supports may make it easier to become and stay motivated to continue such efforts. As I said in my last post, people seem much more motivated to join fights than to simply help the world overall. Our evolved inclinations to join alliances probably create this stronger motivation.

Of course if in fact most all substantial alliances today are actually severely neglecting the distant future, then yes it can make more sense to mostly ignore them when planning to influence the distant future, except for minor connections of convenience. But we need to ask: how strong is the evidence that in fact existing alliances greatly neglect the long run today? Yes, they typically fail to adopt policies that many advocates say would help in the long run, such as global warming mitigation. But others disagree on the value of such policies, and failures to act may also be due to failures to coordinate, rather than to a lack of concern about the long run.

Perhaps the strongest evidence of future neglect is that typical financial rates of return have long remained well above growth rates, strongly suggesting a direct discounting of future outcomes due to their distance in time. For example, these high rates of return are part of standard arguments that it will be cheaper to accommodate global warming later, rather than to prevent it today. Evolutionary finance gives us theories of what investing organizations would do when selected to take a long view, and it doesn’t match what we see very well. Wouldn’t an alliance with a long view take advantage of high rates of return to directly buy future influence on the cheap? Yes, individual humans today have to worry about limited lifespans and difficulties controlling future agents who spend their money. But these should be much less of an issue for larger cultural units. Why don’t today’s alliances save more?

Important related evidence comes from data on our largest longest-term known projects. Eight percent of global production is now spent on projects that cost over one billion dollars each. These projects tend to take many years, have consistent cost and time over-runs and benefit under-runs, and usually are net cost-benefit losers. I first heard about this from Freeman Dyson, in the “Fast is Beautiful” chapter of Infinite in All Directions. In Dyson’s experience, big slow projects are consistent losers, while fast experimentation often makes for big wins. Consider also the many large slow and failed attempts to aid poor nations.

Other related evidence: the time when a firm builds a new HQ tends to be a good time to sell its stock, futurists typically do badly at predicting important events even a few decades into the future, and the “rags to riches to rags in three generations” pattern shows that individuals who find ways to grow wealth don’t pass such habits on to their grandchildren.

A somewhat clear exception where alliances seem to pay short term costs to promote long run gains is in religious and ideological proselytizing. Cultural units do seem to go out of their way to indoctrinate the young, to preach to those who might convert, and to entrench prior converts into not leaving. Arguably, farming era alliances also attended to the long run when they promoted fertility and war.

So what theories do we have to explain this data? I can see three:

1) Genes Still Rule – We have good theory on why organisms that reproduce via sex discount the future. When your kids only share half of your genes, if you consider spending on yourself now versus on your kid one generation later, you discount future returns at roughly a factor of two per generation, which isn’t bad as an approximation to actual financial rates of return. So one simple theory is that even though cultural evolution happens much faster than genetic evolution, genes still remain in firm control of cultural evolution. Culture is a more effective way for genes to achieve their purposes, but genes still set time discounts, not culture.
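
To see why a factor of two per generation is in the right ballpark (my arithmetic, assuming a generation of roughly 25 years):

$$(1+r)^{25} = 2 \;\Rightarrow\; r = 2^{1/25} - 1 \approx 2.8\% \text{ per year},$$

which is roughly comparable to long-run real rates of return on many assets.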

2) Bad Human Reasoning – While humans are impressive actors when they can use trial and error to hone behaviors, their ability to reason abstractly but reliably to construct useful long term plans is terrible. Because of agency failures, cognitive biases, incentives to show off, excess far views, overconfidence, or something else, alliances learned long ago not to trust to human long term plans, or to accumulations of resources that humans could steal. Alliances have traditionally invested in proselytizing, fertility, prestige, and war because those gains are harder for agents to mismanage or steal via theft and big bad plans.

3) Cultures Learn Slowly – Cultures haven’t yet found good general purpose mechanisms for making long term plans. In particular, they don’t trust organized groups of humans to make and execute long term plans for them, or to hold assets for them. Cultures have instead experimented with many more specific ways to promote long term outcomes, and have only found successful versions in some areas. So they seem to act with longer term views in a few areas, but mostly have not yet managed to find ways to escape the domination of genes.

I lean toward this third, compromise theory. In my next post, I’ll discuss a dramatic prediction from all this, one that can greatly influence our long-term priorities. Can you guess what I will say?


Long Legacies And Fights In An Uncaring Universe

What can one do today to have a big predictable influence on the long-term future? In this post I’ll use a simple decision framework, wherein there is no game or competition, one is just trying to influence a random uncaring universe. I’ll summarize some points I’ve made before. In my next post I’ll switch to a game framework, where there is more competition to influence the future.

Most random actions fail badly at this goal. That is, most parameters are tied to some sort of physical, biological, or social equilibrium, where if you move a parameter away from its current setting, the world tends to push it back. Yes there are exceptions, where a push might “tip” the world to a new rather different equilibrium, but in spaces where most points are far from tipping points, such situations are rare.

There is, however, one robust way to have a big influence on the distant future: speed up or slow down innovation and growth. The extreme version of this is preventing or causing extinction; while quite hard to do, this has enormous impact. Setting that aside, as the world economy grows exponentially, any small change to its current level is magnified over time. For example, if one invents something new that lasts, then that future world is more able to make more inventions faster, etc. This magnification grows into the future until the point in time when growth rates must slow down, such as when the solar system fills up, or when innovations in physical devices run out. By speeding up growth, you can prevent the waste of all the negentropy that is and will continue to be destroyed until our descendants manage to wrest control of such processes.

Alas making roughly the same future happen sooner versus later doesn’t engage most people emotionally; they are much more interested in joining a “fight” over what character the future will take at any given size. One interesting way to take sides while still leveraging growth is to fund a long-lived organization that invests and saves its assets, and then later spends those assets to influence some side in a fight. The fact that investment rates of return have long exceeded growth rates suggests that one could achieve disproportionate influence in this way. Oddly, few seem to try this strategy.
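
To see the leverage involved (my illustrative numbers, not from the post): if saved assets earn a real return $r$ while the economy grows at rate $g$, the fund’s share of total resources grows like $((1+r)/(1+g))^t$. For instance,

$$\left(\frac{1.05}{1.02}\right)^{100} \approx 18,$$

so a century of patient compounding at a three percentage point spread multiplies a fund’s relative influence roughly eighteen-fold.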

Another way to leverage growth to influence future fights is via fertility: have more kids who themselves have more kids, etc. While this is clearly a time-tested strategy, we are in an era with a puzzling disinterest in fertility, even among those who claim to seek long-term influence.

Another way to join long-term fights is to add your weight to an agglomeration process whereby larger systems slowly gain over smaller ones. For example if the nations, cities, languages, and art genres with more participants tend to win over time, you can ally with one of these to help to tip the balance. Of course this influence only lasts as long as do these things. For example, if you push for short vs long hair in the current fashion change, that effect may only last until the next hair fashion cycle.

Pushing for the creation of a particular world government seems an extreme example of this agglomeration effect. A world government might last a very long time, and retain features from those who influenced its source and early structure.

One way to have more influence on fights is to influence systems that are plastic now but will become more rigid later. This is the logic behind persuading children while they are still ignorant and gullible, before they become ignorant and stubbornly unchanging adults. Similarly one might want to influence a young but growing firm or empire. This is also the logic behind trying to be involved in setting patterns and standards during the early days of a new technology. I remember hearing people say this explicitly back when Xanadu was trying to influence the future web. People who influenced the early structure of AM radio and FAX machines had a disproportionate influence, though such influence greatly declines when such systems themselves later decline.

The farming and industrial revolutions were periods of unusual high amounts of change, and we may encounter another such revolution in a century or so. If so, it might be worth saving and collecting resources in preparation for the extra influence available during this next great revolution.


Intellectual Status Isn’t That Different

In our world, we use many standard markers of status. These include personal connections with high status people and institutions, power, wealth, popularity, charisma, intelligence, eloquence, courage, athleticism, beauty, distinctive memorable personal styles, and participation in difficult achievements. We also use these same status markers for intellectuals, though specific fields favor specific variations. For example, in economics we favor complex game theory proofs and statistical analyses of expensive data as types of difficult achievements.

When the respected intellectuals for topic X tell the intellectual history of topic X, they usually talk about a sequence over time of positions, arguments, and insights. Particular people took positions and offered arguments (including about evidence), which taken together often resulted in insight that moved a field forward. Even if such histories do not say so directly, they give the strong impression that the people, positions, and arguments mentioned were selected for inclusion in the story because they were central to causing the field to move forward with insight. And since these mentioned people are usually the high status people in these fields, this gives the impression that the main way to gain status in these fields is to offer insight that produces progress; the implication is that correlations with other status markers are mainly due to other markers indicating who has an inclination and ability to create insight.

Long ago when I studied the history of science, I learned that these standard histories given by insiders are typically quite misleading. When historians carefully study the history of a topic area, and try to explain how opinions changed over time, they tend to credit different people, positions, and arguments. While standard histories tend to correctly describe the long term changes in overall positions, and the insights which contributed to those changes, they are more often wrong about which people and arguments caused such changes. Such histories tend to be especially wrong when they claim that a prominent figure was the first to take a position or make an argument. One can usually find lower status people who said basically the same things before. And high status accomplishments tend to be given more credit than they deserve in causing opinion change.

The obvious explanation for these errors is that we are hypocritical about what counts for status among intellectuals. We pretend that the point of intellectual fields is to produce intellectual progress, and to retain past progress in people who understand it. And as a result, we pretend that we assign status mainly based on such contributions. But in fact we mostly evaluate the status of intellectuals in the same way we evaluate most everyone, not changing our markers nearly as much as we pretend in each intellectual context. And since most of the things that contribute to status don’t strongly influence who actually offers positions and arguments that result in intellectual insight and progress, we can’t reasonably expect the people we tend to pick as high status to typically have been very central to such processes. But there’s enough complexity and ambiguity in intellectual histories to allow us to pretend that these people were very central.

What if we could make the real intellectual histories more visible, so that it became clearer who caused what changes via their positions, arguments, and insight? Well then fields would have the two usual choices for how to respond to hypocrisy exposed: raise their behaviors to meet their ideals, or lower their ideals to meet their behaviors. In the first case, the desire for status would drive much stronger efforts to actually produce insights that drive progress, making plausible much faster rates of progress. In this case it could well be worth spending half of all research budgets on historians to carefully track who contributed how much. The factor of two lost in all that spending on historians might be more than compensated by intellectuals focused much more strongly on producing real insight, instead of on the usual high-status-giving imitations.

Alas I don’t expect many actual funders of intellectual activity today to be tempted by this alternative, as they also care much more about achieving status, via affiliation with high status intellectuals, than they do about producing intellectual insight and progress.
