Author Archives: Robin Hanson

Replication Markets Team Seeks Journal Partners for Replication Trial

An open letter, from myself and a few colleagues:

Recent attempts to systematically replicate samples of published experiments in the social and behavioral sciences have revealed disappointingly low rates of replication. Many parties are discussing a wide range of options to address this problem.

Surveys and prediction markets have been shown to predict, at rates substantially better than random, which experiments will replicate. This suggests a simple strategy by which academic journals could increase the rate at which their published articles replicate. For each relevant submitted article, create a prediction market estimating its chance of replication, and use that estimate as one factor in deciding whether to publish that article.

We the Replication Markets Team seek academic journals to join us in a test of this strategy. We have been selected for an upcoming DARPA program to create prediction markets for several thousand scientific replication experiments, many of which could be based on articles submitted to your journal. Each market would predict the chance of an experiment replicating. Of the already-published experiments in the pool, approximately one in ten will be sampled randomly for replication. (Whether submitted papers could be included in the replication pool depends on other teams in the program.) Our past markets have averaged 70% accuracy; the work is listed at the Science Prediction Market Project page and has been published in Science, PNAS, and Royal Society Open Science.

While details are open to negotiation, our initial concept is that your journal would tell potential authors that you are favorably inclined toward experiment article submissions that are posted at our public archive of submitted articles. By posting their article, authors declare that they have submitted their article to some participating journal, though they need not say which one. You tell us when you get a qualifying submission, we quickly tell you the estimated chance of replication, and later you tell us of your final publication decision.

At this point in time we seek only an expression of substantial interest that we can take to DARPA and other teams. Details that may later be negotiated include what exactly counts as a replication, whether archived papers reveal author names, how fast we respond with our replication estimates, what fraction of your articles we actually attempt to replicate, and whether you privately give us any other quality indicators obtained in your reviews to assist in our statistical analysis.

Please RSVP to: Angela Cochran, PM acochran@replicationmarkets.com 571 225 1450

Sincerely, the Replication Markets Team

Thomas Pfeiffer (Massey University)
Yiling Chen, Yang Liu, and Haifeng Xu (Harvard University)
Anna Dreber Almenberg & Magnus Johannesson (Stockholm School of Economics)
Robin Hanson & Kathryn Laskey (George Mason University)

Added 2p: We plan to forecast ~8,000 replications over 3 years, ~2,000 within the first 15 months.  Of these, ~5-10% will be selected for an actual replication attempt.
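
To make concrete how a market estimate might enter a publication decision, here is a minimal sketch; the publish_decision function, weights, and thresholds are hypothetical illustrations, not the team's or any journal's actual procedure.

```python
# A minimal sketch of using the market's replication estimate as one factor in a
# publication decision. The function, weights, and thresholds are all hypothetical;
# nothing here is the Replication Markets Team's actual procedure.
def publish_decision(referee_score: float, market_p_replicate: float,
                     weight: float = 0.5, threshold: float = 0.6) -> bool:
    """Combine a referee quality score in [0, 1] with the market's estimated
    chance of replication in [0, 1] into a single accept/reject score."""
    combined = (1 - weight) * referee_score + weight * market_p_replicate
    return combined >= threshold

# Strong reviews, but the market gives the key result only a 35% chance of replicating:
print(publish_decision(referee_score=0.8, market_p_replicate=0.35))  # False: 0.575 < 0.6
```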

Umpires Shouldn’t Be On Teams

There are many complex issues to consider when choosing between public vs private provision of a good or service. But one issue seems to me to clearly favor the private option: rights. If you want to make rights-enforcing rules that are actually followed, you are better off having courts or regulators enforcing rules on a competitive private industry.

Consider this excellent 2015 AJPS paper:

Many regulatory policies—especially health, safety, and environmental regulations—apply to government agencies as well as private firms. … Unlike profit‐maximizing firms, government agencies face contested, ambiguous missions and are politically constrained from raising revenue to meet regulatory requirements. At the same time, agencies do not face direct competition from other firms, rarely face elimination, and may have sympathetic political allies. Consequently, the regulator’s usual array of enforcement instruments (e.g., fines, fees, and licensure) may be potent enough to alter behavior when the target is a private firm, but less effective when the regulated entity is a government agency. …

The ultimate effect of regulatory policy turns not on the regulator’s carrots and sticks, but rather on the regulated agency’s political costs of compliance with or appeal against the regulator, and the regulator’s political costs of penalizing another government. One implication of this theory is that public agencies are less likely than similarly situated private firms to comply with regulations. Another implication is that regulators are likely to enforce regulations less vigorously against public agencies than against private firms because such enforcement is both less effective and more costly to the regulator. …

We find that public agencies are more likely than private firms to violate the regulatory requirements of the [US] Clean Air Act and the Safe Drinking Water Act. Moreover, we find that regulators are less likely to impose severe punishment for noncompliance on public agencies than on private firms. (more)

See also:

There is evidence … that [public entities] are [better] able to delay or avoid paying fines when penalties are assessed. (more)

Public sector employees experienced a higher incidence rate of work-related injuries and illnesses than their private industry counterparts. (more)

I’ve tried but failed to find stats on public vs private relative rates of abuse, harassment, bribery, embezzlement, nepotism, and test cheating. (Can you find more?) But I’d bet they’d also show government agencies violating such rules at higher rates.

This perspective seems very relevant to criminal justice reform. Our status quo criminal justice system embodies enormous inefficiencies and injustices, but when I propose changes that involve larger roles for private actors, I keep hearing “yes that might be more efficient, but won’t private actors create more rights violations?” But the above analysis suggests that this gets the comparison exactly wrong!

Yes of course, if you compare a public org that has a rule with a private actor to whom no such rule applies, you may get more rule "violations" with the latter. And yes, enforcement of central rules can be expensive and limiting, so sometimes it makes sense to use private competition as a substitute for central rules, and so impose fewer rules on private actors. But once we allow ourselves to choose which rules to impose, private orgs seem overall better for enforcing rules.

Note that when a government agency directly contracts with a specific private organization, using complex flexible terms and monitoring, as in military procurement, the above theory predicts that this contractor will look much more like an extension of the government agency for the purpose of rule enforcement. Rule enforcement gains come instead from private orgs that compete to be chosen by the public, or that compete to win simple public prizes where public orgs do not have so much discretion over terms that they can pick winners, but get blamed for rights violations of losers.

It is these independent private actors that I seek to recruit to reform criminal justice. We will get more, not less, enforcement of rules that protect rights, when the umpires who enforce rights are less affiliated with the teams who can violate them.

Most Progress Not In Morals

Everyone without exception believes his own native customs, and the religion he was brought up in, to be the best. (Herodotus, 440 BC)

Over the eons, we humans have greatly increased our transportation abilities. Long ago, we mostly walked everywhere. Then over time, we accumulated more ways to move ourselves and goods faster, cheaper, and more reliably, from boats to horses to gondolas to spaceships. Today, for most points A and B, our total cost to move from A to B is orders of magnitude cheaper than it would be via walking.

Even so, walking remains an important part of our transport portfolio. While we are able to move people who can’t walk, such as via wheelchairs, that is expensive and limiting. Yet while walking still matters, improvements in walking have contributed little to our long term gains in transport abilities. Most gains came instead from other transport methods. Most walking gains even came from other areas. For example, we can now walk better due to better boots, lighting, route planners, and paved walkways. Our ability to walk without such aids has improved much less.

As with transport, so with many other areas of life. Our ancient human abilities still matter, but most gains over time have come from other improvements. This applies to both physical and social tech. That is, to our space-time arrangements of physical materials and objects, and also to our arrangements of human actions, info and incentives.

Social scientists often use the term “institutions” broadly to denote relatively stable components of social arrangements of actions, info, and incentives. Some of the earliest human institutions were language and social norms. We have modestly improved human languages, such as via expanded syntax forms and vocabulary. And over history humans have experimented with a great range of social norms, and also with new ways to enforce them, such as oaths, law, and CCTV.

We still rely greatly on social norms to manage small families, work groups, and friend groups. As with walking, while we could probably manage such groups in other ways, doing so would be expensive and limiting. So social norms still matter. But as with our walking, relatively little of our gains over time has come from improving our ancient institution of social norms.

When humans moved to new environments, such as marshes or arctic tundra, they had to adapt their generic walking methods to these new contexts. No doubt learning and innovation were involved in that process. Similarly, we no doubt continue to evolve our social norms and their methods of enforcement to deal with changing social contexts. Even so, social norm innovation seems a small part of total institutional innovation over the eons.

With walking, we seem well aware that walking innovation has only been a small part of total transport innovation. But we humans were built to at least pretend to care a lot about social norms. We consider opinions on and adherence to norms, and the shared values they support, to be central to saying who are “good” or “bad” people, and whom we count as “our people”. So we make norms central to our political fights. And we put great weight on norms when evaluating which societies are good, and whether the world has gotten better over time.

Thus each society tends to see its own origin, and the changes which led to its current norms, as enormously important and positive historical events. But if we stand outside any one society and consider the overall sweep of history, we can’t automatically count these as big contributions to long term innovation. After all, the next society is likely to change norms yet again. Most innovation is in accumulating improvements in all those other social institutions.

Now it is true that we have seen some consistent trends in attitudes and norms over the last few centuries. But wealth has also been rising, and having human attitudes be naturally conditional on wealth levels seems a much better explanation of this fact than the theory that after a million years of human evolution we suddenly learned how to learn about norms. Yes it is good to adapt norms to changing conditions, but as conditions will likely change yet again, we can’t count that as long term innovation.

In sum: most innovation comes in additions to basic human capacities, not in tweaks to those original capacities. Most transport innovation is not in improved ways to walk, and most social institution innovation is not in better social norms. Even if each society would like to tell itself otherwise. To help the future the most, search more for better institutions, less for better norms.

Tales of the Turing Church

My futurist friend Giulio Prisco has a new book: Tales of the Turing Church. In some ways, he is a reasonable skeptic:

I think all these things – molecular nanotechnology, radical life extension, the reanimation of cryonics patients, mind uploading, superintelligent AI and all that – will materialize one day, but not anytime soon. Probably (almost certainly if you ask me) after my time, and yours. … Biological immortality is unlikely to materialize anytime soon. … Mind uploading … is a better option for indefinite lifespans … I don’t buy the idea of a “post-scarcity” utopia. … I think technological resurrection will eventually be achieved, but … in … more like many thousands of years or more.

However, the core of Prisco’s book makes some very strong claims:

Future science and technology will permit playing with the building blocks of spacetime, matter, energy and life in ways that we could only call magic and supernatural today. Someday in the future, you and your loved ones will be resurrected by very advanced science and technology. Inconceivably advanced intelligences are out there among the stars. Even more God-like beings operate in the fabric of reality underneath spacetime, or beyond spacetime, and control the universe. Future science will allow us to find them, and become like them. Our descendants in the far future will join the community of God-like beings among the stars and beyond, and use transcendent technology to resurrect the dead and remake the universe. …

God exists, controls reality, will resurrect the dead and remake the universe. … Now you don’t have to fear death, and you can endure the temporary separation from your loved departed ones. … Future science and technology will validate and realize all the promises of religion. … God elevates love and compassion to the status of fundamental forces, key drivers for the evolution of the universe. … God is also watching you here and now, cares for you, and perhaps helps you now and then. … God has a perfectly good communication channel with us: our own inner voice.

Now I should note that he doesn’t endorse most specific religious dogma, just what religions have in common:

Many religions have really petty, extremely parochial aspects related to what and when one should eat or drink or what sex is allowed and with whom. I don’t care for this stuff at all. It isn’t even geography – it’s local zoning norms, often questionable, sometimes ugly. … [But] the common cores, the cosmological and mystical aspects of different religions, are similar or at least compatible. 

Even so, Prisco is making very strong claims. And in 339 pages, he has plenty of space to argue for them. But Prisco instead mostly uses his space to show just how many people across history have made similar claims, including folks associated with religion, futurism, and physics. Beyond this social proof, he seems content to say that physics can’t prove him wrong.

Toward An Honest Consensus

The original Star Trek series featured a smart computer that mostly only answered questions; humans made the key decisions. Near the start of Nick Chater’s book The Mind Is Flat, which I recently started, he says early AI visions were based on the idea of asking humans questions, and then coding their answers into a computer, which might then answer the same range of questions when asked. But to the surprise of most, typical human beliefs turned out to be much too unstable, unreliable, incoherent, and just plain absent to make this work. So AI research turned to other approaches.

Which makes sense. But I’m still inspired by that ancient vision of an explicit accessible shared repository of what we all know, even if that isn’t based on AI. This is the vision that to varying degrees inspired encyclopedias, libraries, internet search engines, prediction markets, and now, virtual assistants. How can we all coordinate to create and update an accessible shared consensus on important topics?

Yes, today our world contains many social institutions that, while serving other functions, also function to create and update a shared consensus. While we don’t all agree with such consensus, it is available as a decent first estimate for those who do not specialize in a topic, facilitating an intellectual division of labor.

For example: search engines, academia, news media, encyclopedias, courts/agencies, consultants, speculative markets, and polls/elections. In many of these institutions, one can ask questions, find closest existing answers, induce the creation of new answers, induce elaboration or updates of older answers, induce resolution of apparent inconsistencies between existing answers, and challenge existing answers with proposed replacements. Allowed questions often include meta questions such as origins of, translations of, confidence in, and expected future changes in, other questions.

These existing institutions, however, often seem weak and haphazard. They often offer poor and biased incentives, use different methods for rather similar topics, leave a lot of huge holes where no decent consensus is offered, and tolerate many inconsistencies in the answers provided by different parts. Which raises the obvious question: can we understand the advantages and disadvantages of existing methods in different contexts well enough to suggest which ones we should use more or less where, or to design better variations, ones that offer stronger incentives, lower costs, and wider scope and integration?

Of course computers could contribute to such new institutions, but they needn’t be the only or even main parts. And of course the idea here is to come up with design candidates to test first at small scales, scaling up only when results look promising. Design candidates will seem more promising if we can at least imagine using them more widely, and if they are based on theories that plausibly explain failings of existing institutions. And of course I’m not talking about pressuring people to follow a consensus, just to make a consensus available to those who want to use it.

As usual, a design proposal should roughly describe what acts each participant can do when, what they each know about what others have done, and what payoffs they each get for the main possible outcomes of typical actions. All in a way that is physically, computationally, and financially feasible. Of course we’d like a story about why equilibria of such a system are likely to produce accurate answers fast and at low cost, relative to other possible systems. And we may need to also satisfy hidden motives, the unacknowledged reasons for why people actually like existing institutions.
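
As one way to pin down those elements, here is a minimal sketch of the skeleton such a proposal might fill in; the class names and the toy example are my own illustration under these assumptions, not a worked-out design.

```python
# A minimal sketch of the elements a design proposal should pin down: who can act
# and when, what each actor knows about others' acts, and how outcomes map to
# payoffs. The names and the toy example below are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Role:
    name: str                   # e.g. "question asker", "answer contributor", "auditor"
    allowed_actions: List[str]  # what acts this participant can take, and when
    observes: List[str]         # what they know about what others have done

@dataclass
class InstitutionDesign:
    roles: List[Role]
    # outcome label -> payoff to each role for that outcome
    payoffs: Dict[str, Dict[str, float]] = field(default_factory=dict)
    feasibility_notes: str = ""  # physical, computational, and financial constraints

# Toy example: a question-answering market with rare random audits.
design = InstitutionDesign(
    roles=[
        Role("asker", ["post question", "post bounty"], ["all public answers"]),
        Role("answerer", ["post answer", "bet on answers"], ["bounty size", "current odds"]),
    ],
    payoffs={"answer survives a later audit": {"asker": -1.0, "answerer": +1.0}},
    feasibility_notes="audits are random and rare, so per-question cost stays low",
)
```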

I have lots of ideas for proposals I’d like the world to consider here. But I realized that perhaps I’ve neglected calling attention to the problem itself. So I’ve written this post in the hope of inspiring some of you with a challenge: can you help design (or test) new robust ways to create and update a social consensus?

Perpetual Motion Via Negative Matter?

One of the most important things we will ever learn about the universe is just how big it is, practically, for our purposes. In the last century we’ve learned that it is far larger than we knew, in a great many ways. At the moment we are pretty sure that it is about 13 billion years old, and that it seems much larger in spatial directions. We have decent estimates for both the total space-time volume we can ever see, and all that we can ever influence.

For each of these volumes, we also have decent estimates of the amount of ordinary matter they contain, how much entropy that now contains, and how much entropy it could create via nuclear reactions. We also have decent estimates of the amount of non-ordinary matter, and of the much larger amount of entropy that matter of all types could produce if collected into black holes.

In addition, we have plausible estimates of how (VERY) long it will take to actually use all that potential entropy. If you recall, matter and volume are what we need to make stuff, and potential entropy beyond current actual entropy (also known as “negentropy”) is the key resource needed to drive this stuff in desired directions. This includes both biological life and artificial machinery.

Probably the thing we most care about doing with all that stuff in the universe is creating and sustaining minds like ours. We know that this can be done via bodies and brains like ours, but it seems that far more minds could be supported via artificial computer hardware. However, we are pretty uncertain about how much computing power it takes (when done right) to support a mind like ours, and also about how much matter, volume, and entropy it takes (when done right) to produce any given amount of computing power.

For example, in computing theory we don’t even know if P=NP. We think this claim is false, but if true it seems that we can produce vastly more useful computation with any given amount of computing power, which probably means sustaining a lot more minds. Though I know of no concrete estimate of how many more.

It might seem that at least our physics estimates of available potential entropy are less uncertain than this, but I was recently reminded that we actually aren’t even sure that this amount is finite. That is, it might be that our universe has no upper limit to entropy. In which case, one could keep running physical processes (like computers) that increase entropy forever, creating proverbial “perpetual motion machines”. Some say that such machines are in conflict with thermodynamics, but that is only true if there’s a maximum entropy.

Yes, there’s a sense in which a spatially infinite universe has infinite entropy, but that’s not useful for running any one machine. Yes, if it were possible to perpetually create “baby universes”, then one might perpetually run a machine that can fit each time into the entrance from one universe into its descendant universe. But that may be a pretty severe machine size limit, and we don’t actually know that baby universes are possible. No, what I have in mind here is the possibility of negative mass, which might allow unbounded entropy even in a finite region of ordinary space-time.

Within the basic equations of Newtonian physics lie the potential for an exotic kind of matter: negative mass. Just let the mass of some particles be negative, and you’ll see that gravitationally the negative masses push away from each other, but are drawn toward the positive masses, which are drawn toward each other. Other forces can exist too, and in terms of dynamics, it’s all perfectly consistent.

Now today we formally attribute the Casimir effect to spatial regions filled with negative mass/energy, and we sometimes formally treat the absence of a material as another material (think of bubbles in water), and these often formally have negative mass. But other than these, we’ve so far not seen any material up close that acts locally like it has negative mass, and this has been a fine reason to ignore the possibility.

However, we’ve known for a while now that over 95% of the universe seems to be made of unknown stuff that we’ve never seen interact with any of the stuff around us, except via long distance gravity interactions. And most of that stuff seems to be a “dark energy” which can be thought of as having a negative mass/energy density. So negative mass particles seem a reasonable candidate to consider for this strange stuff. And the reason I thought about this possibility recently is that I came across this article by Jamie Farnes, and associated commentary. Farnes suggests negative mass particles may fill voids between galaxies, and crowd around galaxies compacting them, simultaneously explaining galaxy rotation curves and accelerating cosmic expansion.

Apparently, Einstein considered invoking negative mass particles to explain (what he thought was) the observed lack of cosmic expansion, before he switched to a more abstract explanation, which he dropped after cosmic expansion was observed. Some say that Farnes’s attempt to integrate negative mass into general relativity and quantum particle physics fails, and I have no opinion on that. Here I’ll just focus on simpler physics considerations, and presume that there must be some reasonable way to extend the concept of negative mass particles in those directions.

One of the first things one usually learns about negative mass is what happens in the simple scenario wherein two particles with exactly equal and opposite masses start off exactly at rest relative to one another, and have any force between them. In this scenario, these two particles accelerate together in the same direction, staying at the same relative distance, forevermore. This produces arbitrarily large velocities in simple Newtonian physics, and arbitrarily large absolute masses in relativistic physics. This seems a crazy result, and it probably put me off the negative mass idea when I first heard about it.
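
Here is a minimal 1D Newtonian sketch of that runaway pair, using gravity as the force and arbitrary units; it is my illustration of the textbook scenario, not anything from the simulations discussed next.

```python
# Newtonian gravity in 1D: one particle of mass +1 and one of mass -1, both starting
# at rest. Each particle's acceleration depends only on the OTHER particle's mass:
#   a_i = G * m_j * (x_j - x_i) / |x_j - x_i|**3
G = 1.0
m = [+1.0, -1.0]        # positive mass at x=0, negative mass at x=1
x = [0.0, 1.0]
v = [0.0, 0.0]
dt = 0.001

for _ in range(100_000):
    r = x[1] - x[0]
    a0 = G * m[1] * r / abs(r) ** 3      # positive mass is pushed away from the negative one
    a1 = G * m[0] * (-r) / abs(r) ** 3   # negative mass is pulled toward the positive one
    v[0] += a0 * dt; v[1] += a1 * dt
    x[0] += v[0] * dt; x[1] += v[1] * dt

# The pair keeps its separation while its common speed grows without bound.
print(f"separation = {x[1] - x[0]:.3f}, velocities = {v[0]:.1f}, {v[1]:.1f}")
```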

But this turns out to be an extremely unusual scenario for negative mass particles. Farnes did many computer simulations with thousands of gravitationally interacting negative and positive mass particles of exactly equal mass magnitudes. These simulations consistently “reach dynamic equilibrium” and “no runaway particles were detected”. So as a matter of practice, runaway seems quite rare, at least via gravity.

A related worry is that if there were a substantial coupling associated with making pairs of positive and negative mass particles that together satisfy the relevant conservation laws, such pairs would be created often, leading to a rapid and apparently unending expansion in total particle number. But the whole idea of dark stuff is that it only couples very weakly to ordinary matter. So if we are to explain dark stuff via negative mass particles, we can and should postulate no strong couplings that allow easy creation of pairs of positive and negative mass particles.

However, even if the postulate of negative mass particles were consistent with all of our observations of a stable pretty-empty universe (and of course that’s still a big if), the runaway mass pair scenario does at least weakly suggest that entropy may have no upper bound when negative masses are included. The stability we observe only suggests that current equilibrium is “metastable” in the sense of not quickly changing.

Metastability is already known to hold for black holes; merging available matter into a few huge black holes could vastly increase entropy, but that only happens naturally at a very slow rate. By making it happen faster, our descendants might greatly increase their currently available potential entropy. Similarly, our descendants might gain even more potential entropy by inducing interactions between mass and negative mass that would naturally be very rare.

That is, we don’t even know if potential entropy is finite, even within a finite volume. Learning that will be very big news, for good or bad.

Choose: Allies or Accuracy

Imagine that person A tells you something flattering or unflattering about person B. All else equal, this should move your opinion of B in the direction of A’s claim. But how far? If you care mainly about accuracy, you’ll want to take into account base rates on claimers A and targets B, as well as more specific signs on the accuracy of A regarding B.
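
For a sense of what taking such base rates into account could look like, here is a minimal Bayesian sketch; all the numbers are hypothetical.

```python
# A minimal Bayesian sketch (hypothetical numbers) of the accuracy-first update:
# how much an unflattering claim from A should move your estimate that B behaved badly.
prior_bad       = 0.10   # base rate: fraction of targets like B who behaved badly
p_claim_if_bad  = 0.60   # chance a claimer like A reports it when B really behaved badly
p_claim_if_good = 0.15   # chance of such a claim anyway (gossip, rivalry, error)

posterior_bad = (p_claim_if_bad * prior_bad) / (
    p_claim_if_bad * prior_bad + p_claim_if_good * (1 - prior_bad)
)
print(f"P(B behaved badly | A's claim) = {posterior_bad:.2f}")  # ~0.31, far from certainty
```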

But what if you care mainly about seeming loyal to your allies? Well if A is more of your ally than is B, as suggested by your listening now to A, then you’ll be more inclined to just believe A, no matter what. Perhaps if other allies give a different opinion, you’ll have to decide which of your allies to back. But if not, trying to be accurate on B mainly risks seeming disloyal to A and your other allies.

It seems that humans tend to just believe gossip like this, mostly ignoring signs of accuracy:

The trustworthiness of person-related information … can vary considerably, as in the case of gossip, rumors, lies, or “fake news.” …. Social–emotional information about the (im)moral behavior of previously unknown persons was verbally presented as trustworthy fact (e.g., “He bullied his apprentice”) or marked as untrustworthy gossip (by adding, e.g., allegedly), using verbal qualifiers that are frequently used in conversations, news, and social media to indicate the questionable trustworthiness of the information and as a precaution against wrong accusations. In Experiment 1, spontaneous likability, deliberate person judgments, and electrophysiological measures of emotional person evaluation were strongly influenced by negative information yet remarkably unaffected by the trustworthiness of the information. Experiment 2 replicated these findings and extended them to positive information. Our findings demonstrate a tendency for strong emotional evaluations and person judgments even when they are knowingly based on unclear evidence. (more; HT Rolf Degen)

I’ve toyed with the idea of independent juries to deal with Twitter mobs. Pay a random jury a modest amount to 1) read a fuller context and background on the participants, 2) talk a bit among themselves, and then 3) choose which side they declare as more reasonable. Sure sometimes the jury would hang, but often they could give a voice of reason that might otherwise be drowned out by loud participants. I’d have been willing to pay for this a few times. And once juries became a standard thing, we could lower costs via making prediction markets on jury verdicts if a case were randomly chosen for jury evaluation.
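
As a rough illustration of the cost savings, with made-up numbers: run a cheap market on every dispute, and convene a paid jury only for a small random sample, whose verdicts settle the markets.

```python
# Hypothetical cost comparison (my numbers, not from the post): judge every dispute
# with a paid jury vs. run a cheap market on every dispute and convene a jury only
# for a random 5% audit, paying jurors just for the audited cases.
n_disputes  = 10_000
jury_cost   = 500.0    # assumed payment per convened jury
market_cost = 5.0      # assumed overhead to run one market
audit_rate  = 0.05     # fraction of disputes randomly selected for a real jury

all_juries   = n_disputes * jury_cost
market_audit = n_disputes * market_cost + audit_rate * n_disputes * jury_cost

print(f"jury for every dispute:  ${all_juries:,.0f}")
print(f"markets + 5% jury audit: ${market_audit:,.0f}")
```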

But alas, I’m skeptical that most would care much about what an independent jury is estimated to say, or even about what it actually says. For that, they’d have to care more about truth than about showing support for allies.

The Aristillus Series

There’s a contradiction at the heart of science fiction. Science fiction tends to celebrate the engineers and other techies who are its main fans. But there are two conflicting ways to do this. One is to fill a story with credible technical details, details that matter to the plot, and celebrate characters who manage this detail well. The other approach is to present tech as the main cause of an impressive future world, and of big pivotal events in that world.

The conflict comes from it being hard to give credible technical details about an impressive future world, as we don’t know much about future tech. One can give lots of detail about current tech, but people aren’t very impressed with the world they live in (though they should be). Or one can make up detail about future tech, but that detail isn’t very credible.

A clever way to mitigate this conflict is to introduce one dramatic new tech, and then leave all other tech the same. (Vinge gave a classic example.) Here, readers can be impressed by how big a difference one new tech could make, and yet still revel in heroes who win in part by mastering familiar tech detail. Also, people like me who like to think about the social implications of tech can enjoy a relatively manageable task: guess how one big new tech would change an otherwise familiar world.

I recently enjoyed the science fiction book pair The Aristillus Series: Powers of the Earth, and Causes of Separation, by Travis J I Corcoran (@MorlockP), funded in part via Kickstarter, because it in part followed this strategy. Also, it depicts betting markets as playing a small part in spreading info about war details. In addition, while most novels push some sort of unrealistic moral theme, the theme here is at least relatively congenial to me: nice libertarians seek independence from a mean over-regulated Earth:

Earth in 2064 is politically corrupt and in economic decline. The Long Depression has dragged on for 56 years, and the Bureau of Sustainable Research is making sure that no new technologies disrupt the planned economy. Ten years ago a band of malcontents, dreamers, and libertarian radicals used a privately developed anti-gravity drive to equip obsolete and rusting sea-going cargo ships – and flew them to the moon. There, using real world tunnel-boring-machines and earth-moving equipment, they’ve built their own retreat.

The one big new tech here is anti-gravity, made cheaply from ordinary materials and constructible by ordinary people with common tools. One team figures it out, and for a long time no other team has any idea how to do it, or any remotely similar tech, and no one tries to improve it; it just is.

Attaching antigrav devices to simple refitted ocean-going ships, our heroes travel to the moon, set up a colony, and create a smuggling ring to transport people and stuff there. Aside from those magic antigravity devices, these books are chock-full of technical mastery of familiar tech not much beyond our level, like tunnel diggers, guns, space suits, bikes, rovers, crypto signatures, and computer software. These are shown to have awkward gritty tradeoffs, like most real tech does.

Alas, Corcoran messes this up a bit by adding two more magic techs: one superintelligent AI, and a few dozen smarter-than-human dogs. Oh and the same small group is implausibly responsible for saving all three magic techs from destruction. As with antigravity, in each case one team figures it out, no other team has any remotely similar tech, and no one tries to improve them. But these don’t actually matter that much to the story, and I can hope they will be cut if/when this is made into a movie.

The story begins roughly a decade after the moon colony started, when it has one hundred thousand or a million residents. (I heard conflicting figures at different points.) Compared to Earth folk, colonists are shown as enjoying as much product variety, and a higher standard of living. This is attributed to their lower regulation.

While Earth powers dislike the colony, they are depicted at first as being only rarely able to find and stop smugglers. But a year later, when thousands of ships try to fly to the moon all at once from thousands of secret locations around the planet, Earth powers are depicted as being able to find and shoot down 90% of them. Even though this should be harder when thousands fly at once. This change is never explained.

Even given the advantage of a freer economy, I find it pretty implausible that a colony could be built this big and fast with this level of variety and wealth, all with no funding beyond what colonists can carry. The moon is a long way from Earth, and it is a much harsher environment. For example, while colonists are said to have their own chip industry to avoid regulation embedded in Earth chips, the real chip industry has huge economies of scale that make it quite hard to serve only one million customers.

After they acquire antigrav tech, Earth powers go to war with the moon. As the Earth’s economy is roughly ten thousand times larger than the moon’s, without a huge tech advantage it is a mystery why anyone thinks the moon has any chance whatsoever of winning this war.

The biggest blunder, however, is that no one in the book imagines using antigrav tech on Earth. But if the cost to ship stuff to the moon using antigrav isn’t crazy high, then antigravity must make it far cheaper to ship stuff around on Earth. Antigrav could also make tall buildings cheaper, allowing much denser city centers. The profits to be gained from these applications seem far larger than from smuggling stuff to a small poor moon colony.

So even if we ignore the AI and smart dogs, this still isn’t a competent extrapolation of what happens if we add cheap antigravity to a world like ours. Which is too bad; that would be an interesting scenario to explore.

Added 5:30p: In the book, antigrav is only used to smuggle stuff to/from the moon, until it is used to send armies to the moon. But demand for smuggling should be far larger between places on Earth. In the book thousands of ordinary people are seen willing to make their own antigrav devices to migrate to the moon. But a larger number should be making such devices to smuggle stuff around on Earth.

When OK to Discriminate?

Two days ago I asked 8 related questions via Twitter. Here is one:

The rest of the questions made one of two changes. One change was to swap the type of choice from work/life to “producer (P) of a good or service to choose its customers (or price), or for a consumer (C) to choose from whom it buys”. The other change was to swap the choice basis from “political views or ideology” to “age”, “sex/gender”, or “race/ethnicity”. Here is the table of answer percentages (and total votes):

(Column “W not L” means “P not C” for relevant rows. Matching tweets, by table row #: 1,2,3,4,5,6,7,8.)

While the people who answered my poll are not a random sample of my nation or planet, I still think we can draw some tentative conclusions:

1) People are consistently more forgiving of discrimination in living spaces relative to work, and by consumers relative to producers. Almost no one is willing to allow it for work/producers while forbidding it for living/consumers.

2) Opinion varies a lot. Aside from the empty column just described, most other answers get substantial support. Though it seems few are against using age or sex/gender to choose whom you live with.

3) Some kinds of bases are more accepted than others. Support was weakest for discrimination using race/ethnicity, and strongest for using age.

4) There seems to be more support for treating work and living mates differently than for treating producers and consumers differently.

Of course we’d learn more from a large poll asking more specific questions.

Do I Offend?

The last eight months have seen four episodes where many people on Twitter called me a bad offensive person, often via rude profanity, sometimes calling for me to be fired or arrested. These four episodes were: sex inequality and redistribution, chances of a delayed harassment complaint, morality-induced overconfidence on historical counterfactuals, and implicit harassment in A Star Is Born. While these topics have occupied only a small fraction of my thought over these months, and a much smaller fraction over my career, they may have disproportionate effects on my reputation. So I’ve tried to pay close attention to the reasons people give. 

I think I see a consistent story. While in these cases I have not made moral, value, or political claims, when people read small parts of what I’ve claimed or asked, they say they can imagine someone writing those words for the purpose of promoting political views they dislike. And not just mild views that are just a bit on the other side of the political spectrum. No, they attribute to me the most extreme bad views imaginable, such as that I advocate rape, murder, slavery, and genocide. People say they are directly and emotionally traumatized by the offensive “creepy” feeling they get when they encounter someone with any prestige and audience seeming to publicly promote views with which they strongly disagree.

Some plausibly contributing factors here include my sometimes discussing sensitive topics, our increasing political polarization, the ease of making mobs and taking words out of context on Twitter, increasing ease of making new accusations similar to previous ones, and my terse and analytic writing style combined with my adding disclaimers re my allegiance to “correct” views. There’s also my following the standard poll practice of not telling those who answer polls the motives for those polls. And I’m a non-poor older white male associated with economics in general and GMU econ in particular; many see all these as indicators of bad political views. 

Digging a little deeper, trauma is plausibly increased by a poll format, which stokes fears that bad people will find out that they are not alone, and be encouraged to learn that many others share their views. I suspect this helps explain complaints that my poll population is not representative of my nation or planet.  

I also suspect bad faith. Long ago when I had two young kids, they would sometimes pick fights, for example on long car trips. One might start singing, to which the other would complain. We might agree that singing is too much for such a small space. Then the first might start to quietly hum, which we might decide is okay. Then the first might hum more loudly and triumphantly, while the second might writhe, cover their ears, and make a dramatic display of suffering.

Similarly, I suspect bad faith when some a) claim to experience “harassment” level suffering due to encountering political views with which they disagree, and yet are fine with high levels of sex, violence, and profanity in TV & movies, b) infer indirectly from my neutral analytical text that I promote the most extreme views imaginable, and c) do not notice that such claims are both a priori implausible and inconsistent with my large corpus of public writing; they either haven’t read much of it or purposely mischaracterize it. 

The idea of a large shared intellectual sphere wherein we can together analyze difficult topics holds a strong appeal to me. The main criteria for consideration in such a sphere should be the coherence and persuasiveness of specific relevant arguments. When evaluating each argument, there is usually little need to infer the distantly related positions of those who offer arguments. Usually an argument either works or it doesn’t, regardless of who says it or why.

I try to live up to such ideals in how I write and talk. I hope that many who read and follow me share these ideals, and I appreciate their support. I’m thus not favorably inclined toward suggestions that I stop discussing sensitive topics, or that I adopt a much more elaborate disclaimer style, or that I stop asking my followers questions, to prevent others from being traumatized by hearing their answers, or to keep followers from finding out that others share their opinions.

Added 29Dec: I did 4 follow-up polls to probe tendencies to take offense, focusing on the Nazi case. Respondents said the fraction of tweeters who actually wish Nazis had won WWII is tiny; 63% said it is <0.1%, though 4% gave >10%. And 79% said that this Nazi fraction is <3% among those “who mention ‘Nazis’ neutrally in a tweet, without explicitly praising or criticizing them, and who explicitly claim otherwise”, though 10% said >15%. Also, 58% said that for a tweet to be considered “offensive” or “harassment”, it would need to suggest a chance >50% that its author actually wishes Nazis had won WWII. However, 10% gave a threshold of <3% and 19% gave one <15%.

Finally, 43% gave a <3% “chance the author of a Twitter poll which asks about chance world would have been better off had Nazis won WWII, actually wishes that Nazis had won WWII”. However 20% gave a chance >50%, and 37% gave a chance >15%.

An obvious conclusion here is that, even among those who respond to my twitter polls, a substantial fraction have set hair-triggers for offense. For example, it seems >20% say that merely asking if the world would have been better off had Nazis won justifies a high enough chance of a Nazi author to count as offensive. Explicit denials may help, but if the offended are much more vocal than others, a vocal choir of objection seems largely inevitable.

This makes me wonder again if the “silent majority” might benefit from juries or polls which show them that the vocal offended are a minority. Though that minority will likely also express offense re such juries or polls.

Added 28Jan: A recent burst of outrage on the A Star Is Born episode confirms this account to some extent.
