Extremists hold extreme views, and struggle to persuade others of their views, or even to get them to engage with such views. Since most people are not extremists, you might think extremists focus mostly on persuading non-extremists. If so, they should have a common cause in getting ordinary people to think outside the usual boxes. They should want to join together to say that the usual views tend to gain from conformity pressures, and that such views are held overconfidently.
But in fact extremists don’t seem interested in joining together to support extremism. While each individual extremist tends to hold multiple extreme views, extremist groups go out of their way to distance themselves from other extremist groups. Not only do they often hate close cousins who they see as having betrayed their cause, they are also hostile to extremist groups on orthogonal topics.
This all makes sense if, as I’ve suggested, there are extremist personality types. Extremist groups have a better chance of attracting these types to their particular sort of extremism, relative to persuading ordinary folks to adopt extreme views.
This is our monthly place to discuss related topics that have not appeared in recent posts.
This Monday at 3:30p I talk on interstellar colonization at the Engineering Colloquium of NASA Goddard:
Attempts to model interstellar colonization may seem hopelessly compromised by uncertainties regarding the technologies and preferences of advanced civilizations. However, if light speed limits travel speeds and reliability limits travel distances, then a selection effect may eventually determine behavior at the colonization frontier. Making weak assumptions about colonization technology, I use this selection effect to predict colonists’ behavior, including which oases they colonize, how long they stay there, how many seeds they then launch, how fast and far those seeds fly, and how behavior changes with increasing congestion. This colonization model might explain some astrophysical puzzles, predicting lone oases like ours, amid large quiet regions with vast unused resources. (more here; here)
Hal Finney made 33 posts here on Overcoming Bias from ’06 to ’08. I’d known Hal long before that, starting on the Extropians mailing list in the early ‘90s, where Hal was one of the sharpest contributors. We’ve met in person, and Hal has given me thoughtful comments on some of my papers (including on this, this, & this). So I was surprised to learn from this article (key quotes below) that Hal is a plausible candidate for being (or being part of) the secretive Bitcoin founder, “Satoshi Nakamoto”.
Arguments for this conspiracy theory:
The arguments against this conspiracy theory:
The notion that Finney alone might have set up the two accounts and created a fake conversation with himself to throw off snoops like me, long before Bitcoin had any measurable value, seemed preposterous.
That last point seems pretty weak. We already know that the Bitcoin founder wants to be hidden. If Hal really created Bitcoin, he is plenty smart enough to think that Bitcoin might succeed, and to think of and implement the idea of creating fake conversations to cover his tracks. In this case Hal would also plausibly lie about his C++ skills, or maybe he got C++ help from someone else. In any case the probability of seeing those things, conditional on Hal actually being Nakamoto, seems pretty high.
It seems to me that the question comes down to your prior expectation: would the person who did such a careful expert job on something so hard be one of the few people in the field most known to be capable of, and to have actually done, such things, or would it be a new, largely unknown person? Thinking about it that way, I have to put a pretty large weight on it being someone known. And conditional on that it is hard for me not to think that yeah, there’s at least a 15% chance Hal was more involved than he’s said. And if so, my hat’s way off to you Hal!
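The Bayesian update implicit in this reasoning can be sketched with a few lines of arithmetic. All the numbers below are illustrative assumptions of mine, not figures from the post:

```python
# Hypothetical Bayesian update for the "Hal was involved" question.
# Every number here is a made-up placeholder, chosen only to show the shape
# of the calculation.

prior = 0.05  # assumed prior that a specific known expert like Hal did it

# Likelihood of the observed evidence (geographic proximity, early
# involvement, technical capability) under each hypothesis:
p_evidence_if_hal = 0.8   # such evidence is quite expected if Hal was involved
p_evidence_if_not = 0.1   # it is an unlikely coincidence otherwise

posterior = (prior * p_evidence_if_hal) / (
    prior * p_evidence_if_hal + (1 - prior) * p_evidence_if_not
)
print(f"posterior P(Hal involved | evidence) = {posterior:.3f}")
```

With these placeholder numbers the posterior lands near 30%, of the same order as the 15%-or-more figure above; the point is only that a modest prior on a known expert, combined with evidence much more likely under that hypothesis, yields a non-trivial posterior.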
But I also figure I’m not paying nearly as close attention to this bitcoin stuff as many others. Google doesn’t find me any other discussion of the Hal as Nakamoto theory, but surely if I wait a few weeks others who know more will weigh in, right? And since I can’t think of any actions of mine that depend on this issue, waiting is what I’ll do. Your move, internet.
Added 8a 26Mar: In the comments, Gwern points to further reasonable indicators against the Hal as Nakamoto theory. I accept his judgement.
Those promised quotes:
Friday I’ll appear on The Independents, which airs on Fox Business TV at 9pm EST, discussing “The Rise Of The Machines.”
Anders Sandberg has posted a nice paper, Monte Carlo model of brain emulation development, wherein he develops a simple statistical model of when brain emulations [= “WBE”] would be feasible, if they will ever be feasible:
The cumulative probability gives 50% chance for WBE (if it ever arrives) before 2059, with the 25% percentile in 2047 and the 75% percentile in 2074. WBE before 2030 looks very unlikely and only 10% likely before 2040.
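Sandberg’s actual model is more detailed, but the flavor of a Monte Carlo estimate like this can be sketched as follows. The milestones, distributions, and parameters below are placeholders of mine, not his:

```python
import random

random.seed(0)

def sample_wbe_year():
    # Placeholder model: WBE needs sufficient scanning resolution, enough
    # neuroscience understanding, and cheap enough computing. Each milestone
    # year is drawn from a rough normal distribution; WBE arrives only when
    # the last milestone is reached.
    scan = random.gauss(2045, 10)
    neuro = random.gauss(2050, 12)
    compute = random.gauss(2040, 8)
    return max(scan, neuro, compute)

years = sorted(sample_wbe_year() for _ in range(100_000))
for pct in (25, 50, 75):
    print(pct, round(years[int(len(years) * pct / 100)]))
```

Taking the max of several uncertain milestones is what pushes the median arrival date later than any single milestone’s median, and fattens the right tail, which is the qualitative pattern Sandberg reports.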
My main complaint is that Sandberg assumes a functional form for the cost of computing vs. time that requires this cost to soon fall to an absolute floor, below which it will never fall, relative to the funding ever available for a brain emulation project. His resulting distribution has costs approaching this floor by about 2040.
As a result, Sandberg finds a big chance (how big he doesn’t say) that brain emulations will never be possible – for eons to follow it will always be cheaper to compute new mind states via floppy proteins in huge messy bio systems born in wombs, than to compute them via artificial devices made in factories.
That seems crazy implausible to me. I can see physical limits to physical parameters, and I can see the rate at which computing costs fall slowing down. But having the costs of artificial computing soon stop falling forever is much harder to see, especially with such costs remaining far higher than the costs of natural bio devices that seem pretty far from optimized. And having the amount of money available to fund a project never grow seems to say that economic growth will halt as well.
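The objection can be made concrete by comparing the two functional forms. With a hard floor above the project budget, a project is never affordable no matter how long you wait; without the floor, even the same rate of decline eventually crosses any fixed budget. The numbers below are arbitrary illustrations, not estimates:

```python
def cost_with_floor(year, floor=1e6, a=1e12, halving=2.0):
    # Sandberg-style form (illustrative): exponential decline toward an
    # absolute floor that the cost never falls below.
    return floor + a * 0.5 ** ((year - 2014) / halving)

def cost_no_floor(year, a=1e12, halving=2.0):
    # Alternative form: the decline simply continues (a slowing rate could
    # be added without changing the qualitative conclusion).
    return a * 0.5 ** ((year - 2014) / halving)

budget = 1e5  # fixed project budget, in the same arbitrary units
for year in (2030, 2040, 2060, 2100):
    print(year,
          cost_with_floor(year) <= budget,   # floor > budget: never affordable
          cost_no_floor(year) <= budget)     # eventually affordable
```

Since the assumed floor (1e6) exceeds the budget (1e5), the floored model says "never feasible" forever, while the floorless model crosses the budget within a few more halvings. The substantive question is thus entirely about whether such a floor exists above the relevant budget, which is exactly the assumption being challenged.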
Even so, I applaud Sandberg for his efforts so far, and hope that his or others’ successor models will be more economically plausible. It is an important question, worthy of this and more attention.
In a column, Andrew Gelman and Eric Loken note that academia has a problem:
Unfortunately, statistics—and the scientific process more generally—often seems to be used more as a way of laundering uncertainty, processing data until researchers and consumers of research can feel safe acting as if various scientific hypotheses are unquestionably true.
They consider prediction markets as a solution, but largely reject them for reasons both bad and not so bad. I’ll respond here to their article in unusual detail. First the bad:
Would prediction markets (or something like them) help? It’s hard to imagine them working out in practice. Indeed, the housing crisis was magnified by rampant speculation in derivatives that led to a multiplier effect.
Yes, speculative market estimates were mistaken there, as were most other sources, and mistaken estimates caused bad decisions. But speculative markets were the first credible source to correct the mistake, and no other stable source had consistently more accurate estimates. Why should the most accurate source be blamed for mistakes made by all sources?
Allowing people to bet on the failure of other people’s experiments just invites corruption, and the last thing social psychologists want to worry about is a point-shaving scandal.
What about letting researchers who compete for grants, jobs, and publications write critical referee reports and publish criticism? Doesn’t that invite corruption too? If you are going to forbid all conflicts of interest because they invite corruption, you won’t have much left you will allow. Surely you need to argue that bet incentives are more corrupting than other incentives.
Imagine that this weekend you and others will volunteer time to help tend the grounds at some large site – you’ll trim bushes, pull weeds, plant bulbs, etc. You might have two reasons for doing this. First, you might care about the cause of the site. The site might hold an orphanage, or a historical building. Second, you might want to socialize with others going to the same event, to reinforce old connections and to make new ones.
Imagine that instead of being assigned to work in particular areas, each person was free to choose where on the site to work. These different motives for being there are likely to reveal themselves in where people spend their time grounds-tending. The more that someone wants to socialize, the more they will work near where others are working, so that they can chat while they work, and while taking breaks from work. Socializing workers will tend to clump together.
On the other hand, the more someone cares about the cause itself, the more they will look for places that others have neglected, so that their efforts can create maximal value. These will tend to be places away from where socially-motivated workers are clumped. Volunteers who want more to socialize will tend more to clump, while volunteers who want more to help will tend more to spread out.
This same pattern should also apply to conversation topics. If your main reason for talking is to socialize, you’ll want to talk about whatever everyone else is talking about. Like say the missing Malaysia Airlines plane. But if instead your purpose is to gain and spread useful insight, so that we can all understand more about things that matter, you’ll want to look for relatively neglected topics. You’ll seek topics that are important and yet little discussed, where more discussion seems likely to result in progress, and where you and your fellow discussants have a comparative advantage of expertise.
You can use this clue to help infer the conversation motives of the people you talk with, and of yourself. I expect you’ll find that almost everyone cares mainly about talking to socialize, rather than about gaining insight.