This is our monthly place to discuss relevant topics that have not appeared in recent posts.
In my understanding of Futarchy, there seems to be a potential “credit assignment” problem. Thought experiment:
Imagine the existence of X that, through a well-understood mechanism, lowers the national welfare by some small epsilon every year. Futarchy would allow conditional contracts to estimate “National welfare given that we ban X” and “National welfare given that we don’t ban X,” but since the harm to national welfare is so small, it’s going to get swamped by other effects and there is going to be virtually no difference in these market prices. Under futarchy, X will not get banned.
I think there’s a “patch” in that, if X was sufficiently well understood, you could just put “ban X” into the definition of national welfare, but this seems like a potentially nasty kludge. Is there a cleaner solution?
See #16 at http://hanson.gmu.edu/futarchy2013.pdf
Ah, don’t know how I missed that. Guess that’s what I get for skimming a paper.
Another prediction market related question. For exponentially large outcome-spaces, would it be reasonable to use a parimutuel market on which option will be chosen to generate a shortlist of options, and use LMSR on that shortlist for making the final decision? Or are most of the benefits of a modular LMSR market lost?
Have you ever noticed that the FDA, in its own words, is literally not willing to officially commit to the well-accepted belief that Vitamin C can prevent scurvy, that Vitamin D can prevent rickets, etc? Just look on your vitamin bottle(s). And, in addition, the FDA confirms on its own website that “the law says that if a dietary supplement label includes [claims]… it must state in a “disclaimer” that FDA has not evaluated this claim.
The disclaimer must also state that this product is not intended to ‘diagnose, treat, cure or prevent any disease,’ because only a drug can
legally make such a claim.” Huh? Only a drug can prevent a disease? Really? But: (1) scurvy and rickets are surely genuine “diseases,” (2) vitamins C and D can, respectively, surely prevent (and sometimes cure) them, whether or not you want to call a vitamin a “drug.” But the FDA *insists* that vitamin producers state on their labels that: (a) the FDA has NOT evaluated the “claim,” and (b) “this product is not intended to
“diagnose, treat, cure or prevent any disease.” Q1: Why hasn’t the FDA yet “evaluated” the extremely non-controversial claim that vitamin C can prevent scurvy? According to some sources, the linkage between preventing scurvy and vitamin C was noticed indirectly hundreds of years ago, and then scientifically confirmed more than 100 years ago. Q2: Does any reputable scientist at the FDA have the slightest doubt about the linkage between scurvy and vitamin C deficiency? Well OK, no, I don’t think they actually have any doubt. Rather, they are either too lazy and/or too cowardly to take a stand on even the most trivially non-controversial of matters; scientists at the FDA have evidently been cowed into silence on all such matters by their massively overwhelming army of lawyers and bureaucrats. And this, my friends, is just one example of your federal tax dollars at work. Q3: If the FDA is so timid that it refuses to officially take a stand one way or the other about whether a vitamin (e.g., vitamin C) can “cure or prevent” a disease (e.g., scurvy), then why should you, a taxpayer, trust the FDA on ANY other subject?
For Q3, if you know which incentives are in place in the system, you can at least make some guesses about what actions those incentives will lead to. So the FDA approving a drug doesn’t convey zero information, but of course it must be judged by what they’re paid to do, not what they say they do.
A quote from the FDA site:
“Nutrient deficiency disease claims describe a benefit related to a nutrient deficiency disease (like vitamin C and scurvy), but such claims are allowed only if they also say how widespread such a disease is in the United States.”
( http://www.fda.gov/Food/IngredientsPackagingLabeling/LabelingNutrition/ucm2006881.htm )
Isn’t it just a simple matter of having to draw lines somewhere? Clearly any essential nutrient can prevent or cure the illness that results from not ingesting enough of that nutrient. At the same time, there has to be some workable definition of what a drug is, i.e. the stuff the FDA is there to regulate, and vitamins don’t really fit that idea, nor would labeling vitamins as “drugs” be very practical when you just want to buy your fruits and vegetables at the grocery store. It seems to me you have an unrealistic expectation of the un-arbitrariness of definitions such as “drug,” but that’s not uncommon; in fact the FDA has to promote that expectation to enforce its authority, because so many people are only comfortable (or capable of) thinking in absolutes. You might as well ask why one needs a driver’s license for a motorcycle but not for an electric bicycle.
Or another example: why is the voting age 18 instead of 17 (and if it were 17 you’d be asking why it wasn’t 16 and so on…)? Because we agree babies can’t vote and adults can and therefore you have to draw a line somewhere. The age of 18 seems to be a popular consensus that also falls within the age-range where scientists found relatively large leaps toward physical maturity.
I think the prohibitions regarding claiming Vitamin C prevents scurvy are probably based on countering a likely manner of fraud. You can see manufacturers advertising, “Get your Vitamin C and avoid scurvy,” implying a risk of contracting the disease that doesn’t exist.
Yes, that’s certainly part of it, and a reason why vitamins don’t really fit the common idea of a drug, even though, if you had to conjure up some legal definition, they might technically fall under it.
Interestingly, my grandson’s tricycle did not come with the following warning: “The Department of Transportation has not evaluated claims made for this product. This vehicle is not intended to carry, convey, or transport any child.”
It takes an 11.1% gain to offset a 10% loss.
It takes a 50% gain to offset a 33.3% loss.
It takes a 100% gain to offset a 50% loss.
It takes a near-infinite gain to offset a (100 minus epsilon)% loss.
The probability of any of these gains occurring is much lower than the probability of the corresponding losses. Knowing these numbers, why is loss aversion still listed as a cognitive bias? Isn’t it simply rational?
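The arithmetic above follows from a simple identity: to recover a fractional loss L you need a gain of L / (1 − L). A minimal sketch (the function name is my own):

```python
def gain_to_offset(loss_fraction):
    """Fractional gain needed to recover a fractional loss.

    Wealth w falls to w * (1 - L); multiplying by
    1 + L / (1 - L) = 1 / (1 - L) restores w exactly.
    """
    return loss_fraction / (1.0 - loss_fraction)

for loss in (0.10, 1 / 3, 0.50, 0.99):
    print(f"a {loss:.1%} loss needs a {gain_to_offset(loss):.1%} gain")
```

As the loss approaches 100%, the required gain diverges, matching the last line of the list above.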
In absolute terms, the gain needed to offset a loss is the same size as that loss. “Natural” fluctuations of a stochastic system usually show equal probabilities for gains and losses, so no, in many systems an 11.1% gain is not less probable than a 10% loss. In fact, if a system has been perturbed downwards from its “natural” value, regression toward the mean can make a gain more likely than another loss. Pretty much any system with some self-correcting mechanism behaves this way; this is usually the case in economic matters, such as the stock market, but also in the weather. Examples where what you say holds true would be gambling (what you win or lose is proportional to the ante) or combat (if half of your hunters get killed by wolves, it becomes four times harder to defeat the original number of wolves). It’s possible that our instincts evolved on the latter class of systems, leaving us with a cognitive bias when dealing with the former class.
It is a controversial statement that a stock market is mean-reverting. It is closer to a geometric random walk with drift.
Over long periods of time there is certainly drift, upwards, which only adds to the probability of gains. As far as I’m aware, a mean-reverting random walk model is the standard model of stock markets; it differs from a normal random walk in that increasing steps away from the average meet more and more resistance, as opposed to the normal random walk model where a step forward and a step backward always have equal probability. I suppose you could factor in another term that models a threshold value beyond which the stock market collapses (rats abandoning the ship after a very bad time, or the bursting of a bubble after a “too good” time); I don’t know if that’s generally done or not. In any case, the only thing about the model that’s important to blogospheroid’s question is the idea that, within limits, the stock market bounces back to certain values, so overall gains are not less likely than losses of the same absolute size, just like temperature or precipitation measurements in meteorology.
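For concreteness, here is a toy simulation of the mean-reverting idea, using an AR(1) process (my own illustrative choice of model and parameters, not a claim about actual markets): after a downward perturbation, the expected next step points back toward the mean, so a gain is more likely than a further loss.

```python
import random

# AR(1) mean-reverting process: x_t = mu + phi * (x_{t-1} - mu) + noise,
# with 0 < phi < 1 so deviations from mu decay over time.
def ar1_step(x, mu=100.0, phi=0.9, sigma=1.0, rng=random):
    return mu + phi * (x - mu) + rng.gauss(0.0, sigma)

rng = random.Random(0)
x = 90.0  # perturbed well below the long-run mean of 100
trials = 10_000
ups = sum(ar1_step(x, rng=rng) > x for _ in range(trials))
print(f"fraction of upward next-steps from x=90: {ups / trials:.2f}")
```

With these parameters the expected next value is 91, so roughly 84% of next steps are gains; a pure random walk (phi = 1) would give 50%.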
Risk aversion is not the same thing as loss-aversion.
This is a quirk of using arithmetic rates of return. If you measure returns logarithmically, then you won’t notice this.
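To make the point concrete: in log terms, the 10% loss and the 11.1% gain that undoes it have the same magnitude with opposite signs. A quick check:

```python
import math

down = math.log(1 - 0.10)       # log return of a 10% loss
up = math.log(1 / (1 - 0.10))   # log return of the offsetting ~11.1% gain
print(down, up)                 # same magnitude, opposite sign
```

Since log returns add across periods, the two moves sum to exactly zero and the apparent asymmetry disappears.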
Oh, I’ve got open thread comments!
First, too bad about Noah Smith. I watched the episode unravel in disbelief. He owes you a huge public apology (but then you might say he was just signaling!)
Second, I don’t get it. You have signed up for cryonics but don’t seem interested at all in what is happening with longevity pills (over ten of them) that are in stage II trials. Why?
And I’ll take my answer off the air…
Given the long track record of overhyped medical claims, I’m skeptical of the latest pills, until shown otherwise.
Can you provide some examples of hyped health pills from the past? I can’t think of a single one. What do you know that David Sinclair, Lenny Guarente, Brian Kennedy, Linda Partridge and Cynthia Kenyon don’t?
What longevity pills?
Various paths to life extension (or not) seem replete w/ tradeoffs at the individual, family, & societal levels. There is a spectrum of degrees and approaches. There’s great uncertainty as to means and outcomes, costs & benefits of very different kinds, & all manner of wild (and not-so-wild) cards which potentially bear on the matter. It seems complex. Does anyone have pointers to informed, balanced treatments of this topic? Thanks.
Whether it is physically possible to travel to other stars is less important than whether our social system evolves enough to let us. Call it feudalism like David Brin, call it Moloch like Scott Alexander, but maybe civilizations made of dumb self-interested beings never overcome it.
So this post struck me as flawed:
Hanson, you need to talk more about: (1) inventor rights to their invention separate and apart from employer rights (akin to Japanese Article 35 and the blue laser litigation, and akin to employee ‘right to work’ laws here in the USA regarding covenants not to compete, as AlexT has written about), and, (2) why proposed bills to give innocent infringers of patent works “shop rights”, akin to what you discussed with AlexT last year, have failed before Congress, and, (3) whether you think, long term, the supply of innovation is fixed (the current consensus) or variable, that is, would people respond to incentives and invent more, if we strengthen the patent system, or pretty much is it, as historically seems to be the rule, that inventors invent for the love of it, and not for the money (as per nearly every Nobelist, but pace people like Tom Edison).
In this spirit of Robin’s desire to see more wholehearted study of the future, let’s compare two depictions from recent media of Earth society upon the cusp of interplanetary colonization.
In 1999, Sid Meier’s Alpha Centauri (SMAC) depicted “factions” competing for resources on the newly-colonized planet. (I’m forgetting the back story, but I think they were on the same ship and only splintered into separate colonies during the journey.) These factions are *not* aligned with early 21st century nation states; rather they are split upon ideological lines. They’re caricatures, but interesting caricatures, of environmentalism, corporatism, internationalism, fundamentalism, militarism, technocracy, etc.
In 2014, Sid Meier’s Civilization: Beyond Earth (BE) was released with (IMO) much less interesting “Sponsors” [presumed to have paid for the interplanetary colonial vessels]. These are given certain in-game bonuses that align them to particular play styles. But on the level of “story” they are simple derivatives of existing 21st century nation states or slightly larger geographical aggregates. To wit, these comprise (a corporatized version of the) USA, China, India, Africa, France/Spain, Brazil, Australia, Russia.
My question for the audience is: which of these scenarios is more plausible for the relevant period in the future, and why?
I enjoyed SMAC much more, but the BE factions seem more realistic: I doubt geography will be subsumed by ideology in a few hundred years. Also the particular geographic aggregates seem realistic. It also implies an “African Renaissance” period that results in a more politically unified African continent, a scenario that I think must be common in sci-fi and has a certain plausibility.
On the other hand, it’s also plausible that within-state (and supra-state-level) power struggles between ideologues of different stripes dominate our political discourse much more today (relative to inter-state power struggles) than was the case 200 years ago. And on a broader historical level, examples like waves of religious conversion and democratization suggest that ideology can trump nationalism on long horizons. So perhaps SMAC’s factions are not ridiculous. And they are so much more fun.
(Apologies if this has come up before. In fact, I’m pretty confident it’s been discussed to death in some thread on Less Wrong, but I don’t know where, and I’m particularly curious about Robin’s take.)
An interesting example of ems in popular culture in the latest Black Mirror. http://www.channel4.com/programmes/black-mirror/on-demand/60121-001
The first em is used in a rather understandable way, automating household chores that are a matter of detailed preference. The second is interrogated, taking advantage of time dilation, which seems a possible but less reasonable use of the technology.
Alas I can’t get it to play on my mac.
Maybe a geographic lock as it’s a UK show?
… be a charity angel.