Monthly Archives: July 2012

Far Mode Overly Praised

Roughly speaking, in near mode we focus practically on acting in our local situation, while in far mode we talk about how people in general should act in more ideal socially approved ways. So while we should expect near mode to usually be the best mode for practical purposes, we should also expect social discussions of near-far to celebrate far mode. For example, an article, “Psychological Distance: 10 Fascinating Effects of a Simple Mind Hack,” gives nine advantages of being in far mode:

  1. Make challenging tasks seem easier
  2. Generate self-insight
  3. Become more persuasive
  4. Gain emotional self-control
  …
  6. Be true to yourself
  7. Become more polite
  8. Fire your creativity
  9. Improve your self-control
  10. Trigger wise thoughts

And only one disadvantage:

5. Beware the illusion of explanatory depth!

A similar idealistic distortion is found in this article, which suggests we let our minds wander because wandering minds are more creative:

In one study, volunteers had to read extracts of Leo Tolstoy’s War and Peace. … People’s minds wandered from the words for more than 20 per cent of the time. … A recent study asking people to report their state of mind at random intervals during the day – via a smartphone app – showed that their attention was wandering from the task at hand a whopping 47 per cent of the time. …

For a long time, … the ability to filter out distractions and focus on a task – dubbed executive control – was considered to lie behind smart thinking. … A host of studies have shown that people who can focus well tend to ace analytical problems: they are whizzes at arithmetic and verbal reasoning tasks, and often have a higher IQ. … Yet … while people with a high level of working memory are good at analytical problems, they tend to struggle on tasks that require flashes of inspiration. …. Various studies show that people with high working-memory capacity, and therefore good executive control, can find it more difficult to solve these problems than people who are more easily distracted. …

All participants were asked to take another crack at the [creativity] task. … Those whose minds had been wandering came up with, on average, 40 per cent more answers. … [Researchers] studied people who had written a published novel, patented an invention or had art shown at a gallery. In computer tests that required participants to screen out irrelevant information – latent inhibition tests – she found these high-achievers were less likely to disregard inconsequential details and focus on the task, compared with an average person. In other words, their minds more frequently wandered from the task at hand. …

Instead of forcing yourself to concentrate, the best approach when a deadline looms may be to loosen your grip and take a quick break. … People in a relaxed mood were more likely to find creative solutions to word puzzles. … Even listening to jokes helps. … you might want to flex your creativity when you feel most groggy. Early birds, for instance, find more original solutions late at night, while night owls do better early in the morning. … If all else fails, a stiff drink can lubricate the mind’s cogs. … By the same token, you should avoid coffee – since caffeine focuses your concentration. (more)

Gee, then why do schools tend to drill creativity out of their students, and why don’t employers like groggy drunk joking employees with wandering minds? Yes, there are some jobs where creativity increases productivity, but for most jobs an ability to focus, concentrate, and analyze helps more. Which is why caffeine is a lot more popular than alcohol on most jobs.


Brain Prize Eval Fund Near Enough

Great news: The cryonics organization Alcor is adding $10,000 to the Brain Preservation Technology Prize Evaluation Fund. With the other donations counted here (including my $5000), that should bring the prize evaluation fund to near $30,000, which might be near enough (so please donate more):

We [Alcor] are committing $10,000 towards the Evaluation Fund. … Although the Prize itself is fully funded, funds are needed to conduct the evaluation. Alcor’s contribution will make a big difference, since the tests are estimated to cost $25,000 to $50,000.

Alcor does not directly have a horse in this race. The cryopreservation approach is represented by a team from 21st Century Medicine. 21CM aims to demonstrate the quality of ultrastructure preservation that their low temperature vitrification technique can achieve when applied to whole rabbit brains.

We will follow up this announcement of Alcor’s contribution with a longer piece. That article will address claims (currently untested) for the advantages of chemopreservation over cryopreservation. We will critically examine the claim that chemopreservation or plastic embedding would be much cheaper (for individuals not committed to whole body preservation), look at some reasons to expect significant damage caused by chemopreservation of whole brains, identify problems for chemopreservation under less-than-ideal circumstances, explain why the Prize handicaps the cryopreservation option because of the way the test is to be carried out, and will argue why brain preservation technologies should be evaluated by viability criteria as well. (more)

While I look forward to reading their critique, I’ll note no one has accepted my bet offer:

I offer to bet up to $5K that plastination is more likely to win this full prize than cryonics. (more)

My thinking has evolved a bit over the last month. In chemopreservation [= plastination], one fills a brain with plastic-like chemicals, which form strong cross-link bonds between most everything they touch. So there are two times when brain info can be lost: before it is filled with plastic, and after.

Assuming you can keep them safe from melting, burning, etc., plastic brains should last for a very long time:

Brain researchers have looked at samples preserved many decades ago, and see almost no change. Tissues preserved in amber seem to have remained unchanged for forty million years. (more)

So the main issue is how much info is lost before filling with plastic. Now it is obvious that non-fresh brains with collapsed blood vessels pose a serious problem – the plastic might just not get to some places. But for brains filled with plastic within a few minutes of live blood flow, I just can’t see the problem.

For example, imagine that key brain info is encoded in certain key protein densities at tiny synapse pores, with different nearby pores having different key proteins. As long as there are thousands of copies of each key protein in each pore area, the plastic will almost surely usually preserve the info of which kind of proteins were in which areas. Even if some key proteins move away from their pores, most will stay near, and the amino acid sequences that define the proteins will mostly be preserved by the cross-link bonds the plastic makes.

And even if this isn’t true for twenty percent of the key proteins, there is almost surely enough brain system redundancy for this to not matter. Yes, you’d need a finer scan than the Brain Preservation Prize will use to read it, but the info is still there.
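To put rough numbers on the thousands-of-copies claim above (the parameters here are made up, purely to illustrate the argument): suppose the relevant info is which protein type dominates each pore area, that each area holds a couple thousand copies, and that each copy independently stays put with 80% probability. The chance of losing the majority in even one area is then astronomically small. A quick sketch of that binomial calculation:

```python
from math import lgamma, log, exp

def log_binom_pmf(k, n, p):
    """log of the binomial pmf C(n, k) * p**k * (1 - p)**(n - k)."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def p_majority_lost(copies=2000, p_stay=0.8):
    """P(at most half the copies stay put), summed in log space to avoid underflow."""
    logs = [log_binom_pmf(k, copies, p_stay) for k in range(copies // 2 + 1)]
    m = max(logs)
    return exp(m) * sum(exp(L - m) for L in logs)

# With 2000 copies per pore area and 20% of individual copies wandering off,
# the majority is lost with probability on the order of 1e-196 -- effectively never.
print(p_majority_lost(2000, 0.8))
```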

So as far as I can tell, the main issue with plastination [= chemopreservation] is how quickly brains can fill with plastic after ordinary blood flow has stopped. If we can find ways to do that well, plastination just wins, I think, at least for the goal of saving the info that is you.

Added 19July: Sad news:

The [Brain Preservation] Foundation has declined [Alcor’s] donation because of concerns that it might be perceived as influencing the judges’ decisions.

Added 13Jan’13: They reached their $25K goal!


Robot ethics returns

People are often interested in robot ethics. I have argued before that this is strange. I offered two potential explanations:

  1. Ethics seems deep and human, so it’s engagingly eerie to combine it with heartless AI
  2. People vastly misjudge how much ethics contributes to the total value society creates

A more obvious explanation now: people are just more interested in ethics when the subject is far away, for instance in the future. This is the prediction of construal level theory. It says thinking about something far away makes you think more abstractly, and in terms of goals and ideals rather than low level constraints. Ethics is all this.

So a further prediction would be that when we come to use robots a lot, expertise from robot ethicists will be in as little demand as expertise from washing machine ethicists is now.

Some other predictions, to help check this theory:

  • Emerging or imagined technologies should arouse ethical feelings more than present technologies do in general
  • International trade should prompt more ethical feelings than local trade
  • Stories of old should be more moralizing than stories of now
  • Historical figures should be seen in a more moral light than present-day celebrities
  • Space travel should be discussed in terms of more moral goals than Earth travel.
  • Ethical features of obscure cultures should be relatively salient compared to familiar cultures

More? Which of these are actually true?

There is definitely some conflicting evidence, for instance people feel more compelled to help people in front of them than those in Africa (there was an old OB post on this, but I can’t find it). There are also many other reasons the predictions above may be true. Emerging technologies might prompt more ethical concerns because they are potentially more dangerous for instance. The ethical dimension to killing everyone is naturally prominent. Overall construal level theory still seems to me a promising model for variations in ethical concern.

Added: I’m not confident that there is disproportionate interest compared to other topic areas. I seem to have heard about it too much, but this could be a sampling bias.


Why Complex Life Is Rare

I’ve said before that we have pretty good evidence for off-Earth bacterial life, suggesting that such life is common in the nearby universe. However, bacterial life might be common, yet complex multi-cellular life very rare. Here’s a plausible detailed theory about why:

Under conditions typical of alkaline hydrothermal vents, the combining of H2 and CO2 to produce the molecules found in living cells – amino acids, lipids, sugars and nucleobases – actually releases energy. … Life … is an inevitable consequence of a planetary imbalance, in which electron-rich rocks are separated from electron-poor, acidic oceans by a thin crust, perforated by vent systems that focus this electrochemical driving force into cell-like systems. The planet can be seen as a giant battery; the cell is a tiny battery built on basically the same principles. … The origin of life needs a very short shopping list: rock, water and CO2. … The universe should be teeming with simple cells. …

The problem that simple cells face is this. To grow larger and more complex, they have to generate more energy. The only way they can do this is to expand the area of the membrane they use to harvest energy. To maintain control of the membrane potential as the area of the membrane expands, though, they have to make extra copies of their entire genome – which means they don’t actually gain any energy per gene copy. …

Eukaryotes get around this problem by acquiring mitochondria, … containing both the membrane needed to make ATP and the genome needed to control membrane potential. … They were stripped down to a bare minimum. … Mitochondria originally had a genome of perhaps 3000 genes; nowadays they have just 40 or so genes left. For the host cell, it was a different matter. As the mitochondrial genome shrank, the amount of energy available per host-gene copy increased and its genome could expand. …

We know it happened just once on Earth because all eukaryotes descend from a common ancestor. The emergence of complex life, then, seems to hinge on a single fluke event – the acquisition of one simple cell by another. … The outcome was by no means certain: the two intimate partners went through a lot of difficult co-adaptation before their descendants could flourish. This does not bode well for the prospects of finding intelligent aliens. (more)


Leonhardt Blows It

Imagine someone said:

Of course I believe in science – I’m no nut job. I’m a modern guy. But scientists sometimes get it wrong, so we can’t just believe everything they say – we have to use our judgement. For example, my judgement tells me that astrology just makes sense. Well not today – today’s horoscope suggests I drink less, while I know I can handle my benders. But usually my horoscope feels right. And usually I feel no objection to what scientists say. Which is what I mean when I say that I believe in science.

Yes, every source errs sometimes, making it seem oh so sophisticated to say you don’t take sides, you just use your judgement in each case. But that is often just an excuse to believe whatever you feel like. On prediction markets, David Leonhardt sounds similar:

The odds at Intrade … continued to show about a 75 percent chance that the law’s so-called mandate would be ruled unconstitutional, right up until the morning it was ruled constitutional. … Today, mocking Intrade, ideally on Twitter, is a sign of sophistication. …

The early successes of prediction markets were notable. … But the crowd was not everywhere wise. For one thing, many of the betting pools on Intrade and Betfair attract relatively few traders, in part because using them legally is cumbersome. … The thinness of these markets can cause them to adjust too slowly to new information.

And there is this: If the circle of people who possess information is small enough — as with the selection of a vice president or pope or, arguably, a decision by the Supreme Court — the crowds may not have much wisdom to impart. “There is a class of markets that I think are basically pointless,” says Justin Wolfers. …

But such schadenfreude raises a question: once you accept that prediction markets are flawed, do you turn back to the inside experts? Alas, the experts’ overall record remains as poor as the behavioral economists maintained — and often worse than the markets’ record. …

The answer, I think, is to take the best of what both experts and markets have to offer, realizing that the combination of the two offers a better window onto the future than either alone. Markets are at their best when they can synthesize large amounts of disparate information, as on an election night. Experts are most useful when a system exists to identify the most truly knowledgeable. …

Nate Silver … has found that a simple average of well-known economic forecasts is substantially more accurate than individual forecasts. Other times, the approach might involve as much art as science — and, again, the Internet allows for strategies that once would have been impossible.

Think for a moment about what a Twitter feed is: it’s a personalized market of experts (and friends), in which you can build your own focus group and listen to its collective analysis about the past, present and future. An RSS feed, in which you choose blogs to read, works similarly. You make decisions about which experts are worthy of your attention, based both on your own judgments about them and on other experts’ judgments. (more)

No, the vast majority of folks should not trust the vague impression on a subject they glean from a Twitter or RSS feed, over active prediction market prices. They shouldn’t even average the two. I’m confident that an empirical test of forecasts based on such impressions, or averages, would find them less accurate. Alas, folks like Leonhardt might then say “And that’s just why you must use your judgement about when to use your judgement.”

Yes, in a sense, you always use your judgement – if you take my advice to rely on prediction market prices instead of Twitter feed impressions,  your judgement will have to approve of that. But using your judgement isn’t the same as accepting your case-specific intuitions – you usually can and should judge them unreliable.

Also, prediction markets just are not “crowds” in contrast to “experts” – the whole point of prediction markets is to get participants to self-select as the true experts. Don’t participate unless you think you know more than other participants, and those who actually do know less lose on average and get slowly pushed out.

Yes, you should be wary of “prediction markets” where no one trades, or limited to the kids from Mrs. Calloway’s seventh grade civics class, or using play money no one cares about. Not everything called a “prediction market” is one. But the Intrade market on the Obamacare court case was an active valid market, on an appropriate subject. When it assigned a 75% chance to an event it was saying real loud that it would be wrong 1/4 of the time. And studies have consistently found such markets are well-calibrated in this way. What more do you want?
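Calibration here is a concrete, checkable claim: among all the events a market priced near 75%, about three quarters should have happened. Here is a minimal sketch of such a check in Python, run on synthetic forecasts rather than real Intrade records (every number below is made up for illustration):

```python
import random
from collections import defaultdict

random.seed(0)
# Synthetic track record: (stated probability, whether the event happened).
record = []
for _ in range(10_000):
    p = random.random()
    record.append((p, random.random() < p))   # simulates a well-calibrated forecaster

# Bucket forecasts by price and compare stated probability to observed frequency.
buckets = defaultdict(list)
for p, happened in record:
    buckets[round(p, 1)].append(happened)

for stated in sorted(buckets):
    outcomes = buckets[stated]
    print(f"priced ~{stated:.1f}: happened {sum(outcomes) / len(outcomes):.2f} of the time")
# A well-calibrated market's 0.75 bucket comes out near 0.75 -- that is, "wrong"
# about a quarter of the time, exactly as advertised.
```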

Yes, Intrade markets on court cases are unlikely to extract inside court info, and would be less accurate than sources with access to such info. But do you really think your Twitter feed has better access? Intrade traders watch Twitter, and incorporate what info they find into prices as best they can. Skeptics who tweet their disagreements but aren’t willing to bet can’t be very confident.

Yes, prediction markets can’t be reliable sources unless some people at some times think they are unreliable, and bet on that opinion. It is those with enough confidence in their disagreements to bet that make such markets accurate. If you are such a person, more power to you. But if you are not such a person, you will almost always get more accurate estimates by just trusting the current prices of an active prediction market, relative to forming a vague impression based on your Twitter feed.

Of course, on subjects like major court cases, most people care about other things besides accuracy. Twitter feeds can connect you to people, helping you to form and show your allegiances. For such social purposes, prediction markets are worse. But since people can’t usually admit such priorities, they have to make up excuses, such as about listening to Twitter feeds to aggregate a vague ineffable wisdom of expert crowds.


Responsibility and Clicking

Sometimes when people hear obvious arguments regarding emotive topics, they just tentatively accept the conclusion instead of defending against it until they find some half satisfactory reason to dismiss it. Eliezer Yudkowsky calls this ‘clicking’, and wants to know what causes it:

My best guess is that clickiness has something to do with failure to compartmentalize – missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.

The Hansonian explanation (not necessarily endorsed by Robin Hanson) would say something about clicky people tending to operate in Near mode.  (Why?)

pjeby remarks (with 96 upvotes),

One of the things that I’ve noticed about this is that most people do not expect to understand things. For most people, the universe is a mysterious place filled with random events beyond their ability to comprehend or control. Think “guessing the teacher’s password”, but not just in school or knowledge, but about everything.

Such people have no problem with the idea of magic, because everything is magic to them, even science….

Hypothesis: people expect reality to make sense roughly in proportion to how personally responsible for manipulating it they feel. If you think of yourself as in charge of strategically doing something, you are eager to understand how doing that thing works, and automatically expect understanding to be possible. If you are driving a car, you insist the streets fit intuitive geometry. If you are engaging in office politics, you feel there must be some reason Gina said that thing.

If you feel like some vague ‘they’ is responsible for most things, and is meant to give you stuff that you have a right to, and that you are meant to be a good person in the meantime, you won’t automatically try to understand things or think of them as understandable. Modeling how things work isn’t something you are ‘meant’ to do, unless you are some kind of scientist. If you do dabble in that kind of thing, you enjoy the pretty ideas rather than feel any desperate urge for them to be sound or complete. Other people are meant to look after those things.

A usual observation is that understanding things properly allows you to manipulate them. I posit that thinking of them as something you might manipulate automatically makes you understand them better. This isn’t particularly new either. It’s related to ‘learned blankness’, and searching vs. chasing, and near mode vs. far mode. The followup point is that chasing the one correct model of reality, which has to make sense, straightforwardly leads to ‘clicking’ when you hear a sensible argument.

According to this hypothesis, the people who feel most personally responsible for everything a la Methods Harry Potter would also be the people who most notice whether things make sense. The people who trust doctors and churches less to look after them on the way to their afterlives are the ones who notice that cryonics makes sense.

To see something as manipulable is to see it in the same light that science does, rather than as wallpaper. This is expensive, not just because a detailed model is costly to entertain, but because it interferes with saying socially advantageous things about the wallpaper. So you quite sensibly only do it when you actually want to manipulate a thing and feel potentially empowered to do so, i.e. when you hold yourself responsible for it.


Open Thread

This is our monthly place to discuss relevant topics that have not appeared in recent posts.


We Can Do Low-Treewidth Combinatorial Prediction Markets!

In my last post I said I hoped prediction markets would become

an “our answers” institution with easily-found accurate answers on as many questions as possible.

Today prediction markets’ main problem is laws banning them (and customs limiting firm interest in internal markets). Alas, as an academic, I can’t do much to change such laws. But I can work to improve the basic tech, for the day when prediction markets are legal. Yesterday I also said:

[Here’s] one way to expand the range of questions prediction markets can cheaply answer: start with a set of base questions, and then let users ask and answer questions from the vast space of combinations of those base questions. For example, starting with a base consisting of all the specific future readings of all weather stations, users could ask most any weather question of interest, such as whether this next winter will be colder where they are living now, or in the particular city where they are thinking of moving. In my next post I’ll talk about a big advance my research group has achieved in the implementation of such combinatorial prediction markets.

The DAGGRE project that I’ve been part of for over a year now has been working to advance the theory and practice of combinatorial prediction markets. Within a few months we will field an edit-based system where users can browse current answer estimates, and for each estimate can:

  • Edit the value. After you change an estimate to a new value, estimates that users see on all questions are Bayes-rule updates from that new value.
  • Assume a value. After you assume a value for this estimate, all estimates you see on all questions are conditional on this assumption.

To support this interface, we have three computing tasks:

  1. When a user has made assumptions A and browses to a possible question answer T, compute and show the current value v = P(T|A).
  2. To see how far this user could change this number, compute the edit values, v-, v+, in each direction that give him or her zero assets in some state.
  3. To show the user if he or she is currently long or short on this topic, compute and compare his or her expected assets given A&T, and given A&notT.

The problem is, even though a simple math formula says how a user’s (state-dependent) assets change when he or she makes such an edit, it is in general infeasible to quickly calculate the above numbers for more than a few dozen base questions.
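To make these tasks concrete, here is a minimal brute-force sketch in Python on a three-question toy example. It assumes an LMSR-style market maker with liquidity parameter b, under which a user who moves P(T|A) from p to q gains b·ln(q/p) in states where both A and T hold, b·ln((1−q)/(1−p)) in states where A holds but T does not, and nothing where A fails; the code and numbers are purely illustrative, not DAGGRE’s implementation. Note that every quantity below is a sum over all 2^N joint states, which is exactly what becomes infeasible past a few dozen base questions.

```python
import itertools, math

b = 100.0                                           # LMSR liquidity (illustrative)
N = 3                                               # base binary questions
states = list(itertools.product([0, 1], repeat=N))  # all 2^N joint states
prob   = {s: 1.0 / len(states) for s in states}     # current market distribution
assets = {s: 100.0 for s in states}                 # this user's state-dependent assets

def P(event, given=lambda s: True):
    """Task 1: P(event | given); both arguments are predicates on a joint state."""
    pg = sum(prob[s] for s in states if given(s))
    return sum(prob[s] for s in states if given(s) and event(s)) / pg

A = lambda s: s[0] == 1      # assumption: question 0 is Yes
T = lambda s: s[1] == 1      # target:     question 1 is Yes
p = P(T, A)                  # the current estimate shown to the user

# Task 2: how far can this user push the estimate before some state's assets
# hit zero?  Raising it costs assets in A-and-not-T states; lowering it costs
# assets in A-and-T states (assets are assumed non-negative to start).
m_up    = min(assets[s] for s in states if A(s) and not T(s))
m_down  = min(assets[s] for s in states if A(s) and T(s))
v_plus  = 1 - (1 - p) * math.exp(-m_up / b)
v_minus = p * math.exp(-m_down / b)

# Task 3: is the user long or short on T given A?  Compare expected assets
# conditional on A-and-T versus A-and-not-T.
def expected_assets(event):
    terms = [(prob[s], assets[s]) for s in states if A(s) and event(s)]
    return sum(pr * a for pr, a in terms) / sum(pr for pr, _ in terms)

user_is_long = expected_assets(T) > expected_assets(lambda s: not T(s))
print(p, v_minus, v_plus, user_is_long)
```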

To make computing feasible, we must somehow limit the space of allowable question answers. So the big question is: what limits will allow as many as possible of the combinations that users will typically want to edit, while still enabling accurate computation for allowed combinations?

One approach is to group base questions into sets of roughly twenty or fewer, and allow arbitrary combinations within each group, but no combinations between different groups. This is feasible, but we can do better, via Bayesian (or Markov) nets.

These nets limit the space of possible answers by imposing conditional independence assumptions. In such a net, each variable (i.e., question) is assumed independent of the variables it is not directly connected to, conditional on the variables it is directly connected to. A standard (junction tree) algorithm allows exact computation of conditional probabilities for nets with a low “treewidth” (roughly, the number of variables you’d have to merge to make the net into a tree).
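As a toy illustration of why low treewidth helps (not DAGGRE’s code; the numbers are invented): take a chain of a thousand binary questions X0 → X1 → … → X999, where each question is assumed independent of all earlier ones given its immediate predecessor. A chain has treewidth one, so a conditional probability takes a single linear pass rather than a sum over 2^1000 joint states.

```python
n = 1000
# cpt[i] gives P(X_{i+1} = 1 | X_i = parent value); made-up numbers, same at every link.
cpt = [{0: 0.3, 1: 0.8} for _ in range(n - 1)]

def conditional(query, value0):
    """P(X_query = 1 | X_0 = value0), computed by one forward pass along the chain."""
    p1 = float(value0)                   # P(X_0 = 1) given the evidence
    for i in range(query):
        # marginalize out X_i: P(X_{i+1}=1) = sum_v P(X_{i+1}=1 | X_i=v) * P(X_i=v)
        p1 = cpt[i][1] * p1 + cpt[i][0] * (1 - p1)
    return p1

print(conditional(query=500, value0=1))
# Evidence downstream of the query needs a matching backward pass; the junction
# tree algorithm generalizes this two-way message passing to any low-treewidth net.
```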

Of course that only covers task #1 above; what about the other two? My DAGGRE group has just published a paper (also here), to be presented at a conference (UAI) in August, showing how to exactly (well, up to machine precision) compute tasks #2,3 in this same situation. (Task #2 needs a low treewidth, but #3 works on any net.) We’ve implemented this algorithm and shown that an ordinary laptop can handle a thousand variables and a treewidth of ten in a fraction of a second. Within a few months this will be the (public domain) backend of our public edit-based combinatorial prediction market with hundreds of questions and active users. I’ll announce it here on this blog when it is ready to show.

Of course our real best-answers don’t actually fit in a low treewidth net. So we have more work to do: finding efficient approximations for tasks #1,2 in more realistic nets. There is already a large literature on ways to compute conditional probabilities in high treewidth Markov nets; we just need to study and choose among them. And I already know of a very promising general way to do task #2 well enough; we just need to try it.

Even when we can handle more realistic nets, we will still have to limit their size to make computing feasible. So we’ll need ways to let users edit the net structures – to tell us where to add and delete connections. I have some ideas here, but we are far from a satisfactory solution.

Even so, progress has been surprisingly rapid, and we have good reasons to expect continued rapid progress. We seem just a few years away from having the tech to field general robust combinatorial prediction markets! Then we’ll just have to figure out how to make it all legal.


Finding “Our” Beliefs

If you alone are interested in a topic, you’ll have to think it through for yourself, or pay someone else to think on it. But if many folks are interested in your topic, you might hope to share the thinking work with others.

Some social institutions seem to serve this “our beliefs” function. Today you can see if your question is answered in wikipedia, you can search a library for answers in respected books or journals, and you can call up an expert credentialed in a related area to see if they know of an answer.

Of course these institutions are imperfect. Calling experts is expensive, quick searches only find some of the many differing opinions out there, and encyclopedia answers, while unique, only address a limited range of questions. If you find an answer you know is wrong, it can take a lot of work to change it. You might have to devote a whole career to the attempt, and even then you might not be rewarded for the effort.

Ideally, we’d want an “our answers” institution with easily-found accurate answers on as many questions as possible. Hopefully, anyone could ask any question, and answers would be consistent with each other and across time. If incentives to give accurate answers were strong enough, we might even let anyone correct any answer.

Prediction markets might allow such a better answers institution. Ordinary financial prices offer consistent unique answers that anyone can fix, but for a typical ordinary question, it is very hard to figure out which price combinations might answer it. In contrast, prediction market questions can be expressed in simple ordinary language.

If (money-based) prediction markets were legal, anyone could add a new question for a modest fee (<$100), and quickly get unique answers consistent with all other questions. Anyone could fix any answer, and would have incentives to do so accurately. Or anyone could pay to make any answer more accurate. So far, tests have found prediction markets to be consistently at least as accurate as other prediction institutions with similar resources.

Of course ordinary prediction markets do have one big limitation: they only directly answer questions that eventually become clear for other reasons. But this allows more than it might seem. For example, because we will later know which candidate is elected by what margin, and how big is the post-election unemployment rate, prices today can say which candidate is expected to most help unemployment if elected.
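A minimal sketch of the arithmetic behind that example, using entirely hypothetical contract prices (each contract pays $1 if its event occurs; the candidates and numbers are made up):

```python
# Hypothetical prices for four contracts, each paying $1 if its event occurs.
price = {
    "A elected":                           0.55,
    "B elected":                           0.45,
    "A elected AND unemployment above 8%": 0.22,
    "B elected AND unemployment above 8%": 0.27,
}

# Reading prices as probabilities, the chance of high unemployment conditional
# on each winner is the combination price divided by the election price.
p_high_if_A = price["A elected AND unemployment above 8%"] / price["A elected"]  # 0.40
p_high_if_B = price["B elected AND unemployment above 8%"] / price["B elected"]  # 0.60

# These (made-up) prices would say candidate A is expected to be better for unemployment.
print(p_high_if_A, p_high_if_B)
```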

This example suggests one way to expand the range of questions prediction markets can cheaply answer: start with a set of base questions, and then let users ask and answer questions from the vast space of combinations of those base questions. For example, starting with a base consisting of all the specific future readings of all weather stations, users could ask most any weather question of interest, such as whether this next winter will be colder where they are living now, or in the particular city where they are thinking of moving.

In my next post I’ll talk about a big advance my research group has achieved in the implementation of such combinatorial prediction markets.
