Here is our monthly place to discuss Overcoming Bias topics that have not appeared in recent posts.
These questions, stemming from the post on “The Design Space of Minds-In-General”, are for Eliezer. Do you intend your use of “artificial intelligence” to be understood as always referencing something with human origins? What does it mean to you to place some artificial intelligences outside the scope of posthuman mindspace? Do you trust that human origins are capable of producing all possible artificial intelligences?
What are we going to tell the AI it wants to do? An AI too like a human might very well turn into a Greek god…
I think that the Great Filter must be in our past.
Even if we discovered a solar-system-killing super technology tomorrow, we can imagine another species being more expansionist and producing a colonization wave well before they had advanced as far as we have in that super technology.
If the super technology is powerful enough to kill a space-faring race and any colonizers they’ve launched, then it must be a homicidal AI willing to chase down colonizers. In which case, why haven’t we been killed by a wave of homicidal AIs?
This doesn’t mean that AI isn’t dangerous, or that there isn’t a technology that will kill us. It just means that the absence of advanced aliens doesn’t give us information about our future.
Personally, I think that the move from bright animals to technology is the most unlikely event in our past. We see lots of examples today of bright animals that didn’t quite make the jump. It seems unlikely that all bright animals are recent.
I expect a universe with lots of dead worlds, a number of worlds with simple life, and a precious few with advanced animals. I expect a few of those animals will be bright, maybe even simple tool users.
In one of the older posts (Allais Malaise) Eliezer said “If the largest utility you care about is the utility of feeling good about your decision, then any decision that feels good is the right one.” Let’s imagine that suddenly feeling good about things is considered the main purpose. Suddenly all we, as humanity, want is to feel good about ourselves and about the things we do, no matter how rational any of those feelings and doings are. How would we restructure society to maximise the good feeling of the maximum number of its members?
I want to appreciate modern art, but my pre-adjusted belief is that most of it is essentially garbage built around maybe one semi-clever idea. Let’s say my p for abstract expressionism being worth a darn, pretending I never read this blog, is .15. Now:
How much weight should I give to the opinions of art historians considering that this is obviously a very biased group?
Am I really overcoming my biases just by admitting that I believe this is better than I think it is? I still look at Jackson Pollock and have a hard time mustering any admiration. Seriously, try going to a museum with that attitude.
So, first of all, does anybody have any insight into how to weight others’ opinions when you have to take into account expertise versus selection bias? Secondly, is it inevitable that we feel insincere when we try to apply our overcoming-bias techniques, given that our biases may be so ingrained in our cognition?
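One hedged way to make the first question concrete is to work in odds form and discount the experts’ likelihood ratio for selection bias. A minimal sketch, with every number invented purely for illustration:

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# The likelihood ratios below are made-up assumptions, not claims from the thread.

def posterior(prior, likelihood_ratio):
    """Update a probability given evidence with the stated likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Naive: treat art historians' consensus as strong evidence (LR = 10).
print(posterior(0.15, 10))  # ~0.64

# Selection-bias discount: people who thought the field worthless never became
# art historians, so their endorsement is weaker evidence (LR = 2, say).
print(posterior(0.15, 2))   # ~0.26
```

The interesting work is in choosing the discounted likelihood ratio, which is exactly where the selection-bias judgment call lives; the arithmetic itself is trivial.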
What is your (Eliezer’s) solution to Newcomb’s Puzzle?
When talking about timeless physics, you said that you can click your fingers and make it a million years in the future by moving the stars around. But you can’t do it. Moving the stars that much in one night would require more energy, since kinetic energy is quadratic in speed. So you would end up releasing photons, and I could detect them. And that’s the difference.
Don’t know if anyone else said this. I didn’t read the comments of the physics posts, they were hard enough by themselves.
I’ve read many posts on this blog including Vulcan Logic and Why truth? And…, but I’m still not clear about how a rationalist deals with emotion.
I understand that emotions that are based on non-truths, like fearing a black cat crossing your path, are irrational. But what about events that genuinely cause you to be sad? Should you be sad when you score low on a test, or fail to get tenure, or when a loved one dies? If so, how sad should you be? It would be irrational perhaps to sit crying over a math test for a week. Could someone also give an example where a rationalist, an irrational person and a Vulcan behave differently?
Should one aspire to be in control of all emotions and have the ability to express them only when one desires to (or only when one thinks it is rational to express an emotion)?
Lewis Powell: (Eliezer’s) solution to Newcomb’s Puzzle?
Laura ABJ: An AI too like a human might very well turn into a Greek god…
Patents, prizes, grants, or trade secrets & contracts — which or what combination of the above are the best means for promoting innovation?
For my money, I’m saying grants for curious exploration, patents for taking the science into a proof-of-concept design for a pioneering new product or service, and contracts & trade secrets for innovations to existing methods of manufacturing products or processes (like the kind that occur every day and are made by everyone working at Toyota manufacturing plants).
Prizes are simply a punt on the need for proof-of-concept funding. Patents are better because they are more attentive to changes in demand.
What are we going to tell the AI it wants to do?
Very preliminary solution
In which case why haven’t we been killed by a wave of homicidal AIs?
Anthropic principle. But, since we appear to live fairly late in the time period when life like us would be possible, you can still draw the same conclusion. I suspect – mostly based on the relative lengths of time between developments – that the Filter is somewhere between prokaryotes and animals, and that we should expect to see lots of bacteria-equivalents and not much else; but it’s possible that complex life would have developed earlier had some external condition been present (the sun was too dim, maybe?).
Thanks, Recovering irrationalist. I had actually read that, but forgot to mention it. I understand that it is rational to feel sad when something bad happens; I was only asking how sad one should be. Can this question even have a meaningful answer? If someone were to stop doing everything and sit crying for months after a tragic incident, should we say that is irrational?
It depends on how long you value crying. But it certainly feels like there’s some non-Bayesian, non-instrumental standard by which crying for months is irrational (maybe it’d be better to say ‘unreasonable’, to clarify the distinction).
May I suggest you re-read Marcus Aurelius? You will find useful answers there.
His Meditations: http://classics.mit.edu/Antoninus/meditations.html
Inspired by the LHC thread:
The Egyptians believed that if their graves remained undisturbed, they would live in eternal bliss in the afterlife. There is a very small but nonzero probability that they were right.
Nowadays, archaeologists are constantly digging up tombs. If those resources were diverted to guarding tombs instead, there seems a very good chance that the tombs could remain intact until after humanity either wipes itself out or achieves singularity, and therefore practically forever.
The expected utility of digging up Egyptian graves seems to be (high probability)(archaeologists and museum-goers get small positive utility from seeing more artifacts) – (very low probability)(ancient Egyptians get infinitely high negative utility from losing eternal afterlife).
Shutting up and multiplying, I determine that we should ban archaeology in Egypt and possibly elsewhere, and divert massive resources to protecting ancient tombs. My intuition is very strongly against this idea. Am I making some mistake?
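For what it’s worth, the comparison can be written out with the infinity replaced by a large finite disutility, since infinite utilities make the subtraction ill-defined; every number below is an assumption for illustration only:

```python
# Sketch of the tomb-digging expected-utility comparison. The infinite
# negative utility is capped at a large finite value so the arithmetic
# is well defined; all numbers are invented for illustration.

p_right = 1e-9            # assumed probability the Egyptian belief was correct
u_artifacts = 1.0         # small positive utility from artifacts and museums
u_lost_afterlife = -1e12  # large finite stand-in for "losing eternal bliss"

ev_digging = (1 - p_right) * u_artifacts + p_right * u_lost_afterlife
print(ev_digging)  # about -999: the capped disutility swamps the tiny probability
```

Note that the sign of the result is driven entirely by how the made-up cap compares to the made-up probability, which is the Pascal’s-mugging structure in miniature.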
Yvain: You want “Pascal’s Mugging”. http://www.overcomingbias.com/2007/10/pascals-mugging.html
Biases in particular academic disciplines. (especially sociology)
Advice for achieving career success in academia and debiasing disciplines.
Yvain: you could add in the chance that the Egyptians were wrong in the wrong direction. There is a very small but nonzero probability that they will live in eternal bliss in the afterlife if and only if their graves are disturbed. There is also a very small but nonzero probability that they will live in eternal bliss in the afterlife if and only if you pay off my mortgage. I accept checks.
We note Pascal’s mugging, but that this line of inquiry is starting from “how can I justify this conclusion that I do/do not want to accept” rather than a free spirit of inquiry. (There may be a better link for that second one.)
Thanks for the link to Pascal’s Mugging, but if I read it right, Robin, Eliezer, and most of the other commenters on that thread finally agreed that the answer was to link the number of people involved with the chance that anyone could affect that many people’s lives. Even if that’s true, I don’t see any relevance to this dilemma or the LHC dilemma.
Zubon, I accept there’s a chance that the Egyptians were wrong in the wrong direction, but I’d estimate it as a much smaller chance; as little evidence as there is that there are deities who want an intact grave, there’s a whole lot less evidence that there are deities who want people’s graves disturbed. Subtract that smaller probability out and you still have a problem (especially if you change the infinite negative utility to a very large one so you don’t have the hassle of subtracting infinities from each other).
There is evidence that emotion is crucial to decision-making; specifically the small, unimportant decisions that fill our daily lives unnoticed. Emotional bias is necessary when the point is not to make the “right” decision, but just hurry up and make one.
When choosing socks in the morning, the rationalist is likely to take a favourite or any suitable pair; the irrationalist might get upset that their favourite socks aren’t in the drawer; the Vulcan risks staring for hours into the sock drawer, calculating and comparing the near-equal properties of all available socks.
Strong emotions such as grief can be difficult to cope with, but disconnecting emotion is likely to cause more problems than it solves.
Yvain, as far as we can determine the ancient Egyptians’ beliefs were not founded in reality.
If you’re going to let unfounded speculation guide your actions, you have a real problem: speculation by itself imposes no requirements for justification, and so every assertion is countered by its negation. It is entirely possible that the existence of those undisturbed tombs has sentenced the Egyptians’ spirits to eternal slavery in the hellish afterworld of the Beetle Kings, and our disrupting the spells that hold them there is the only thing delaying the Beetle Kings’ ultimate plans to consume every living thing in this plane of existence. Or, not. Given that both the reality and unreality of this assertion are equally well-justified, they can exert no net force on the balance of our opinion.
Robin, this is regarding your Economics of The Singularity article in Spectrum. Suppose we hit the next singularity n years from now; how should that affect my investment strategy now?
Establishing a Rationality Dojo:
As an aspiring rationalist, I feel it would be useful to establish a real-space rationalist discussion group.
Like many other readers, I have access to University facilities, staff, and students (in my case at Auckland University in New Zealand).
Has anyone else tried to establish a similar group? If so, how have they approached it? Has it succeeded? Has it helped to improve anyone’s reasoning?
This strikes me as a suitable topic for top level blog post – are any of the regular posters interested in pursuing the topic?
Just a note on objective morality, since E.Yudkowsky mentioned my name (along with John C Wright) as a supporter of objective morality in the thread ‘No Universally Compelling Arguments’.
He then proceeded to demolish a ‘straw man’ version of objective morality, culminating in the thread ‘The Moral Void’
Let me just note here, that believe it or not, I actually agree with his recent series on the topic – he’s convincingly refuted the idea that objective morality could be in the form of rules or commandments (such as ‘you should do x’ etc.)
But of course, there is no real need for long arguments on this point, since most people would agree that moral rules are human artifacts (inventions or products of cognitive processes, or, ‘outputs’ of an optimization process).
I have to point out that my own version of objective morality is much more subtle than this.
My hypothesis is that what is built into the universe is *not* moral rules or moral optimization processes, but instead static platonic *moral ideals* or *archetypes* (examples: beauty, freedom, etc.). There are no ‘shoulds’ associated with these moral archetypes (they are not moral rules), but these archetypes are objective nonetheless.
This is clearly reflected in the top-level domain model of my own design for an SAI, which is public domain and can be viewed at:
SAI Top-Level Domain Model
mjgeddes: what’s the real-world difference between a universe that has these platonic moral ideals built into it and one that does not?
p.s. The page you linked to only seems to be viewable in Internet Explorer.
Eliezer said, “But what does an agent with a disposition generally-well-suited to Newcomblike problems look like? Can this be formally specified?
Yes, but when I tried to write it up, I realized that I was starting to write a small book. And it wasn’t the most important book I had to write, so I shelved it. My slow writing speed really is the bane of my existence. The theory I worked out seems, to me, to have many nice properties besides being well-suited to Newcomblike problems. It would make a nice PhD thesis, if I could get someone to accept it as my PhD thesis. But that’s pretty much what it would take to make me unshelve the project. Otherwise I can’t justify the time expenditure, not at the speed I currently write books.”
Telling us that he wants/has an answer that says you should pick box A and it is rational to do so is not yet to provide that answer. The answer needs to be a theory of rationality that recommends box A (and, which, ideally, does not admit of some slightly revised version of the puzzle). This is what I want to know. What is Eliezer’s system of rationality that avoids Newcomb?
Nontrivial. It revises some standard math (for more elegance, not less). I said it would make a decent PhD thesis.
“lots of bacteria-equivalents and not much else”
There’s a fascinating and readable book on this topic “Rare Earth: Why Complex Life Is Uncommon in the Universe” by Peter Ward and Donald Brownlee.
Ronald Merrill in “The Ideas of Ayn Rand” tried to clarify, and in some cases modify, Rand’s ideas so they made more sense. In discussing ethics, he suggested “unifying” normative ought and operational ought, that is that “You ought to format a floppy disc before you use it” and “You ought not to cheat” are the same type of statement, the latter having an “understood-if” of “if you want to live a good life”. He also extended the reasoning behind her endorsement of virtue ethics as you should act so as to be the kind of person (develop your character) you want to be.
Interesting conversation yesterday, maybe some physicists can clarify:
IF the universe is in some sense already ‘fixed’ from a meta-4th-dimensional vantage, i.e., it is deterministic and we only experience an illusion of time moving forward,
AND IF time is in some sense ‘nondirectional’, i.e., antiparticles are particles moving backward in time, and we can just as easily flip time’s direction and characterize particles as antiparticles moving backward in time,
THEN why does entropy only increase in one direction? If we ‘flipped’ our vantage on time and looked backwards, would entropy go down?
I once heard that entropy is one way of describing our lack of understanding of how things interacted to get where they currently are.
In that case, if we look into the past, does entropy decrease to a point and start to rise, once we lose sight of what we know? Why can we predict how things happened backwards in time but not forwards if we are using the same fundamental rules? If we *did* know how the universe unfolded, would entropy cease to be? Wouldn’t this mean that entropy doesn’t exist?
So What I was getting at: After the singularity, positive or negative, if the universe is ‘full’ of known information, will there be no more entropy?
In Timeless Identity Eliezer promised a practical application for knowledge of quantum mechanics and many-worlds and made the following argument:
“If you’ve been cryocrastinating, putting off signing up for cryonics “until later”, don’t think that you’ve “gotten away with it so far”. Many worlds, remember? There are branched versions of you that are dying of cancer, and not signed up for cryonics, and it’s too late for them to get life insurance.”
But if I understood that sequence correctly and all points in configuration space exist, won’t there be, for every possible version of me who signs up for cryonics, a version that doesn’t? And won’t that person then again after a very small time “fork” into a version who signs up and one that doesn’t? And for every version that does, shouldn’t there likewise be versions who cancel it? If this is the case, what difference does it make what I do?
Since I haven’t seen this mentioned, I strongly suspect there’s something I’ve misunderstood, but I can’t pinpoint it. I plan to reread the sequence, but pointers on this would be appreciated.
Patrik, I think the answer is that not all worlds have the same measure. If you make a firm commitment, that probably means more other versions of you do likewise than do not. Although I confess that I too am confused about the matter of other versions of oneself (but cf. also Hal Finney’s June 3 comment in the linked thread).
Patrik: your decision determines the relative weights of the branches where you do(n’t) sign up.
ZMD: right, but is there any difference between me, and another version of me that makes the same choice?
Laura: as I understand it, entropy increases in both directions from almost any point in state space, but we observe a low-entropy past because entropy was very low at the Big Bang (explaining this is an open problem). I don’t think you would see entropy increasing in the past anytime before you got to the BB. I don’t see what this has to do with the Singularity – who says we’ll know everything? – but yes, if you did know all the microphysical details, entropy would be zero.
Yvain: I think the post most appropriate for your question is actually here on the Singularity Institute blog. It boils down to him being an “infinite set atheist”, meaning that he would disagree with your question because of your application of a utility of negative infinity.
To quote a comment of his: “Negative and positive infinity are not real numbers, and 0 and 1 are not probabilities.”
AND IF time is in some sense ‘nondirectional’
Although the laws of physics are such that they remain true if you replace t with -t, there is more to our model of physical reality than just the laws. For example, nowhere in the laws does it say or imply that over there, 93 million miles away, is a large blob of matter consisting mainly of hydrogen and helium nuclei. Our space-time continuum (the universe) started in a state of very low physical entropy, and it will not end in such a state, and that is what makes physical reality time-asymmetric.
To summarize, although the laws of physics are ‘nondirectional’ in time, time itself is not ‘nondirectional’ because there is more to physical reality than the laws of physics. (There are the starting conditions for example.)
Hirvinen, “Living in Many Worlds”
Laura, “Timeless Causality”
In my opinion it would help the website and community for open threads to be weekly (every Monday) rather than monthly (the 1st of each month).
Some AI-oriented readers may be interested in this news:
“Microsoft to buy search start-up Powerset”
This is probably the best place to post this:
Ever since I started reading about heuristics and biases, I have been interested in making probability calibration tests more easily available. To this end, a friend and I have set up a website that allows you to take a calibration test: CalibratedProbabilityAssessment.org. The site is still in development; it needs more/better background information and more/better tests.
As you might guess, developing good questions for this sort of test is the most difficult part. As it stands now there is only one calibration test on the site: questions about the distances of the most populous US cities. The problem with this question list is that the questions are not independent, which makes it mostly useless. I was hoping to find a researcher who had administered such questions before, but I have had no such luck. I will be adding more question lists as I am able.
I appreciate all advice, suggestions, criticism, donated test questions, and help. Contact information is on the site.
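For anyone curious, the scoring step of such a test is simple once the questions exist: group answers by stated confidence and compare each stated level with the fraction actually correct. A minimal sketch, with invented data and a hypothetical scoring scheme (not necessarily what the site uses):

```python
# Toy calibration scoring: bucket answers by stated confidence and compare
# with the observed hit rate in each bucket. The data are invented.
from collections import defaultdict

answers = [  # (stated confidence, was the answer correct?)
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, False),
]

buckets = defaultdict(list)
for conf, correct in answers:
    buckets[conf].append(correct)

for conf, results in sorted(buckets.items()):
    hit_rate = sum(results) / len(results)
    print(f"stated {conf:.0%}, actual {hit_rate:.0%}")
```

A well-calibrated test-taker’s actual hit rates track the stated confidences; the gap in each bucket is the miscalibration. The independence problem mentioned above bites here because correlated questions make every bucket’s hit rate swing together.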
I thought about my actions influencing the relative weights of branches, but unless I manage to make the probability of an unwanted outcome exactly zero, it will happen. And then it’s there. Therefore I don’t see how exactly I really influence anything, unless a branching creates worlds in proportion to the probabilities of the outcomes, which would mean there being more worlds for points of configuration space with higher amplitudes, and there it gets confusing. For example, if P(x) = 0.9, will there be 10 worlds, of which x happens in 9, or 100 and 90, or what?
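One hedged way to see why the “it still happens somewhere” worry may not bite: on the usual many-worlds reading, what a decision shifts is the measure of each branch, not a count of discrete worlds, and repeated branching drives the measure of the all-unwanted history toward (but never exactly to) zero. The numbers below are purely illustrative:

```python
# Toy illustration: measure, not world-count. If the unwanted outcome has
# measure 0.1 at each branching, the history in which it happens every single
# time shrinks geometrically, though it never reaches exactly zero.

p_unwanted = 0.1  # assumed per-branching measure of the unwanted outcome
for n in (1, 10, 50):
    print(n, p_unwanted ** n)
```

On this view the “how many worlds, 10 or 100?” question dissolves: the measure is a continuous weight, and asking for a discrete world-count is what generates the confusion.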
Thanks frelkins and Wendy Collings. Wendy, I think “the Vulcan risks staring for hours into the sock drawer, calculating and comparing the near-equal properties of all available socks” is a statement about a Straw Vulcan, not a real one. (http://tvtropes.org/pmwiki/pmwiki.php/Main/StrawVulcan)
An actual emotionless rationalist, who I’ll call a Vulcan, would not do so. He’ll conclude that they are all equally good socks, and just pick one arbitrarily.
For example, on the page I linked to, the author lists some examples where a Straw Vulcan fails compared to an emotional human. An example similar to what you gave is:
A Straw Vulcan will have to consider everything about the problem in full detail even in time-critical situations, while the emotional person will make the snap second decisions necessary in this sort of situation. This will demonstrate how the “logical” Straw Vulcan is useless under pressure and therefore inferior to the emotional protagonist.
James, even if we are past 90% of the great filter, we have only a 1% chance of making it from here on.
Anand, save lots and diversify.
Nick, I assume you mean difference in number; I imagine you’d agree with me that there is no difference in identity. Such matters are confusing, but Nick Bostrom says yes (PDF), and as has been noted, there has almost certainly got to be some weighing of the worlds that explains why we don’t live in an absurd branch.
Vulcan, I think Wendy is referring to actual research which found that people with a certain brain damage lost both their capacity to feel emotion and their ability to make even trivial decisions. I say this only from memory, however, and won’t take the time right now to dig up a citation.
Aspiring Vulcan, great points. I’ve also thought that the arguments I’ve read that emotions are necessary for rational decision making seem to pit emotional decision makers against “straw” unemotional decision makers. On the other hand, there seems to be an expert consensus on this point. I’d like to learn more about how/why.
What is the framework for this statement? (That is, what is the bare minimum information that would allow me to derive “90% past great filter implies 1% chance of success.”)
Cyan, if the total filter is a factor of 10^20, 10% of it (in log terms) is a factor of 10^2.
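Robin’s log-scale arithmetic, spelled out under the stated assumption that the whole filter is a factor of 10^20:

```python
import math

# If the whole Great Filter is a factor of 10^20 and 90% of it (measured
# in log terms) lies behind us, the remaining 10% is still a factor of 10^2,
# i.e. only a 1% chance of making it from here.

total_filter = 1e20
log_total = math.log10(total_filter)    # 20.0
log_remaining = 0.10 * log_total        # 2.0
remaining_filter = 10 ** log_remaining  # 100.0
print(1 / remaining_filter)             # 0.01, a 1% chance
```

So the bare minimum needed to derive the claim is just the assumed total strength of the filter and the convention that filter fractions multiply (add in log space).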
Richard, Nick- You seem to be in disagreement, and I don’t know on what basis you make your claims. Could you please give me a reference?
Also, circa the interesting discussion I had:
IF protons, neutrons, and electrons are made up of several quarks and gluons in different ratios of up and down, all red-green-blue groupings,
AND IF antiprotons, antineutrons, and antielectrons are made up of quarks and gluons with the OPPOSITE color/directionality,
THEN why is it obvious that the antiparticles *can* be explained as particles traveling backwards in time? Might the existence of these odd groupings of necessarily co-existing traits indicate that there *is* directionality in time? If a proton and an antiproton come together, how do they annihilate if there is such a complex substructure of anti-quarks/gluons? Does each quark annihilate with its anti-quark, or is there something more complex going on? If it is simply each to each, then is this instantaneous, or does it require time for alignment of some kind? Are there even such things as anti-gluons? If not, what does it mean for time directionality that the same gluons are used in both particles and anti-particles?
I posted this on “The Opposite Sex,” but I think that thread is all but dead, so:
Of interest circa the discussion of male experience:
Nobel-prize-winning neuroscientist Eric Kandel relates in his memoir “In Search of Memory” that his first sexual experience was at the age of 9, with the house-maid ‘Mitzy’. He speculates that his parents hired her to do it, since it was a common practice in Austria at the time to hire a female helper for your almost-pubescent son to prevent him from becoming a homosexual. I’m not advocating such a practice for the purpose of preventing homosexuality, but might it have other useful functions in male psychological development? Does anyone have any arguments for or against this?
Laura ABJ, your question was actually addressed recently in To What Expose Kids?. There are many credible reports of people suffering great psychological harm from having sex too young. Even if most people were not traumatized by such experiences, the risks involved would be too great.
Cross-posted from “The Opposite Sex”:
“[M]ight [prostitution] have other useful functions in male psychological development?”
Beats me. But don’t girls have better things to do with their time? The boys can just masturbate.
anonymous- this seems like a credible conclusion. Though I think the question was worth asking.
No, Laura, Nick and I do not disagree over your question. Nick is a strong rationalist and scientific generalist, so naturally he and I are not going to disagree over something as simple and as settled as this question 🙂
When Nick writes, “entropy increases in both directions from almost any point in state space,” he is referring to a general principle or law that applies to a wide range of possible worlds. In other words, if all you knew about our reality was its laws of physics, then you would probably expect entropy to increase in both directions. Then when you learned more about our reality, particularly about the cosmic microwave background and Hubble’s Law, you would be very surprised to learn that entropy actually decreases as you trace our universe back billions of years (because our reality started in a condition of extremely low entropy). But that is not a logical contradiction of the general law Nick mentioned because (as Nick indicated by the word “almost”) the general law is probabilistic.
I had to write “probably expect” instead of “expect” to cover the case that our reality’s laws of physics necessitate or entail an extremely low-entropy starting condition and that no one has been smart enough to prove that entailment to the satisfaction of the physicists of the world, which BTW is what Nick means when he writes, “explaining this is an open problem.”
Richard- thanks for clarifying. Could you refer me to any information on the prospects of time-travel in a zero-entropy state?
Actually, as it was suggested to me that if I don’t want to get recognized by the more judgmental contingent of society I deal with (not that I have any reason to believe they will *ever* read this blog, but plausible deniability is a plus), I am going to start posting under the pseudonym Lara Foster. I will remark this on my posts for the next week.
To generalize a thought I posted in the “Distraction Overcomes…” thread, what might be useful is a list of biases for which there are known workarounds/remedies one can apply to oneself.
I know that there are a bunch that it does seem tricky to work around and hard to compensate correctly for, but at least for the ones that there are known techniques that someone can apply to themselves, well, I’d be interested in studying a list of usable techniques like that.
I know there’re some scattered here and there throughout various blogposts here and in various papers, but a single place that lists “stuff we know how to fix or workaround right now” that someone can go straight to and study would be a big help. (I know I’d want to study and use such a listing.)
Start with the list of cognitive biases
Tim: Thanks. Been kinda aware of that list, but never really pushed myself to systematically go through it before. Doing so now. (Of course, the obvious wikipedia problem occurs… so already just from Bandwagon effect I’ve got several other tabs open now. ;))
Anyways, hopefully at least some will list known remedies. If not, well, something we ought to compile.
*Note* Lara Foster was previously called Laura ABJ.
Slightly disappointed no one answered my entropy/time-travel question. In my mind, it seems like Kurt Vonnegut’s description of the Tralfamadorians’ perception of time would be most applicable, but this might be romanticizing on my part. Does anyone have any references on the topic, or personal musings?
“Nontrivial. It revises some standard math (for more elegance, not less). I said it would make a decent PhD thesis.”
I am probing for more information because you seem very confident that you have a solution, but haven’t indicated any of the details of the approach.
Just as you would be disinclined to accept that someone had solved a similarly pernicious problem on just their word that they had one, the position I am in is one of intense curiosity. What are these revisions to math that we need to formalize decision theory properly? Does the theory have any other surprising consequences? If so, do those consequences give us new insights, or do they appear to be problems for the theory? What are the incorrect assumptions we are making now that we are blindly building into our formal models which your new model corrects?
I appreciate that Newcomb’s puzzle and decision theory aren’t your top priority (though, I am not sure why it wouldn’t be a relatively high priority, given the role a proper decision theory would likely play in programming an AI). But, given the evidence I have, it is hard to believe you have solved it.
Aspiring Vulcan (also Hopefully Anonymous):
Absolutely agree about the Straw Vulcan definition! The question is whether real Vulcan behaviour is possible in a human context – since your original question was placed firmly in that context.
There has been some research on people who have lost emotional capacity through brain damage. A favourite hypothesis was that removing the emotional overlay would reveal logical processes that emotions masked or warped. Instead, there seemed to be nothing much underneath. The subjects stalled on basic familiar tasks, apparently unable to apply simple logic to small decisions. (That’s probably where the “expert consensus” comes from; sorry I don’t have any references to the research. I most probably came across it in Oliver Sacks’ books, and he’s likely to have been referencing other researchers/authors.)
Perhaps logical reasoning could be learned, to eventually replace emotional decisions just as smoothly and carelessly. There might be glitches, though. A “Red socks? Yuck!” reaction is appropriate for a corporate worker, but what is logically wrong with a flash of red at the ankles along with formal dress? For a Vulcan functioning in a human setting, quite a lot: emotion-based co-workers will label them “silly, low-status”. Other Vulcans would only note the colour as a minor visual distraction.
That suggests that small decisions may be more complicated than we acknowledge. At the core is an objective, but further related objectives and possible side-effects should be taken into account too. Maybe it’s all too processing-heavy for continuous logical thinking? Maybe the crude-but-effective emotional shortcut is the “logical” (at least pragmatic) way to avoid brain burnout?
Anyway, the current theory seems to be that emotional thinking is the human default, and rationalising the overlay, not vice versa. Not much hope for real Vulcans except in a non-human context.
>mjgeddes: what’s the real-world difference between a universe that has these platonic moral ideals built into it and one that does not?
If these platonic moral ideals don’t exist, it would mean that:
(a) The universe is morally inconsistent – that is – there could never be a single consistent moral framework which all rational minds would agree with.
(b) There could be no explanation as to *why* decision theory works. The explanation as to why decision theory works cannot be supplied by decision theory itself. A full explanation would have to either explicitly or implicitly make references to teleological objective platonic entities on the multiverse level.
(c) In terms of practical behaviour, you’d be on a slippery slope. You only need to look at the replies to E.Yudkowsky’s threads to see this. Without the existence of the objective moral platonics, it’s true that people who already *want* to be moral (in a strong consistent way) would not change their behaviour, but everyone else would. For starters, we know for a fact that 2% of the human population are psychopaths. Without the objective platonics, there would be no reason for these people not to simply do whatever they could get away with. And in fact, I’m not convinced that most humans actually have a strong consistent desire to be moral (I sure don’t!) Most people are not inherently evil, but they’re not inherently good either – I’d say that maybe 10% of the population actually *want* to be moral; the other 90% basically act according to their feelings, which results in a mixture of good and bad behaviour (since the feelings we have are a result of evolution, which acts to maximize reproductive fitness, not to be moral).
Here’s a quick ’n’ dirty proof of objective morality:
Consider only those preferences associated with external consequences – i.e. interactions with the external world. I will show that this set of preferences cannot remain stable over the long run in the universe, and that even if there are multiple valid preferences to start with, only a *single* set of preferences eventually comes to dominate the entire multiverse.
Consider the multiverse (aka Tegmark and Barbour).
In at least *some* QM branches, at least *some* sentients survive. And in at least *some* of these QM branches, at least *some* sentients aim to continuously expand their sphere of influence. Consider only the sub-set of QM branches consisting of sentients who:
(1) Survive, and
(2) Continuously expand their sphere of influence across the universe.
Note that the preferences of any sentients who do not meet these conditions eventually die out, or occupy a smaller and smaller proportion of the overall ‘volition’ of sentients as time advances. So it is only the preferences of sentients meeting these two conditions that eventually come to dominate the entire universe in the future.
Now… suppose that there are multiple valid preferences to begin with among sentients meeting conditions (1) and (2). I will now show that condition (2) implies that eventually only a *single* unified set of preferences remains:
Condition (2) implies that eventually all the different types of sentient preferences associated with external actions must interact. Since we are considering only sentients that are continuously expanding their sphere of influence, eventually all sentient types interact in the limit of infinite expansion. Then either the different preferences *compete* with each other and eventually there must be a *single* set of preferences that wins, or the different preferences come to a *peaceful* accommodation, which also reduces to a new *single* set of unified meta-preferences (a peaceful accommodation means that compromises are made and so even with multiple micro-preferences to start with, a single set of meta-preferences has to emerge to handle the peaceful interactions).
So in both cases – sentient competition and sentient cooperation – what starts as a valid set of multiple moral preferences eventually reduces to a single set of moral meta-preferences in the limit of infinite time extrapolation.
To clinch the case for objective morality, we only need note that in the multiverse all moments in time *objectively exist* – the structure of the multiverse is fixed, platonic and objective. So the winning set of moral preferences that eventually comes to dominate the universe *is already fixed into the platonic structure of the multiverse*.
Cosmic Variance is running a prediction contest for the 2008 presidential election, and contestants are asked to submit a probability distribution of the percentage of votes that Obama will get, not just a specific number.
Kind of interesting.
“The winner will be the prediction whose Gaussian distribution function has the largest value at the real fraction, whatever it turns out to be.” So there are two strategies. Some people have low and wide distributions, increasing the chances that the real fraction will be within their distribution, but lowering the value at that fraction. Others are trying to get it all in one pot shot by offering a high, narrow distribution.
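The tradeoff between the two strategies can be sketched numerically. (A toy illustration, not the contest’s actual scoring code; the entries and vote fractions below are made up.)

```python
import math

def gaussian_pdf(x, mean, sd):
    """Value of the normal density with the given mean and standard deviation at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Two hypothetical entries, both centered on Obama getting 52% of the vote:
wide   = (52.0, 4.0)   # low, wide distribution - hedges its bets
narrow = (52.0, 0.5)   # high, narrow distribution - the "one pot shot"

results = {}
for actual in (52.2, 55.0):  # two possible true vote fractions
    w = gaussian_pdf(actual, *wide)
    n = gaussian_pdf(actual, *narrow)
    results[actual] = "narrow" if n > w else "wide"

# The narrow entry wins if the truth lands near its mean;
# the wide entry wins once the truth drifts a few points away.
print(results)
```

The narrow entry’s peak is eight times taller, but its density collapses quickly away from the mean, which is exactly the gamble the rules reward or punish.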
Eliezer once wrote a short story criticizing the kind of “intelligence enhancement” procedure common in science fiction: if a simple procedure could enhance intelligence without severe, negative side effects, it would have already occurred as a natural mutation. Therefore, the main character of “Flowers for Algernon” should have been able to predict that the treatment would eventually fail and kill him.
I once read a short story – it was by Orson Scott Card – that proposes a vaguely plausible way to produce a smarter human brain that doesn’t violate this criterion. The concept is this: Much of the human brain is devoted to processing input from the eyes. If this brain matter could be turned into, say, more prefrontal cortex, you just might be able to make a person that is significantly better at abstract reasoning, at the cost of not being able to see. In most environments, blindness is a severely fitness-reducing handicap, but the modern world is capable of supporting blind people, and abstract reasoning capabilities are very highly valued… is this plausible enough for a story?
mjgeddes: the 3 differences you gave are differences between a universe in which people *believe there are platonic moral ideals* and a universe in which people *don’t believe in such ideals*. All three examples are perfectly consistent with the universe having no platonic moral ideals and people incorrectly believing that there are such ideals.
As to the proof: it fails if the universe expands faster than sentients can, but more crucially, you don’t prove that sentients could not agree to differ and never come to a consensus. Most seriously, even if every sentient being did come to agree, that would prove nothing about the fabric of the universe, only about the preferences of a bunch of sentient beings.
I think this huge thread, approaching 70 comments, makes my point that the OB open thread should now be weekly, every Monday.
Also, do you all correspond at all with these folks?
The missions (and talent sets – mathematical modeling, etc.) seem to be highly overlapping.
For everyone who thinks that they might be poorly calibrated: you can now check your calibration levels by answering trivia questions. Quizzes are posted at http://www.acceleratingfuture.com/tom/?p=129.
On entropy, time, etc:
The modern understanding of entropy is that it quantifies the number of macroscopically indistinguishable physical states, when they are described only by aggregated properties like temperature, rather than by an exact specification of where every particle is and what it’s doing. It’s calibrated so that the number of states goes up exponentially with the increase in entropy: if you double the entropy, the number of possible microstates is squared, and so on. The folk reason why entropy goes up is that higher entropy states are more numerous, they occupy a greater “volume” of the “space” of physical possibilities, and so if the state evolution over time of a physical system is regarded as a point wandering in that configuration space, it will naturally tend to pass into the greater volumes from the smaller volumes. The majority of physical trajectories will never leave the greatest volumes to begin with – i.e. they will stay at or near maximum entropy. It’s only if you have somehow ended up in a low entropy state that you will notice the second law in action.
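The calibration Mitchell describes is Boltzmann’s relation S = k ln W, and the “doubling entropy squares the microstate count” point falls straight out of it. (A toy numerical sketch with k set to 1 and an arbitrary microstate count; not a physics calculation.)

```python
import math

# Boltzmann's relation S = k ln W, with the constant k set to 1 for illustration.
def entropy(num_microstates):
    return math.log(num_microstates)

def microstates(S):
    return math.exp(S)

W = 1e6                 # arbitrary example: a million microstates
S = entropy(W)

# Doubling the entropy squares the number of microstates:
# exp(2 ln W) = W**2
assert math.isclose(microstates(2 * S), W ** 2, rel_tol=1e-9)
```

This is just the exponential calibration in action: entropy adds, while microstate counts multiply.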
The immediate explanation proposed for why time seems to run in the same direction everywhere, at least macroscopically, is indeed that the early universe started out in a low entropy state. The early universe was a hot homogeneous gas, which by most accounts is a high entropy state (there are a lot of possible ways of distributing particles through space that just look like a hot gas; those possible ways are the macroscopically indistinguishable microstates), but gravity makes a difference; for gravity, things being spread out is low entropy, and things being clumped together is higher entropy, with one enormous black hole being the highest entropy state of all. I believe the way to think about this is to think of mass as the macroscopic variable and a distribution of gravitons as the microscopic description. From a distance you see the gravity well of an object with a certain mass; but microscopically, that will be a complicated multi-graviton state, and I must suppose that the number of multi-graviton states which look macroscopically like the same gravity well goes up as mass increases. (The technically exact formulation of this is a work in progress, but you can see pieces of it in string theory and in black-hole thermodynamics.) So even though the gravitational clumping of the universe seems to decrease the nongravitational entropy, that’s made up for by the increase in gravitational entropy.
The next question is why the universe was in that low-entropy state to begin with. The answers proposed are (a) it just was (b) some unknown process or principle made it that way (c) the universe used to be high entropy but after a very long time it wandered into that low entropy zone. The third option is an interesting one, suggested by Boltzmann back when the universe was still conceived via Newtonian (billiard-ball) atomism, and revived by some people in today’s era of quantum gravity, when it looks possible that there was something before the “Big Bang singularity” after all. But there is a consideration that constantly haunts Boltzmann’s idea, namely, that if the low-entropy visible universe has its origin as a fluctuation in a normally high-entropy universe (such fluctuations should happen – the second law is a statistical law and there is a fluctuation theorem which says how often it gets violated), it is much more probable that it should produce, say, a single solar system in a void, rather than a Hubble volume with billions of galaxies; and, if we are trying to explain appearances, it is even more probable that such a fluctuation should produce a brain in a void, equipped with false memories and hallucinated experiences. (Such deluded intelligences, truly assembled by chance, not just in the evolutionary way, but produced whole and at once in an enormously improbable fluctuation, have recently been dubbed “Boltzmann brains”, in honor of Mr “S = k ln W” himself.) So physicists tend to prefer options (a) or (b), though I imagine option (c) will interest some of the anthropic thinkers on this blog.
A few more comments, mostly for La(u)ra.
Nick Tarleton said that if you knew the microphysical state exactly, the entropy would be zero. That’s only true of “Shannon entropy”, the generalization of “entropy” associated with observer uncertainty. If physical entropy is defined by way of macroscopic variables, and thus by dividing state space into smaller and larger volumes, it will still be nonzero even if you know the exact microscopic state, i.e. even if you know exactly where in a given subdivision the system state is located.
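The distinction can be made concrete with a toy coarse-graining. (The state space and partition here are invented purely for illustration.)

```python
import math

# Toy state space of 16 microstates, partitioned into 4 macrostate "cells"
# of 4 microstates each by a coarse macro variable (here, the quartile of
# the state index). This partition plays the role of the macroscopic variables.
cells = {i: i // 4 for i in range(16)}

def boltzmann_entropy(microstate):
    """Log of the volume of the macrostate cell containing the microstate.
    Nonzero even when we know exactly which microstate the system is in."""
    cell = cells[microstate]
    volume = sum(1 for s, c in cells.items() if c == cell)
    return math.log(volume)

def shannon_entropy(distribution):
    """Observer uncertainty: zero for an exactly known state."""
    return -sum(p * math.log(p) for p in distribution if p > 0)

known_state = 7                        # suppose we know the exact microstate
print(shannon_entropy([1.0]))          # 0.0 - no uncertainty at all
print(boltzmann_entropy(known_state))  # log(4) > 0 - fixed by the coarse-graining
```

The two notions come apart exactly as described: the Shannon entropy of a known state is zero, while the coarse-grained entropy depends only on which cell the state sits in, not on whether we know where in the cell it is.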
A second-law-violating entropy fluctuation will not produce time travel in any ordinary sense. What it produces is a local reversal of the arrow of time. Figuratively speaking, you might have a kitchen where someone is dropping plates on the floor, where they smash to pieces, and then on one occasion, one of the plates reassembles and flies into their hand, before the usual flow of events resumes. It’s a sort of time reversal, but only relative to everything else, and it’s not time travel.
Are antiparticles really just particles traveling backwards in time? That’s a hard question. Such a description is at least consistent with how they behave. The approach of an electron to a positron in space, followed by their mutual annihilation in a burst of photons, can indeed be conceived in terms of an electron traveling forward in time, then reversing its temporal direction in a burst of photons and moving away again, backwards in time. But it is also consistent with the idea that there are two types of entity involved, both moving forward in time, whose creation and destruction occurs in a coordinated fashion. You never have the net flow of energy violating temporal unidirectionality, for example; the energy of the photons resulting from electron-positron annihilation will be equal to the combined energy of the now-vanished particles. That might be considered a sign that there’s no zigzag in time here. Ultimately I think it’s a question that can’t be sensibly answered without getting into rather technical details, e.g. the “Euclidean” formulation of quantum field theory might incline one to say yes, the antiparticle really is a backward-in-time particle, but then Euclidean QFT is more mathematical than physical, you have to transform back (through a “Wick rotation”) to connect with reality, so it’s probably a bad idea to draw ontological conclusions from it. I have no firm conclusions to impart so I’ll save the musings for email.
Mitchell, great explanation. It sounds like you’re saying our best science favors (c). I admit I think the most likely explanation is that I’m a Boltzmann brain (continuing down the rabbit hole, only the necessary parts of the “brain” at that) in a high entropy dust cloud.
Mitchell, you MUST get a blog, and blog regularly. When it comes to explaining this stuff, you’re a rich man’s Eliezer, in my opinion.
mitchell, there’s a 4th possibility you missed: d) there was some randomly triggered physical mechanism (e.g. decrease in the cosmological constant) that hugely increased maximum entropy. According to this theory, the universe started at near maximum entropy, but that maximum is much smaller than the maximum today.
I second Hopefully Anonymous! Mitchell Porter – you are *really* good at explaining this stuff, and should at least blog, if not publish a book and make some dough.
Why no weekly Open Thread?
It seems to me this thread only gets comments now when I bump it back to first page visibility, like this.
Also, it’s now long and cumbersome, over the 50 comment fold.
Eliezer, your silence on this is notable. No new open thread for the week, no explanation for why there’s no new open thread for the week.
Still no comment, Eliezer?
Checking with: http://www.overcomingbias.com/open_thread/
…not very many Open Threads so far have gone “over the fold”.
It also seems that any petition about their frequency should probably be addressed to Robin Hanson.
HA, I’m going out on a limb with the suggestion that your last three comments are a self-refuting argument for more Open Threads. With this comment, the last five are now all meta-Open Thread discussion. I presume that you see the lack of comments as evidence of a lack of visibility, rather than a lack of interest.
To the last sentence, yes, and I think the evidence is on my side.
70+ comments in the first week, and no open thread comments in the second week?
I’d add that Eliezer’s non-engagement probably signals that within overcomingbias it would be hierarchically disaligning to post open thread comments in here.
At this point at least I get the benefit of amusement in regularly pointing this out. About three weeks of amusement for the low cost of tracking this thread, not a bad bargain!
The suggestion is interesting; Robin and I haven’t talked it over yet. The next obvious step would be to go to twice a month.
I’d like to lobby for straight to once per week, for the following reasons:
1. It’s low cost to you by any measure (including the webpage’s real estate).
2. I think people are more likely to come with their fresh ideas Monday (after a weekend of mulling) than the 1st and the 15th, or what have you.
3. It lowers the barrier to open thread entry. You rarely have more than 10 posts in a week, so there will almost always be an open thread on the “recent posts” list. This is important because I think ideas that can’t be put on your open thread may often be completely lost to the googleable universe. People out of politeness don’t post them to an inappropriate thread, and there’s no other equally low effort way for them to submit their idea.
A question regarding Boltzmann brains. Is it really so that a free-floating brain represents a more probable fluctuation in a high-entropy state, than a fluctuation into a Big Bang Initial State mess, which would then evolve as our universe has evolved according to our current theories, and eventually produce stellar evolution, biological evolution and intelligent life, all the while increasing in entropy on the whole?
Sure, a free-floating Boltzmann brain is a fraction of the mass of a Big Bang Initial State mess, but isn’t the free-floating brain also hugely more organized?
Which represents lower entropy? A small amount of mass in a hugely improbable and organized configuration, or a much larger amount of mass in a configuration that’s actually not so very organized and improbable in that sense?
It seems to me that a Big Bang Initial State mess is actually the smaller fluctuation in a high-entropy state.
The problem with actually regarding oneself as a Boltzmann brain is that the invisible true world of chaos should, sooner rather than later, burst into your low-entropy bubble and show up in your perceptions, if not destroy you. Every moment of continued orderly personal existence is more and more improbable on that model; but this is not true for a model in which the external world is actually orderly, as appearances suggest. If you truly have a cosmology in which most of your subjective duplicates are Boltzmann brains, it’s time to get a new cosmology.
Aleksei, in saying that a brain or an isolated world is more likely than a Hubble volume’s worth of galaxies, I was guided by 19th-century intuition, whose notion of the high-entropy universe is the homogeneous space-filling gas. The gravitational factor does confound things (because now it’s clumping which is high entropy), and I don’t personally know how to calculate gravitational entropy for anything other than a black hole. The closest thing to a standard contemporary framework for approaching this problem appears to be thermalized de Sitter space, which is an eternally expanding space with positive cosmological constant. There are two ways you can get an observer in de Sitter space: you can have a fluctuation big enough to create a region of inflation, within which galaxies and planets will then form in the usual way, or you can have a fluctuation big enough to create a Boltzmann brain. I cannot reproduce the calculation myself, but certainly enough people think that the latter outcome is favored for there now to be a minor subgenre of speculative cosmology which agonizes about how to escape that conclusion. As I said above, if it does turn out that the Boltzmann brains are more numerous in thermal de Sitter space, I would conclude that we are not living there.
HA and LF, I’m certainly thinking about life beyond the commentsphere.
“The problem with actually regarding oneself as a Boltzmann brain is that the invisible true world of chaos should, sooner rather than later, burst into your low-entropy bubble and show up in your perceptions, if not destroy you. Every moment of continued orderly personal existence is more and more improbable on that model; but this is not true for a model in which the external world is actually orderly, as appearances suggest.”
I don’t follow. I thought the Boltzmann brain in this moment in configuration space fluctuates into existence and persists only long enough to ponder “am I a Boltzmann brain?”, with memories, and ideas of scientists existing and studying this, all part of the internal structure of that brain. Adding these other elements (the sooner-or-later stuff) seems to me to be cheating to make the Boltzmann brain less likely. But please explain to me why I’m wrong!
HA, it seems you are adopting the view that your personal history consists of a set of independently existing, instantaneous mind states connected by some form of consistency but not by causality; they possess a logical order by virtue of accumulation of memories, say, but physically they may be scattered all over the multiverse. If you believe that, then it’s true that persistence counts for nothing, because it’s just the illusion of persistence. I am assuming that the experience of persistence requires the actual persistence of something.
Josh: my approach is to find someone who is acknowledged as part of the ‘suspect’ group, but is seen as not hewing to its beliefs. This works on the theory that in this sort of subjective matter, the truth will be somewhere in the middle.
An example of what I mean. Take modern literature. It is in a similar situation as modern art (but not so bad). What I do is look at Harold Bloom – an acknowledged literature expert, who also is a great supporter of the classics/Western Canon and a disliker of much ‘modern’ & ‘post-modern’ stuff – says. If he says Cormac McCarthy is a great author, and the rest of the literary establishment is saying the same thing*, then I can guess I’m going to like _Blood Meridian_ (or at least acknowledge its quality).
Now, this obviously isn’t a good strategy in all areas (I certainly wouldn’t want to split the difference when it comes to evolution), but I find it works well enough in the humanities.
* This is actually a nice example because Bloom was saying good things about McCarthy long before this recent spate of movies and publicity.
(I’m enjoying Mitchell’s answers and the Boltzmann Brain discussion.)
It would be nice if I had a way to communicate with Z. M. Davis without that communication cluttering up this here blog. Will he contact me please?
“it seems you are adopting the view”, “If you believe that”.
I’m not trying to adopt views or believe things. I’m trying to understand what the heck is going on.
“your personal history consists of a set of independently existing, instantaneous mind states connected by some form of consistency but not by causality; they possess a logical order by virtue of accumulation of memories, say, but physically they may be scattered all over the multiverse.”
No, I thought the Boltzmann brain you initially mentioned could fluctuate into existence with memories (call them fake or instantaneous if you like) fluctuating into existence with it and inside of it. That’s not a statement about “instantaneous mind states connected by some form of consistency but not by causality … scattered all over the multiverse”.
I feel like in the last couple of posts you’ve been reaching to strawmanify the Boltzmann brain concept. I’m interested in the strongest version of the theory.
I think you’re answering a different question, namely, what to do if you think people are frauds. The difficult question is deciding if people are frauds. I can’t address it either, but I want to address something else in josh’s question: he seems to be conflating several beliefs. The critics’ praise of Pollock should increase our belief that Pollock is good, but it should have very little effect on our ability to appreciate him.
If we’re BB, we’re instant toast (and all our memories are fake), so we shouldn’t worry about that possibility. But discarding cosmologies that predict BB pretty much throws them all out. Sometimes you just have to accept that your best current guess has serious problems.
Maybe the idea of dating the Open Threads should be abandoned – and they should simply be numbered instead – preferably in the subject line.
Douglas, I don’t follow the logic “If we’re BB then our memories are fake and we’ll instantly fluctuate out of existence, therefore we should believe that we’re not BB.”
I was recently contemplating Eliezer Yudkowsky’s conception of mindspace (where human minds are a very small portion of all possible minds), and its practical application in a future cybernetic/bioengineered/AI inundated society.
An unfortunate quality of the dialectic surrounding the future of intelligence is its almost single-minded focus on power. While this is an understandable fixation, considering the benefits of power and its relation to the much-awaited milestone of human-level AI, it overlooks the lessons that both nature and history have to teach: evolution teaches us that it’s not the strongest/biggest/fastest/(insert here)est that survives, but the best adapted to a specific niche, and all the successful practical applications in AI have been in very specific niches.
We are on the brink of an intelligence explosion, but not one where only processing power will take center stage. The coming paradigm shift will instead be analogous to the Cambrian explosion, where diversity joins power.
The mastery of intelligence will allow its artificers to shape it to fit any need or want. But what needs, what wants? How will people adapt? How will society adapt? Will society adapt?
Since my knowledge of sociology and psychology is limited, I will leave this seed of a thought to grow in more fertile minds.
We shouldn’t waste resources worrying about the possibility that we’ll be dead in a minute, because there’s nothing we can do about it. We (I, really) won’t “fluctuate” out of existence, but instead would be brains in vacuum, with no ability to manipulate the environment and no negentropy to exploit. Not brains, really, something even smaller, but minimally capable of supporting an experience.
HA, I had to hypothesize about your assumptions to motivate what you were saying. Your life and memories do not just consist of the few moments during which you wondered “am I a Boltzmann brain?”, they stretch long before and long after that moment, and so a theory of what you are that only accounts for that moment (and a few others on either side of it) is simply not tenable, unless you have some odd view such as the one I speculatively attributed to you.
But enough. I am going to do something crude and ostentatious and announce a six-month vow of silence, specifically with respect to commenting here, and starting right now. I’ll break it if absolutely necessary, but meanwhile I’ll rely on the good old social emotions to make me think twice about breaking it. The basic reason is that despite all the intelligence gathered here, the final court of appeal in all matters is invariably some form of mathematical naturalism, and since that cannot be the final word on the nature of reality, that establishes a limit to how deep the discussions will ever go. Whenever “metaphysical” concepts like time, consciousness, existence come up, the instinct is to define them in terms which will allow questions about them to be answered by existing bodies of theory, and in general that requires that they be defined away, or “ontologically flattened” as I sometimes put it. Of course such an approach is not unique to this blog; it’s symptomatic of where scientific culture in general is at. We know how to think rigorously about some things, we don’t know how to think rigorously about others, and so the second sort of thing is either shoehorned into the form of the first, or it’s denied away entirely. The path forward must involve returning to raw experience and rethinking the mathematical-physical ontology from the ground up, in a way that denies nothing that’s actually there. So, goodbye until January 2009, I guess!
I thought this would be a good place to ask this. Does anybody know of a named problem or class of problems where: 1) There is a group of people, each of whom has the ability to, for example, electrically shock the rest of the group without shocking themselves. 2) If nobody presses their shock button, nobody gets shocked. However, 3) Somebody presses their button, 4) prompting others to endlessly retaliate in order to shock the others into NOT pressing their buttons. And each time someone shocks the group, someone else feels they have to have the last word. If the group could simultaneously cooperate, everyone would be fine.
Please, I’m not asking for a solution to the problem, reasons why it isn’t a coherent problem, or any other musing on the topic. I’m simply asking if anybody here can point me to a known problem that involves this kind of simultaneous cooperation vs. having the last word. Thanks!
Extraordinarily good writing and willingness to use simple equations (something rarely seen in American papers) in this piece in an Indian newspaper on the genetics of altruism:
Robin and Eliezer,
It seems like you’re keeping the open thread beyond an inconvenience barrier for 3 out of 4 weeks?
Fart spray affects moral judgment.
Talk about strange bias! In reviewing the medical topics, I came across this post yesterday. Thinking it under-discussed, and under-documented, and since it was in my sub-specialty professional field, I spent three hours putting together a well-documented commentary, hoping to generate more discussion. I’m new to blogs; a participant only since I retired. Who knew there was such a thing as *too much documentation*? Since I learned only recently how to make a hyperlink in a comment, I painstakingly constructed a link to references for each point discussed, as evidence-based writing is wont to do. Well, apparently the TypePad software counts the number of hyperlinks in a comment, and if it is beyond some unmentioned number, screens out the comment as spam. My 3 hours of work is gone, and I have no copy. I can’t even re-post it in my own blog. What’s an old sex specialist to do?
… be a charity angel.