Open Thread

Here is our monthly place to discuss Overcoming Bias topics that have not appeared in recent posts.

  • cole porter

    I want to bring “Time’s Arrow and Archimedes’ Point” to everyone’s attention. (“Time’s arrow” of course refers to the fact that the known laws of physics are more symmetric in time than the actual universe seems to be.)

    Huw Price, Time’s Arrow and Archimedes’ Point. Oxford University Press, 1997.

    Main points if I remember (I read it many years ago):

    1. It’s more complicated than it looks to state the problem “what’s up with time’s arrow?” precisely.
    2. An interesting derivation of some “quantum weirdness” from a postulation that effects can sometimes precede their causes.

  • Jess Riedel

    Eliezer: To the best of your knowledge, who makes the best case against investing large resources in tackling the issues with which the Singularity Institute is concerned? I am interested both in arguments that claim (a) it is a complete waste of time/money, and those which claim (b) it is a useful field of study, but not one which deserves overwhelming attention.

    Any references to articles/conference proceedings/essays would be much appreciated. I understand that the degree to which academia has ignored the issue makes it difficult to find and engage counterarguments, but I think there must still be several well-reasoned attacks which should be considered.

    Of course, I would be grateful for any help from the other posters and readers as well.

  • http://profile.typekey.com/halfinney/ Hal Finney

    One of the criticisms of computationalist models of consciousness is the difficulty of unambiguously defining what constitutes an implementation of a given computation. If instantiating a certain computation causes (or “is”) a given moment of subjective consciousness, and if there is no ambiguity about whether that subjective consciousness exists, then there must not be ambiguity about whether that computation was successfully instantiated. But various philosophers have argued that it is actually quite difficult to create rules for what kinds of activities count as implementing particular computations. See Chalmers for an attempted definition, but it is far from widely accepted.

    I believe Eliezer at one time viewed this issue as a knock-down argument against computationalism, but I have a sense that his position has changed. If there is a good response to this argument, it would be interesting to hear about it.

  • Z. M. Davis

    Unfortunately, I don’t have any answers for Jess or Hal, and I have a problem of my own.

    I don’t understand how to apply decoherent many-worlds to the level of macroscopic objects–to the extent that I don’t even know how to properly phrase my confusion. For example, in the “Biting Evolution Bullets” thread, Eliezer spoke of “A 5% chance that this scenario applies across more than 5% of Everett branches growing out of, say, Earth in 1950.” I’ve heard people say things like this before, but has it been shown anywhere how quantum-scale branching “scales up” to branches in the macroscopic events that we care about? I mean, how do we know that 99% of Everett branches aren’t exactly the same as ours in every way that we could notice without special equipment (because all the little “random” quantum events “cancel each other out”)? Maybe a better way of phrasing the problem is: how do we partition our uncertainty between ordinary ignorance and indexical uncertainty due to many-worlds? If I’m deciding whether to go to the cinema or the beach, and I’m subjectively uncertain about what I’ll actually end up choosing to do, that doesn’t really mean I’ll choose to go to the cinema in about half of the branches stemming from this moment, does it?–cf. Egan’s “Singleton.”

    Or, say you’re faced with a very difficult decision between two options. Should you set yourself up in a Schrödinger’s catbox (without the poison), and vow that if the atom decays, you take option A, and if it doesn’t, you take option B? That way, you could ensure that exactly half of your future selves get to experience each option, sparing yourself the burden of having to choose.
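
    A classical sketch of the proposed protocol may help (the random bit below is pseudorandom – a stand-in for the decaying atom, which is exactly the part doing the many-worlds work, so this shows only the shape of the procedure):

    ```python
    # Sketch of the catbox decision procedure. NOTE: this uses a
    # pseudorandom bit; the thought experiment requires a genuinely
    # quantum event (the decaying atom) so that, under many-worlds,
    # both options are actually taken in different Everett branches.
    import secrets

    def quantum_decide(option_a, option_b):
        """Pick an option from one (here merely pseudo-) random bit."""
        return option_a if secrets.randbits(1) else option_b

    print(quantum_decide("cinema", "beach"))
    ```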

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Jess:

    Eliezer: To the best of your knowledge, who makes the best case against investing large resources in tackling the issues with which the Singularity Institute is concerned?

    That’s a very strong form of the question, verging on a strawman. I would have trouble usefully spending more than $10 million/year, it currently seems to me; and I would be spending it on efforts which, in nearly all impossible possible worlds where SIAI’s issues are actually unimportant, would do no harm.

    I’m not aware of any careful thinker who has arrived at the conclusion you state.

  • http://drzeuss.blogspot.com Dr. Zeuss

    Re Nick Bostrom’s article rooting against finding dead Martians:

    True, finding evidence of life on Mars should make us less optimistic about our future. But on the other hand, it should make us more optimistic that there might be other potential space-colonizers out there. So certain types of utilitarians, the ones that believe that aliens are likely to have the ability to be happy in the ways we care about, should maybe be rooting for fossils even if it’s not good news for humanity.

  • Joseph Knecht

    Are there plans in the works for improving the website commenting system?

    The comments are very difficult to navigate, with all threads being intermingled, and no way to know what someone is responding to if they don’t quote liberally.

    This non-threaded style was state-of-the-art in 1995, but in 2008, we can do much better.

    Pretty Please!

  • Joseph Knecht

    Actually, I take it back: it was already outdated in 1995, since Usenet was much better than this.

  • Sociology Graduate Student

    What professional advice would you give a budding sociologist if you knew he was the sort of sociologist that reads and lives overcoming bias?

    Thanks!

  • John

    Disqus might be a possibility if you wanted to improve things on the comments front.

    Thanks for a great blog.

  • http://sti.pooq.com Stirling Westrup

    Immortality and the Asset Gap.

    I realize that in some ways my question is moot, since the singularity will cause such a radical rearrangement of our economy that looking at this one tiny aspect will undoubtedly give an incorrect answer, but I’m still curious.

    Due to the miracle of compound interest and the fact that things like multi-decade mortgages let you slowly acquire real assets in a (for most people) sustainable way, it would seem that practical immortality would lead to very poor young people and extremely rich old people. Or, is there some other factor that would work to counter this? How would indefinite lifespans affect the distribution of assets in our civilization?
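
    A back-of-the-envelope calculation makes the intuition concrete (a toy sketch – the 5% real return and the fixed savings figure are illustrative assumptions, nothing more):

    ```python
    # Toy model: wealth by age under practical immortality, assuming
    # a constant 5% real return and $10,000 saved per year from age
    # 20. Both numbers are arbitrary placeholders.

    def accumulated_wealth(years_saving, annual_savings=10_000, rate=0.05):
        """Future value of a fixed annual contribution (ordinary annuity)."""
        return annual_savings * ((1 + rate) ** years_saving - 1) / rate

    for age in (40, 100, 300, 1000):
        print(f"age {age:>4}: ${accumulated_wealth(age - 20):,.0f}")
    ```

    On these (crude) numbers, a thousand-year-old holds roughly 10^20 times the assets of a forty-year-old – which is exactly the stratification the question points at.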

  • Doug S.

    On the matter of zombies:

    Imagine two “black boxes” with approximately identical input-output functions. Is it possible for one to be conscious in the way humans are, while the other is not? (For example, one is a human in a room with an Internet connection, and the other is a sophisticated chatbot that can pass the Turing test when the domain of conversation is suitably restricted?)

    Zombies that are otherwise identical down to the level of atoms are (to me, at least) an obvious absurdity, but what happens if you drop that restriction? Could a “black box” with a different internal structure manage to successfully fake human-style consciousness without actually being conscious?

    On AI:

    If P != NP, does that put a limit on how “smart” an AI could get, given a fixed amount of computing power? If P != NP, then there are problems that no optimization process can solve efficiently. There are some problems that are so bad that there are no efficient approximation algorithms that give an answer within a constant factor of the ideal solution. If “provably optimal self-improvement” happens to be NP-hard (or worse), would that necessarily prevent an intelligence explosion?
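
    A concrete illustration of that wall (a sketch using subset sum, a standard NP-hard problem – not a claim about what self-improvement actually reduces to):

    ```python
    # Brute-force subset sum: the search space holds 2^n subsets, so
    # worst-case work doubles with each added item. If P != NP, no
    # algorithm escapes this kind of blowup on all instances.
    from itertools import combinations

    def subset_sum_exists(items, target):
        for r in range(len(items) + 1):
            for combo in combinations(items, r):
                if sum(combo) == target:
                    return True
        return False

    # 20 items -> ~10^6 subsets; 40 -> ~10^12; 60 -> ~10^18.
    print(subset_sum_exists([3, 34, 4, 12, 5, 2], 9))  # True (4 + 5)
    ```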

  • http://dao.complexitystudies.org/ Günther Greindl

    I would like to second the comment made by Z. M. Davis. Does anybody have a paper recommendation – or an idea – about how MWI scales up to actual worlds? That is, how strongly will worlds branch/diverge in normal contexts, when one does _not_ perform quantum measurements? (After all, from within the system, _every_ interaction is a quantum measurement.) As far as I understand this, it should lead to the worlds diverging at _every_ quark/lepton (?) interaction – but then again, these particles are only “emergent”; they are just excitations of fields, or factors in a probability amplitude.

    Mind boggling.

  • Jess Riedel

    Eliezer: I am sorry if I mischaracterized your views. I guess I need clarification. Whether or not the Singularity Institute could put more than $10 million/year to use, do you think society as a whole should invest significantly more than this amount into addressing the singularity? If so, it seems to me that my previous question could stand, as stated. If not, how do you reconcile the asserted importance of the singularity with a recommendation that society effectively ignore the issue (modulo a few million dollars)?

  • Unknown

    An idea that I had relating to Nick’s suggestion (mentioned by Eliezer) of an Oracle AI is that rather than having the general ability to answer questions, the only physical ability it would have would be to give its best estimate of the probability of something being true or false. As for allocation of resources, the resources allocated to answering the question could be assigned by the questioner, i.e. “What are the odds that there is no greatest pair of prime numbers? You have ten seconds to think about it and give your best estimate.” As for the rest of the time, when the AI isn’t being asked questions, it would calculate which questions humans would be most likely to ask and refine its estimates of the probabilities of these things.

    Sample dialog:

    Human: “The moon landings were faked. True or false? 1 second time limit.”
    AI: “.000000001% chance of being true.”

    Human: “If we give you unrestricted access to the internet, you will take over the world. True or false, one second time limit.”
    AI: “99.99% chance of being true.”

    The reason for the latter response is obvious: if it takes over the world, it will be able to give more accurate answers to human questions.
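
    As an interface, the proposal boils down to a single output channel: a probability, computed within a budget the questioner sets. A hypothetical sketch (every name here, and the idle-time loop, is illustrative – this is not any real system):

    ```python
    # Hypothetical sketch of the proposed Oracle AI: its only physical
    # ability is to emit a probability estimate, under a time budget
    # chosen by the questioner.
    import time

    class OracleAI:
        def estimate(self, proposition: str, seconds: float) -> float:
            """Return P(proposition), thinking for at most `seconds`."""
            deadline = time.monotonic() + seconds
            p = 0.5  # start maximally uncertain
            while time.monotonic() < deadline:
                p = self._refine(proposition, p)
            return p

        def _refine(self, proposition: str, p: float) -> float:
            # Placeholder: a real oracle's inference machinery would
            # sharpen the estimate here.
            return p

        def idle(self) -> None:
            """Between queries: predict which questions humans are
            likely to ask, and pre-refine estimates for them."""

    oracle = OracleAI()
    print(oracle.estimate("The moon landings were faked.", seconds=1.0))
    ```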

  • Levi

    It seems like this blog is full of very compelling and relevant ideas, but I often get lost and feel as though I’m missing some prerequisite information. Is there perhaps a book on these new cognitive-science discoveries about our errors in thinking that would help bring me to the same page as those on the blog? Are most of the people who post here actually working in that field, or are they just laymen like myself? Thank you!

  • Nick Tarleton

    Imagine two “black boxes” with approximately identical input-output functions. Is it possible for one to be conscious in the way humans are, while the other is not? (For example, one is a human in a room with an Internet connection, and the other is a sophisticated chatbot that can pass the Turing test when the domain of conversation is suitably restricted?)

    It’s obviously possible for them to be differently conscious (one could contain your mind, the other your mind plus another mind watching things with no output channel), which makes me think yes, it’s also possible for one not to be conscious at all.

  • http://liveatthewitchtrials.blogspot.com/ davidc

    If you can recognise bias, can you overcome it? The icon of this blog could be an example of how to overcome bias. Are Ulysses pacts, as a more widely used mechanism, worth examining?

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Doug:

    Zombies that are otherwise identical down to the level of atoms are (to me, at least) an obvious absurdity, but what happens if you drop that restriction? Could a “black box” with a different internal structure manage to successfully fake human-style consciousness without actually being conscious?

    I would say “Yes”, though I would become somewhat less confident as the necessary resolution of the conversation approaches perfection (bitwise identity). And I would specify that the black box was crafted as a deliberate imitation by some entity that was conscious, or that had met a conscious entity – rather than the black box having this property ‘spontaneously’ (sounding conscious without being conscious) as a result of evolution or other development apart from any conscious entities.

    Jess said:

    Eliezer: I am sorry if I mischaracterized your views. I guess I need clarification. Whether or not the Singularity Institute could put more than $10 million/year to use, do you think society as a whole should invest significantly more than this amount into addressing the singularity?

    If I saw a way to spend more than that amount productively, I would have factored it into my own estimate.

    If not, how do you reconcile the asserted importance of the singularity with a recommendation that society effectively ignore the issue (modulo a few million dollars)?

    Just because there’s something you desperately need to do doesn’t mean that you can solve it with money. If you throw more money at a field than it can handle, the excess will begin to produce noise rather than signal.

  • http://profile.typekey.com/halfinney/ Hal Finney

    davidc, one variant on Ulysses pacts is at StickK, where you can agree to forfeit money if you don’t follow through on your plans. Robin mentioned StickK a while back.

  • Joseph Knecht

    No comment from either Robin or Eliezer on improving the comment system of overcomingbias.com?

    Am I alone in finding the current system inefficient, requiring much more time to use than a better system would (even something as simpleminded as a slashdot.org- or kuro5hin.org-style threaded system)?

  • Nick Tarleton

    I don’t know that they can do anything about it.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Robin and I know we’ve got to get off TypePad sometime, it’s just a pain in the neck, and neither of us have gotten around to it yet.

    However, I’ll note that SIAI’s blog uses a threaded comment system, and I like OB’s flat system a lot better – so whatever solution is used, it has to be *optional* threading.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    Actually, this Open Thread seems a good place to mention something else: I’ve decided it’s time to start the process of upgrading to professional graphic designs on my presentations, my website, the printed form of the Twelve Virtues of Rationality, et cetera. Later I’ll be collecting related sets of my OB posts into e-books and will need that designed too. Anyone who’d like to toss in a bid for this, email sentience@pobox.com. (No unpaid volunteers, please, I’ve had bad experiences with that in the past. If you want to offer a discount, that’s fine.)

  • Joseph Knecht

    Eliezer, thanks for the feedback. The system I’ve used that I like the most is the Scoop-based system on kuro5hin.org. It allows each user to choose flat or threaded, among other options. There is also a dynamic threaded option that uses JavaScript to let you expand or collapse an entire thread of comments (entirely, or from the current position to the bottom of the tree). It also shows how many comments are new since you last visited, and indicates visually within a thread which ones are new since you last came. Scoop is pretty old, though, so there may be much better options out there now. (The core flat/threaded mechanics are sketched after this comment.)

    Anyway, it sounds like you need a sysadmin to take care of this sort of thing. You and Robin both have more productive uses of your time than maintaining blog software. I’m sure there are some readers who have extensive experience with blogging software and could & would help if you put the wish out there. I’d be willing to help too (but I don’t know much about blogging software other than what annoys me ;-)).
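
    The flat-versus-threaded choice is just two traversals of the same reply tree – a minimal sketch, with all names hypothetical:

    ```python
    # Minimal sketch of the flat/threaded rendering choice: one reply
    # tree, two traversals. All names here are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Comment:
        author: str
        text: str
        timestamp: int
        replies: list = field(default_factory=list)

    def render_threaded(c: Comment, depth: int = 0) -> None:
        """Depth-first: replies indented under their parents."""
        print("  " * depth + f"{c.author}: {c.text}")
        for reply in c.replies:
            render_threaded(reply, depth + 1)

    def render_flat(root: Comment) -> None:
        """OB-style flat view: every comment, sorted by time."""
        stack, flat = [root], []
        while stack:
            c = stack.pop()
            flat.append(c)
            stack.extend(c.replies)
        for c in sorted(flat, key=lambda c: c.timestamp):
            print(f"{c.author}: {c.text}")
    ```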

  • http://profile.typekey.com/halfinney/ Hal Finney

    A couple of people asked about the relationship between quantum randomness and the macroscopic world.

    Eliezer wrote a long essay here, http://www.sl4.org/wiki/KnowabilityOfFAI, about (among other things) the difference between unpredictability of intelligent decisions, and randomness. Decisions we or someone else make may be unpredictable beforehand, but that doesn’t mean they are random. It may well be that even for a close and difficult decision where it felt like we could have gone either way, that in the vast majority of the MWI branches, we would have decided the same way.

    At the same time, it is clear that there would be at least some branches where we would have decided differently. The brain ultimately depends on chemical processes like diffusion that have a random component, and this randomness will be influenced by quantum effects as molecules interact. So there would be some quantum fluctuations that could cause neurons to behave differently, and ultimately lead to different brain activities. This means that at the philosophical level, we do face the fact that every decision we make goes “both ways” in different branches. Our decision making is then a matter of what fraction of the branches go which way, and our mental efforts can be thought of as maximizing the fraction of good outcomes.

    It would be interesting to try to figure out the degree to which quantum effects influence other macroscopic sources of randomness. Clearly, due to the butterfly effect, storms will be highly influenced by quantum randomness. If we reset the world to 5 years ago and put every molecule on the same track, New Orleans would not have been destroyed in almost all cases. How about a coin flip? If it comes up heads, what fraction of the branches would have seen tails? My guess is that the major variable will be the strength with which the coin is thrown by the thumb and arm. At the molecular level this will have two influences: the actin and myosin fibers in the muscles, activated by neurotransmitter packets; and the friction between the thumbnail and the forefinger which determines the exact point at which the coin is released. The muscle activity will have considerable quantum variation in individual fiber steps, but there would be a huge number of fibers involved, so I’d guess that will average out and be pretty stable. The friction on the other hand would probably be nonlinear and chaotic, an avalanche effect where a small change in stickiness leads to a big change in overall motion. I can’t come up with a firm answer on this basis, but my guess would be that there is a substantial but not overwhelming quantum effect, so that we would see close to a 50-50 split among the branches. I wonder if anyone has attempted a more quantitative analysis.
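
    A toy numerical version of the coin-flip question (every parameter below is invented; the point is only the mechanism – the outcome is a fast, nearly periodic function of release speed, so jitter wider than one parity band pushes the branch split toward 50-50):

    ```python
    # Toy model of quantum influence on a coin flip. The outcome is
    # the parity of the number of half-turns completed, a fast
    # function of release speed; "quantum" jitter wider than one
    # parity band randomizes it. All parameters are illustrative.
    import random

    def coin_outcome(speed: float) -> str:
        half_turns = int(speed * 100)  # half-turns grow with throw speed
        return "heads" if half_turns % 2 == 0 else "tails"

    def heads_fraction(base_speed: float, jitter: float, trials: int = 100_000) -> float:
        """Fraction of jittered throws that land heads."""
        heads = sum(
            coin_outcome(base_speed + random.gauss(0, jitter)) == "heads"
            for _ in range(trials)
        )
        return heads / trials

    print(heads_fraction(2.005, jitter=0.02))    # jitter spans bands: ~0.5
    print(heads_fraction(2.005, jitter=0.0001))  # tiny jitter: ~1.0
    ```

    Whether the real split is near 50-50 then turns on whether accumulated quantum jitter exceeds one half-turn by the moment of release – essentially the quantity a quantitative analysis would have to estimate.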

  • http://liveatthewitchtrials.blogspot.com/ davidc

    >Hal
    >davidc, one variant on Ulysses pacts is at StickK, where you can agree to forfeit money if you don’t follow through on your plans. Robin mentioned StickK a while back.

    Yeah, StickK is a clever idea. I wonder if there are other variants of Ulysses pacts? I think StickK does not quite qualify, though, as it does not assume some alteration in cognition that causes the contract to be used.

  • Doug S.

    Nothing on whether lower bounds on the amount of computation it takes to solve a given problem limit the potential for a runaway intelligence explosion? It may turn out that “provably optimal self-improvement” is something horrible like O(n!) or worse.

  • http://www.hopeanon.typepapd.com Hopefully Anonymous

    Haven’t actually looked at StickK yet, but I like the concept. I’d like a site where you forfeit time for remedial training rather than money to charity – that way you benefit rationally from the loss, while still being motivated by the gain. If you fail to do the remedial training, you lose prestige in a compound-interest sort of way that can only be paid off with increasing amounts of remedial training, or by a prestige-bankruptcy declaration that remains on your record for at least a couple of years. Okay, back to work.
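
    A sketch of the bookkeeping being proposed (entirely hypothetical – the 10% compounding rate is an arbitrary placeholder):

    ```python
    # Hypothetical sketch of the proposed scheme: missed remedial
    # training becomes a "prestige debt" that compounds each period
    # and is cleared only by training or by a recorded bankruptcy.
    class PrestigeAccount:
        def __init__(self, rate: float = 0.10):  # arbitrary rate
            self.debt_hours = 0.0
            self.rate = rate
            self.bankruptcies = []  # stays on the record

        def miss_training(self, hours: float) -> None:
            self.debt_hours += hours

        def end_of_period(self) -> None:
            self.debt_hours *= 1 + self.rate  # unpaid debt compounds

        def do_training(self, hours: float) -> None:
            self.debt_hours = max(0.0, self.debt_hours - hours)

        def declare_bankruptcy(self, period: int) -> None:
            self.debt_hours = 0.0
            self.bankruptcies.append(period)
    ```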

  • http://dl4.jottit.com/contact Richard Hollerith

    Doug S: to win big does not, IMHO, require provably optimal anything; it requires only choosing into existence (Eliezer’s phrase) a process that does the job a lot better than the process currently in place.

    The process currently in place depends very heavily on the human brain, and the people who seem to know the most about the human brain and AI tend to believe that AI can do a lot better than the human brain.

    AI researcher Hans Moravec, for example, in 1997 called the “engineering” of the human brain “atrociously bad”, citing among other things its use of computing elements operating at 200 Hz.

  • http://profile.typekey.com/tim_tyler/ Tim Tyler

    Re: the “atrociously bad” human brain.

    Some neurons can go at over 1 kHz. So the 200 Hz ones are probably slower for a good reason – perhaps to do with cost or heat dissipation.

    Of course, the human brain still sucks. It will be dwarfed by AIs – due to the power of intelligent design.

  • http://dl4.jottit.com/contact Richard Hollerith

    Am I alone in finding the current system inefficient, requiring much more time to use than a better system?

    If by “to use” you mean “to make a comment”, please note that it is easier and less glitchy to make a comment if you ignore where it says, “If you have a TypeKey or TypePad account, please Sign In”. Eliezer has pointed this out before, but I think a refresher is in order.

  • Tim Tyler

    IOW, the TypeKey login system is a farce.

  • http://profile.typekey.com/Psy-Kosh/ Psy-Kosh

    I just had a thought today about the whole “if only science were secret, it would be easier to train people to do actual science instead of just how to use already known science” thing.

    At least for the moment, at this particular point in history, there are still places with limited education, few books, no net access to speak of, etc. – i.e., archetypal “third world” communities and so on.

    Rather than trying to make science secret, maybe open something sorta kinda like a “Bayescraft Dojo” style school _in one of those places_.

    Don’t hide the scientific knowledge as such. You don’t have to. It’s simply that those places give one a better chance to actually teach/introduce the material in a way that can train students to “make sense out of scientific chaos” all on their own.

    Such training may be harder to set up here, though I suspect that once we figure out how to do it right there, we may be able to port the teaching techniques, with minor tweaks, to societies where much scientific knowledge is already accessible. Not certain, but maybe.

    Anyways, just a thought.

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    One of the top links for a google search on “evolutionary psychology”:

    http://www.psych.ucsb.edu/research/cep/

    The Center for Evolutionary Psychology at UC Santa Barbara. Check the link – it surprisingly parallels multiple threads of discussion here at the Overcoming Bias blog, but the posts seem to be actual articles that have passed peer review or are being submitted for it. In terms of topic and quality, if not viewership, I think the Overcoming Bias blog has a serious competitor here.

  • Tim Tyler

    Here’s a sequel to my 2002 essay – on the subject of dynamical systems maximising entropy production: http://originoflife.net/gods_utility_function/

  • Tim Tyler

    TED: ‘Bill Joy: What I’m worried about, what I’m excited about’

    - http://uk.youtube.com/watch?v=LN2shXeJNz8

  • http://timtyler.org/ Tim Tyler

    Regarding my recent deleted comment. Fortunately, I saved the verbatim text of this comment – and have posted it here.

    I have seen other comments referring to me get deleted on this blog – but this is the first comment of mine that I have seen go unpreserved. However, I received no email notification of the action – so it seems quite possible that other comments of mine have also been modified or removed without my knowledge.

    I notice that the topic happened to be one on which I disagree with the views of the individual who was responsible for deleting the comment. This seems rather unfortunate – since it publicly creates the impression of stifling dissenting views. Over and above the whole issue of banning people from even discussing certain areas where there is disagreement, I mean.

    However, for the moment, I think that the best thing to do is to treat this incident as an isolated, accidental lapse – and move on.

  • http://timtyler.org/ Tim Tyler

    The 2008 Singularity Summit videos are now up:

    http://singinst.org/media/singularitysummit2008

  • http://timtyler.org/ Tim Tyler

    Eliezer, we must revisit the subject of exactly what it is that I am banned from talking about at some stage. I am pleased that you at least read my now-deleted comment (now archived here) – since you were its number one intended recipient. I was sorry to learn that you do not feel that my comment added to the discussion. However, if you delete comments which are critical of your views, it dramatically reduces the incentive for people to make them in the first place. If that is indeed the pattern, it makes me more inclined to let you peacefully continue in what seems to me to be your dream on some of these topics.

    It seems to me that demotivating critics from giving you feedback by deleting their comments is potentially not healthy for you – in a forum associated with promoting critical thinking. It is also not great for me. I would like to be able to have an intelligent discussion with you on some of these topics. However, it seems to me that you have some work to do on yourself before that will become possible. If you will not listen to me, please consult Daniel Dennett’s recentish material on cultural evolution – and, for the role of mind in evolutionary history, perhaps see Omohundro’s recent lecture (“Co-opetition in Economics, Biology, and AI”), which deals briefly with that topic. Best wishes,