Bits Of Secrets

“It’s classified. I could tell you, but then I’d have to kill you.” Top Gun, 1986

Today, secrets are lumpy. You might know some info that would help you persuade someone of something, but reasonably fear that if you told them, they’d tell others, change their opinion on something else, or perhaps just get discouraged. Today, you can’t just tell them one implication of your secret. In the future, however, the ability to copy and erase minds (as in an em scenario) might make secrets much less lumpy – you could tell someone just one implication of a secret.

For example, what if you wanted to convince an associate that they should not go to a certain party? Your reason is that one of their exes will attend the party. But if you told them that directly, they would then know that this ex is in town, is friendly with the party host, etc. You might just tell them to trust you, but what if they don’t?

Imagine you could just say to your associate “I could tell you why you shouldn’t go to the party, but then I’d have to kill you,” and they could reply “Prove it.” Both of your minds would then be copied and placed together into an isolated “box,” perhaps with access to some public or other info sources. Inside the box the copy of you would explain your reasons to the copy of them. When the conversation was done, the entire box would be erased, and the original two of you would just hear a single bit answer, “yes” or “no,” chosen by the copy of your associate.
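
Here is a minimal sketch of that flow in Python, purely illustrative: real mind copying, sandboxing, and erasure don’t exist, so the Mind, converse, and run_box pieces below are invented stand-ins. The point is just the information flow: the copies see everything, while only one bit leaves the box.

import copy

# Illustrative stand-ins only: there is no real mind-copying or sandboxing here.
class Mind:
    def __init__(self, name, beliefs):
        self.name = name
        self.beliefs = dict(beliefs)

def converse(persuader, listener, sources):
    # Placeholder for the in-box conversation: the persuader's copy explains
    # the secret, the listener's copy updates and decides. Trivial rule here.
    listener.beliefs.update(persuader.beliefs)
    return listener.beliefs.get("should_skip_party", False)

def run_box(persuader, listener, sources):
    # Copy both minds into the box; the originals never see what happens inside.
    p_copy, l_copy = copy.deepcopy(persuader), copy.deepcopy(listener)
    verdict = converse(p_copy, l_copy, sources)
    del p_copy, l_copy            # "erase" the box and everything in it
    return bool(verdict)          # only a single bit leaves

you = Mind("you", {"ex_in_town": True, "should_skip_party": True})
friend = Mind("friend", {})
print(run_box(you, friend, sources=["public archives"]))   # True, and nothing more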

Now, as usual, there are some complications. For example, the fact that you suggested using the box, as opposed to just revealing your secrets, could be a useful clue to them, as could the fact that you were willing to spend resources to use the box. If you requested access to unusual sources while in the box, that might give further clues.

If you let the box return more detail about their degree of confidence in their conclusion, or about how long the conversation took, your associate might use some of those extra bits to encode more of your secrets. And if the info sources accessed by those in the box used simple caching, outsiders might see which sources were easier to access afterward, and use that to infer which sources had been accessed from in the box, which might encode more relevant info. So you’d probably want to be careful to run the box for a standard time period, with unobservable access to standard wide sources, and to return only a one bit conclusion.
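
To put rough numbers on that: every distinguishable output the box can produce is a channel the copies could use to smuggle out part of the secret. The granularities below are made up for illustration, but the arithmetic is just log2 of the number of distinguishable values.

from math import log2

# Made-up output granularities, for illustration only.
outputs = {
    "verdict": 2,            # yes / no: the one bit you actually want
    "confidence_levels": 8,  # e.g. an 8-point confidence scale
    "runtime_buckets": 16,   # run time rounded to one of 16 lengths
}
for name, n in outputs.items():
    print(f"{name}: {log2(n):.0f} bits")
print(f"total: {sum(log2(n) for n in outputs.values()):.0f} bits per use")
# verdict: 1 bit, confidence: 3 bits, runtime: 4 bits; 8 bits per use vs. 1.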

Inside the box, you might just reveal that you had committed in some way to hurt your associate if they didn’t return the answer you wanted. To avoid this problem, it might be usual practice to have an independent (and hard to hurt) judge also join you in the box, with the power to make the box return “void” if they suspected such threats were being made. To reduce the cost of using these boxes, you might have prediction markets on what such boxes would return if made, but only actually make them a small percentage of the time.

There may be further complications I haven’t thought of, but at the moment I’m more interested in how this ability might be used. In the world around you, who would be tempted to prove what this way?

For example, would you prove to work associates that your proposed compromise is politically sound, without revealing your private political info about who would support or oppose it? Prove to investigators that you do not hold stolen items by letting them look through your private stores? Prove to a date you’ve been thinking lots about them, by letting them watch a video of your recent activities? Prove to a jury of voters that you really just want to help them, by letting them watch all of your activities for the last few months? What else?

In general, this would seem to better enable self-deception. You could actually not know things anywhere in your head, but still act on them when they mattered a lot.

  • Jonathan Graehl

    Both parties trust the box implementer to not leak info about the uploaded minds, and the friend trusts the box to output the true decision. This suggests some kind of trusted and/or zero-knowledge-provable computation (which raises computational expense unless you’re willing to trust a third party to run the box).

    Also, I’m unclear on what you mean by “prediction markets on what such boxes would return if made” – who’s going to bet against you, when they don’t have access to your secret info? Seems like a baseline feature based on your track record (how often you were found to lose a box-trial) would be as informative as any market (unless your idea is that some market participants share the secrets that inspire your box-promise).

  • http://www.facebook.com/profile.php?id=5740246 Matthew Graves

    “To reduce the cost of using these boxes, you might have prediction markets on what such boxes would return if made, but only actually make them a small percentage of the time.”

    That doesn’t sound like a liquid market to me, and this mucks with the incentive structure of the copy being told the secret.

    Imagine no external participants, and so the ‘market’ is more of a ‘bet’. “You shouldn’t go to this party, and if our copies discuss it, I win $10 from you if your copy says ‘one’ and you win $50 from me if your copy says ‘zero.’” Now, the copy is not transmitting simply whether or not they’ve been convinced, but whether or not believing the info is worth the $60 swing. More info has leaked out with the size of the bet, but also, the secret revealer is likely to be compensated for revealing the info, and particularly so in the cases where the info is surprising to the listener.
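
    (To spell that out with invented numbers, the $10/$50 stakes being Matthew’s and the advice values made up: the listener’s copy, caring about the listener, now reports “one” only when acting on the advice is worth more than the $60 swing, not merely when it has been convinced.)

# Illustrative only; payoffs are to the original listener, whom the copy cares about.
def copy_says_one(value_if_heeded, value_if_ignored, cost_if_one=10, gain_if_zero=50):
    # Saying "one" costs the original $10; saying "zero" wins them $50.
    return value_if_heeded - cost_if_one > value_if_ignored + gain_if_zero

print(copy_says_one(value_if_heeded=100, value_if_ignored=0))  # True: advice worth more than $60
print(copy_says_one(value_if_heeded=40,  value_if_ignored=0))  # False: convinced, but not by $60 worth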

  • Robin Hanson

    Matthew and Jonathan, the case where the secret holder just offers to bet for a favorable outcome, and no one else takes them up on that offer, counts as a use of a prediction market. By using a market one allows more people to come in on either side, but having no one actually come in isn’t a problem.

  • http://www.gwern.net/ gwern

    Open questions: to what extent does the boxing procedure improve over crypto zero-knowledge proofs? What can the boxing do that ZKPs cannot and vice-versa? Is there any useful hybrid where boxing+ZKPs can do something interesting that cannot be done by either alone?

  • Doug

    “Today, you can’t just tell them one implication of your secret.”

    Techniques to only reveal certain implications of secrets already do exist today. They’re called zero-knowledge proofs.

    http://en.wikipedia.org/wiki/Zero-knowledge_proof
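
    (For concreteness, here is a toy example of the kind of proof Doug points to, a Schnorr-style identification protocol: Peggy convinces Victor she knows the discrete log x of y without revealing x. The parameters are deliberately tiny and insecure; this is a sketch of the idea, not anything specific to the linked article.)

import secrets

# Toy Schnorr zero-knowledge identification; tiny, insecure parameters for illustration.
p, q, g = 23, 11, 4            # p prime, q prime order of g's subgroup (p = 2q + 1)
x = 7                          # Peggy's secret
y = pow(g, x, p)               # public key

def prove(challenge_fn):
    r = secrets.randbelow(q)   # Peggy's random nonce
    t = pow(g, r, p)           # commitment
    c = challenge_fn(t)        # Victor's random challenge
    s = (r + c * x) % q        # response; reveals nothing about x on its own
    return t, c, s

def verify(t, c, s):
    # Accept iff g^s == t * y^c (mod p), which holds exactly when s = r + c*x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

t, c, s = prove(lambda t: secrets.randbelow(q))
print(verify(t, c, s))         # True: Victor learns one bit, not the secret x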

    • Robin Hanson

      I think a zero-knowledge proof would usually require far more formalization of what people know than they are usually capable of.

      • Doug

        Actually, thinking about it, one could set up a zero-knowledge proof for your scenario with presently available technology. No need to wait for Ems.

        Peggy wants to convince Victor that he shouldn’t go to the party for his own good. Victor suspects that Peggy is lying and has ulterior selfish motives.

        Current brain scan technology exhibits very high reliability at detecting lies. So Victor could simply put Peggy in his brain scanner and ask her if he really shouldn’t go for his own good.

        But Peggy doesn’t want to be subjected to Victor’s scanner because he could use it to find out other private information from Peggy’s brain. On the flipside Victor can’t trust the results from Peggy’s scanner because she could have rigged it to print out a previous scan done when she was telling a mundane truth.

        At this point we can use another presently available brain scan technology: the ability to tell what a person is looking at from their brain scan (currently used to observe people’s dreams).

        Victor lets Peggy use her brain scanner, but he asks her the question and at the same time flashes a randomly chosen image. Peggy scans her brain, then uses Victor’s public key to sign the scan. She then passes the signed scan to a cryptographic isomorphic function that maps the brain scan to a two-bit output. Bit one is whether the brain is in a lying state or not, bit two is whether the brain was observing the image that Victor chose.

        Peggy can pass the signed results from the isomorphic function to Victor and prove that she was telling the truth, and that it must have been at the time she was observing the secret image Victor chose.

        Peggy protects her neural privacy and Victor is assured of Peggy’s honesty.

      • http://overcomingbias.com RobinHanson

        I agree that reliable lie detectors would also allow a lot of interesting applications, but I’m skeptical they would ever get so reliable as to be a full substitute for the boxes I discuss here. Humans are too good at self-deception.

  • Robert_Easton

    I don’t think I want to create clones of myself with full intention of killing them soon after. I know from experience of being me that they will not like that.

    • http://profiles.google.com/zwilliams8377 Zachary Williams

      Ditto. What is the incentive of the EMs to provide trustworthy information? I suppose we could provide them with a few cycles of stimulating their pleasure centers or whatnot, but I’m not sure if I would be particularly happy about the whole deal, as an EM. Of course Kiln People addresses this whole question pretty interestingly.

      • Daniel Carrier

        They presumably care about the originals. I doubt they’d be doing this if they didn’t consider each other the same person.

      • http://profiles.google.com/zwilliams8377 Zachary Williams

        I can see this working if their memories (sans the bits that need to be censored for the scheme to work) were incorporated into the original self, but that sounds hard, and, just trying to imagine if I woke up one day as a temporary EM, I have a hard time imagining myself feeling particularly positive towards my original: he doesn’t seem to care about me, so why should I care about him? And of course if I’m going to be destroyed soon anyway why not sabotage everything just for the hell of it? I won’t face the consequences.

      • Daniel Carrier

        If someone threatened to erase the last minute of your memories, would you accuse them of attempting to kill you? What if they did it perfectly, and restored your mind to exactly how it was a minute ago? What if they made your mind do the same calculations, but in parallel?

      • http://profiles.google.com/zwilliams8377 Zachary Williams

        You know, I don’t have a good response to that. I feel that the analogy isn’t perfect, but I don’t have the time or energy to analyze it in any depth, so, point taken. It will be digested over time, and perhaps accepted.

      • IMASBA

        Memories aren’t a person, just like a stats sheet or a screenshot of a video game isn’t a video game. A person can interpret memories, act on them and make new memories. When you destroy an EM you are destroying the entire machinery of personhood. From the perspective of the EM you are really killing him. Maybe you don’t see it that way, but that’s irrelevant, you’re not god, you don’t get to decide over another person’s life. The EM is a sentient being, if he requests to not be destroyed that request has to be granted. The only way around this is to lift the prohibition of murder (which means anyone can legally kill you too) or to have some law that discriminates artificial persons from biological persons (which is ethically disgusting).

      • Daniel Carrier

        That doesn’t seem to address the point. If you perfectly erase a person’s memories, it’s the same as deleting them and restarting from a backup, which is itself the same as just branching and quickly deleting. At what point does it become killing, and why?

        I should add that that does gloss over one possibility. It could be that going from imperfect deletion to perfect deletion is the point that they’re killed.

        “The EM is a sentient being, if he requests to not be destroyed that request has to be granted.”

        That issue comes up more in other stuff that Robin’s written, but not really here. Nobody will volunteer if they’re not okay with what may or may not be death. If you’re going to be duplicated a bunch until your knowledge isn’t useful, then most of you is destroyed, you have quite some time to change your mind. If someone’s just going to explain it to you, it doesn’t matter so much.

        “The only way around this is to lift the prohibition of murder (which means anyone can legally kill you too) or to have some law that discriminates artificial persons from biological persons (which is ethically disgusting).”

        It’s a question of defining a person. If you consider all branches to be the same person, or all branches less than x apart, then it’s not murder to prune the tree. It’s only murder to destroy it entirely. If there’s only one instance left, it’s not murder.

        Personally, I don’t consider any of them the same person. That is, you now and you two seconds from now are two different people. Death doesn’t mean much from that perspective. It’s just like someone standing at the end of a line, except it’s temporal instead of lateral.


  • IMASBA

    What makes you think the EMs in the box would answer calmly and truthfully? I imagine they’d be in shock/panicking about their impending death (them getting erased), smart EMs may try to prolong their existence by keeping you waiting for an answer.

    All in all it’s murder, we can smooth talk around that all we want but it’s still murder.

    • Margin

      “murder”

      Ahaha, it gets even more fun.

      Imagine your outrage when you realize that EMs will run on more than one computer during their lifetimes, that they may be backed up and their processes suspended for periods of time, or sent as data packets through networks, their computation continued on a different continent!

      Ah, the mass murders!

      What will happen to all their souls, you wonder!

      Will their spiritual essences be flung into the Void, doomed to exist in cold, dark loneliness forever, praying to a God that doesn’t answer?

      • IMASBA

        I don’t think it’s funny. Copying an EM and then erasing the original, or leaving it deactivated until it gets overwritten or the data carrier gets destroyed is indeed murder. If I were an EM I’d stay in the same computer until a way was discovered to transfer me (a real transfer, not a copy after which the original is destroyed). Of course it’s different if an EM decides to be copied him/herself, then it’s “just” suicide but when the EM is forced by someone else to end its existence it is plain and simple murder.

      • Margin

        “a real transfer, not a copy”

        You seem to think your identity is magic, not information.

        In reality your identity is information, not magic.

        Feel free to stay confused.

      • IMASBA

        “In reality your identity is information, not magic.”

        That’s word play. You define “identity” differently from how I define it. The difference is easy to see: you believe there can be two “yous” in the world (or one you in the world and one you in Valhalla/heaven/happy hunting fields). I believe that even identical beings are still separate entities. For me it’s not enough that a copy is indistinguishable to everyone else, for you it is. But all of that is meaningless because even if after a copy procedure both versions (one on the old data carrier, one on the new data carrier) would have equal claim to being the “original you”, one of them would still die when he’s erased from the old data carrier. So no matter how you define identity, copying an entity but letting only one version continue living is murder.

        Perhaps (for example if consciousness turns out to be holistic or if consciousness is a structure of entangled elementary particles that can be preserved outside a data carrier for at least a short period of time) it is possible to migrate a mind by connecting the two data carriers and slowly copying small, discrete parts of the mind to the new data carrier until the entire mind resides on it but that goes almost into metaphysics.

      • Margin

        “So no matter how you define identity, copying an entity but letting only one version continue living is murder.”

        If you define murder that way, I no longer object to some murders.

        You may think all murder is wrong, but this depends on the definition – also compare the noncentral fallacy, “X shares some properties with Y, we all agree Y is bad, therefore X must be bad”.

        The talk about consciousness being “holistic” or “entangled particles” is just arms waving, you might just as well write “magic”.

      • Stephen Diamond

        1. Then what’s your definition of murder?

        2. What’s ethically central to murder, it seems to me, is that agents greatly fear its happening to them if it’s likely to happen. Is your claim (or Robin’s) that ems wouldn’t fear murder (or to be neutral, fear being erased)? Why not? Do you think having entities in the world with your common “root” would make you indifferent to being murdered–or at least greatly reduce your apprehension? If so, why?

      • Margin

        The loss of information is exactly why the objection to murder came about in the first place.

        If humans had evolved with the ability to copy themselves without loss, the painless destruction of a copy would evoke only emotions calibrated to the loss of time and resources it takes to make another copy.

        It takes more than a decade to make a human capable of reproduction and of passing on cultural techniques.

        Hence the evolved fear of death and the taboo of ending lives.

        If I had reasonably good backups of myself, I would not fear local death.

        In fact, I would make short-lived copies to do small tasks or to get high on unhealthy drugs, while maintaining healthy redundancy lines for longer-term persistence.

      • dmytryl

        There is no “loss of information”, laws of physics are reversible.

      • Margin

        Hm.

        If it’s well defined, then it may be my failure for not seeing it.

        Or not seeing why I should care about the termination process.

        Maybe the best general answer for me is simple amoralism.

      • IMASBA

        Yes, exactly. It’s not difficult to see when you posit the existence of an afterlife in the swampman thought experiment. After the lightning strikes there would be one swampman walking out of the swamp but at the same time the population of the afterlife went up by one and that can only happen if someone was killed.

      • Christian Kleineidam

        There is no “loss of information”, laws of physics are reversible.

        No, they aren’t. The second law of thermodynamics forbids it.

      • Daniel Carrier

        > No, they aren’t. The second law of thermodynamics forbids it.

        The second law of thermodynamics doesn’t work that way. In fact, it exists because the laws of physics are reversible. The information has to go somewhere, and if it gets too complex for you to keep track of, you call it “heat” and ignore it.

        In fact, if you flip charge and parity, the laws of physics will run backwards. See https://en.wikipedia.org/wiki/CPT_symmetry

      • Stephen Diamond

        If humans had evolved with the ability to copy themselves without loss, the painless destruction of a copy would evoke only emotions calibrated to the loss of time and resources it takes to make another copy.

        That’s a good point, but note that the relationship between em and original isn’t without loss, as an em rapidly accrues experiences that can be expected to change its self conception. The better analogy might be to an outdated backup. If you had a backup that was 20 years out of date, would you then experiment so cavalierly?

      • http://profiles.google.com/zwilliams8377 Zachary Williams

        Reminds me of the protagonist of Down and Out in the Magic Kingdom.
        (SPOILER)

        Got behind on his brain backups, hilarity ensues.

      • Margin

        @IMASBA

        “It’s not difficult to see when you posit the existence of an afterlife in the swampman thought experiment.”

        This is exactly the kind of magical thinking I accused you of earlier.

        You don’t get to posit magical entities as you see fit and then sell this as some kind of objective morality.

        The fact that this cheap trick works at all is a good enough reason for me to abandon morality altogether and just rely on the robustness of practical power distributions.

        @Stephen

        “note that the relationship between em and original isn’t without loss, as an em rapidly accrues experiences that can be expected to change its self conception”

        Yes, and maybe it makes sense to have laws that forgive crimes and broken promises that happened too far in the past (“he’s no longer the same person”).

        But you wouldn’t count the destruction of two hours of personal memory as murder.

        Also be aware that people tend to hide astonishing amounts of complexity behind personal pronouns, which routinely confuses all discussions of the philosophy of personhood and identity.

      • Stephen Diamond

        But you wouldn’t count the destruction of two hours of personal memory as murder.

        Not normally, but then this is an “abnormal” situation. When what happens in those two hours radically changes a person’s self-conception, it could be different. If a person became religious because of a singular revelation during those two hours, and this caused him to completely change his life plans, for example.

        Here, what happens during those two hours is the em experiences the original as dedicated to destroying whatever is unique to the em. (The em also asymmetrically experiences being an em, whatever mental effect that has.) That seems like the kind of difference that would cause the em to see itself as distinct from the original, to develop hopes that run contrary to the original. At an “instinctive” level, the original is the em’s enemy.

        It seems to me that originals could never really be sure the em wouldn’t value its separate existence. Applying contemporary moral standards, the practice would risk being a killing without consent: you might say manslaughter rather than murder.

      • IMASBA

        “It seems to me that originals might never really be sure the em wouldn’t value its separate existence.”

        The original could just ask the copy this… Refusal to take the 10 seconds or so needed to make sure you’re not committing murder betrays a hidden fear that the copy might say no.

      • IMASBA

        @Margin

        “This is exactly the kind of magical thinking I accused you of earlier.”

        I don’t think you understand what I did here. The existence of an afterlife is not crucial to my point, it is simply a way to make it easier to see my point. Without an afterlife someone would still have gone to oblivion, which can only happen if that person was killed, but “being nothingness in oblivion” is a bit harder to imagine than being in heaven. In fact we do not even have to think about what happens after death at all if we look at a simple EM copy procedure instead of the swampman thought-experiment: after the copy we could have two versions living separate lives, but you want to delete one version so the number of lifeforms goes down by one, and that’s only possible if someone dies.

      • Margin

        I’ll say it one last time, then I’ll give up.

        You are believing in magic.

        There is no such thing as “going to oblivion”.

        There is simply information at some points in space and time, and not at other points.

        A person is a system, composed of components, defined and identified by properties – all of which can be copied.

        A person isn’t a soul or immutable consciousness essence which can go to an afterlife or to oblivion.

        Personal pronouns are just pointers; if you use them in a way that implies that consciousness or identity can’t be copied, then you’re implicitly begging the question without explicitly justifying it or even realizing it (you are not alone in this, people do this all the time, and they usually get mad when you point out to them that this is an error).

        The number of lifeforms has nothing to do with it at all; you could spawn over 9000 identical copies while you delete one, and people with your confusion would still complain.

        What I concede is that diverging forks of the same person developing differently can get messy very quickly, especially if they share your confusion explicitly or on some psychological level.

        I also concede there can be conflicts of interest between local versions, just like there can be conflicts of interests between me today and me tomorrow (e.g. when/who to do the dishes); this is uncontested.

      • IMASBA

        “A person is a system, composed of components, defined and identified by properties – all of which can be copied.”

        Right, but they are still individuals because they can experience different things at the same time. It doesn’t matter that they both have an equally strong claim to the same name, you can still shoot one in the face without the other one ever noticing.

        “The number of lifeforms has nothing to do with it at all; you could spawn over 9000 identical copies while you delete one, and people with your confusion would still complain.”

        Of course there is something to complain about. Would you acquit a woman if she birthed more children than she killed people? You’re basically saying you’re okay with being shot in the face as long as there is a copy of you out there. I say, what good is it to me that there’s a copy of me out there? It doesn’t make getting shot in the face any less painful. If the copying happened 20 years ago we may have completely different personalities and memories at the time of my death by shotgun to the face.

      • Margin

        “I say, what good is it to me that there’s a copy of me out there?”

        What good is it to you that there will be a version of you tomorrow, i.e. that you don’t die in your sleep tonight?

        If you don’t identify with your copies, you have no reason to identify with your future versions either.

        “Of course there is something to complain about.”

        Note that you didn’t show why.

        You just implicitly conceded the point about the numbers of lifeforms going down, even though you used it earlier to boost your intuition that shutting down a local copy is equal to murder.

        “Would you acquit a woman if she birthed more children than she killed people?”

        Now you’re just ignoring all the previous arguments about identity and loss of information.

        “It doesn’t make getting shot in the face any less painful.”

        Shutting down an em process can be painless.

        I’d say the discussion has run its course; all relevant arguments have been made, some more than once.

      • Stephen Diamond

        I think you and IMASBA are both wrongly attributing a metaphysical essence to personhood. You say it’s a configuration of information; IMASBA says it’s the potentially distinct possibility of experience (itself reducible to a different information-based account, in theory). But personhood has no essence, being a family-relations concept, and the questions raised are ones of psychology rather than metaphysics: how would a future society think about the relationship OR what is the least dissonant extension of our present attitudes, my position being that Hanson might be right about how a speculative society would end up moralizing, but he’s wrong to extol it in terms of present morality. (The key concept is wrongful taking rather than personhood.)

        To IMASBA, if killing the EM is necessarily murder, then Star Trek teleportation is necessarily suicide. And some people (say, believers in immaterial souls) might think of it that way. (But no morality holds that the way very idiosyncratic people think about something is what makes it “right.”)

        (A personal example: Of course I don’t believe in an afterlife, but I can understand that someone would think of their “soul” in heaven as being “them.” On IMASBA’s account, it would make no sense. But I find I have no personal empathy for the Hindu account, on which my reincarnation continues my identity: I say to myself, why should I even care about that entity? I conclude that culture is important in the social construction of personhood.)

      • IMASBA

        “If you define murder that way, I no longer object to some murders.”

        See, that’s another point where we happen to disagree. This is because to you there was no loss of life, while to me the loss of one life, replaced by an identical life still means a life was lost. This goes back to our different definitions of identity.

        “The talk about consciousness being “holistic” or “entangled particles” is just arms waving, you might just as well write “magic”.”

        I clearly didn’t say it works that way, it might, but it also might not. I really don’t see what would be magical about a holistic consciousness (the idea that consciousness is not just one part of the mind, it’s what you get when all the parts of the mind are working in unison, that’s entirely plausible).

      • IMASBA

        In any case we’ll learn more as AI research progresses.

    • Daniel Carrier

      I imagine anyone who isn’t comfortable with the box wouldn’t use it.

      • IMASBA

        “I imagine anyone who isn’t comfortable with the box wouldn’t use it.”

        That’s not really relevant, what’s relevant is whether or not the EM that’s going to be put into the box is comfortable with it. If a person is willing to send a copy of himself into the box that only proves that the copy would also be willing to send a copy of the copy into the box, it by no means proves the copy would willingly go into the box himself. If you think otherwise then humour me and ask your copy if he wants to go into the box.

      • Daniel Carrier

        They are most likely doing it because they consider both copies the same person, and don’t think killing one is murder. They’d consider it more like a memory wipe.

        I’m reminded of a scene in HP:MoR, where a character noticed some similarities between memory wipes and murder. That may be a reference to this. I know Eliezer disagrees with Robin on this point.

  • dmytryl

    AFAIK, Charles Stross had this idea in one of his novels (Accelerando, I believe) quite a bit back.

    What’s interesting is the other half of the question. From whom would you accept this sort of “proof”, anyway?

    One could make a scary story out of various mind copying… e.g. you guys sometimes assign numeric probabilities to your ideas. It is actually interesting to see what the probabilities you’d assign to mutually exclusive hypotheses would add up to (should be less than 1), and that’s best done by running a lot of copies of you. Either the future yourself, or someone studying this “rationality” movement might do that, if anything just to laugh at how random the hypotheses are and how badly they obey axioms of probability. It may also be interesting to quantify to what extent the deeply held convictions are in fact determined by thermal noise at synaptic junctions, or other entirely extraneous factors. To numerically quantify to what extent they are actually influenced by any evidence.

    One could also run copies for all sorts of Monte-Carlo idea evaluation methods.
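
    (A rough sketch of that coherence check, with the mind-copying replaced by a stub: elicit below fakes asking one fresh copy for probabilities over mutually exclusive hypotheses by perturbing some base numbers with noise, and we just look at how far the answers drift and how often they break the sum-to-at-most-one constraint. Everything here is invented for illustration.)

import random
from statistics import mean, stdev

HYPOTHESES = ["A", "B", "C"]          # mutually exclusive, not necessarily exhaustive

def elicit(seed):
    # Stub for "run one copy and ask for its numbers": base leanings plus noise,
    # standing in for thermal noise and other extraneous factors.
    rng = random.Random(seed)
    base = {"A": 0.5, "B": 0.3, "C": 0.3}
    return {h: max(0.0, base[h] + rng.gauss(0, 0.1)) for h in HYPOTHESES}

answers = [elicit(i) for i in range(1000)]        # 1000 simulated copies
sums = [sum(a.values()) for a in answers]
print(f"mean sum {mean(sums):.2f} +/- {stdev(sums):.2f}")
print(f"fraction of copies with sum > 1: {mean(s > 1 for s in sums):.2f}")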

  • free_agent

    This is related to Trivers’ theory of the subconscious: The subconscious is where you hide information that you must not “know” lest you reveal it to others. There’s a brief description in the intro to “The Selfish Gene”. (You’ve probably read it.)

    The idea seems theoretical, although some clever behavioral psychologist should be able to reveal it with a suitable experiment. Personally, I have a sense that there are times when I’m much less aware of some facts or beliefs than at other times. And I have one weird memory of when I was thinking about some subject when my social environment changed suddenly, and the subject I was thinking about *vanished from my mind*, to the point of being unrecallable. I had the distinct feeling that knowing about the vanished subject was incompatible with the people I was now in contact with.

    • Stephen Diamond

      To be fair, this is also Hanson’s theory of “homo hypocritus.”

  • Daublin

    Another +1 for Charles Stross’ Accelerando. He does a marvelous job of exploring these possibilities.

    He has two other em-in-a-box scenarios that are pretty interesting. One is that you can personally go do research that you don’t normally have the time to do. Put an em of yourself in a box, send it to the library, and then tomorrow have it report back with its results, and kill it.

    Another scenario is to deal with a potentially malicious counter-party. Put yourself and the counter-party in a box, let the copies interact for a day of subjective time, and then emit one bit from the box about whether the counter-party should be trusted or not.

    Also, yes, it’s murder. It’s an uncomfortable thought, but I do think the moral rules need to be different in em-land. Better to exist at all, isn’t it, than to be denied existence because your creator knows you won’t go quietly?

    • IMASBA

      “Better to exist at all, isn’t it, than to be denied existence because your creator knows you won’t go quietly?”

      “but I do think the moral rules need to be different in em-land.”

      Why? Why is it so absolutely fucking necessary to murder EMs like that? Would the world end if that friend of yours at the party bumped into his ex?

      Not really, and I hate that argument turning up on this site every other day. No, not having existed is not the same as being killed after a short life without joy, just like having 3 children isn’t the same as having 10 children and killing 7 right after they are born. Also, even if that argument wasn’t asinine you’d still have to explain why some of the EMs would be put in a box and killed soon afterwards while others get to live for centuries, who gets to decide who goes in the box and who gets to live a long and happy life? Unless you agree to a lottery system (when you want to find something out via a copy of you, you have to roll a die to determine if you go into the box or he does) or you volunteer to go into the box, you’re a bit of a murderous hypocrite, aren’t you?

      • http://overcomingbias.com RobinHanson

        “I hate that argument turning up on this site every other day.” Must you really point out your disagreement with it every other day? Surely both sides have made their points on that subject by now. Some people will be ok with doing this and some will not. And some of us want to explore the implications of some people doing it, even if you don’t approve of it.

      • IMASBA

        “Some people will be ok with doing this and some will not.”

        Then let them volunteer themselves to die in these experiments. But that’s not what we’re talking about here, is it? It’s always someone else who will be at the receiving end, even if it’s a copy of you it’s still a person who wasn’t asked to be put in that situation, how convenient is that? The whole thing makes being a chickenhawk seem noble by comparison.

        All of this would be fine if it was just a thought experiment but I know it’s much more to some people around here. It’s as if some people believe that some deity is waiting in the afterlife with a reward if humanity reaches a certain size and productivity, no matter what we do to get to that point. That’s the only way I can explain how people who place no value on life itself stress size and productivity of humanity as the most important goals in existence.

      • Stephen Diamond

        It’s fair to contemplate a future where our current “morals” are turned upside down; the exercise could even be a prophylactic against objectifying morality. ( http://tinyurl.com/7dcbt7y )

        But it’s something else when someone tries to justify this (imaginary or supposed) future based on contemporary morals. Then, the enterprise becomes … demoralizing. And absurd.

        For example: Daublin: “Better to exist at all, isn’t it, than to be denied existence because your creator knows you won’t go quietly?” Very obviously, not–not on our moral understanding.

        And your claim to know that EMs will have a “life worth living” isn’t really much better.

        Your own guilty conscience (and your denial of it) creates the vulnerability to ethical critique.

  • IMASBA

    “There may be further complications I haven’t thought of”

    Well, besides it being murder and the EMs probably being too terrified of their impending doom to think straight and give you your answer, there’s also the issue of whether or not the answer would be 100% reliable. You don’t have to be a philosopher to wonder if the same mind will always react the same to the same circumstances or whether there is a degree of randomness involved in your choices. Perhaps this unethical experiment can itself help investigate that question, perhaps not.

  • Christian Kleineidam

    If you lie in real life, there is a cost when the other person finds out you told a lie.

    There is no cost to lying in the box to convince the other person. The other person doesn’t remember what’s said in the box and therefore you can’t catch the lie afterwards.

    The box becomes useless if you can’t trust that the information that gets conveyed inside the box is true.

    • http://overcomingbias.com RobinHanson

      True, so the listener would take this into account, and presumably rely more on being pointed to verifiable info found in the other data sources accessible from the box.

  • Daniel Carrier

    It is pretty easy to just turn off long term memory. People seem to be more comfortable with this, so maybe it would be more popular.