Seduced by Tech

We think about tech differently when we imagine it beforehand versus when we’ve personally seen it deployed. Obviously we have more data afterward, but this isn’t the only or even the main difference.

Having more data puts us into a near, rather than far, mental mode. In far mode we think abstractly, allow fewer exceptions to our moral and value principles, and let messy details do less to reduce our confidence in our theories. Most imagined techs will fail, leaving little chance that we’ll be embarrassed by having opposed them. We also know that imagined techs have fewer allies who might retaliate against us for opposing them. And we are more easily seen as a non-conformist for opposing a widely adopted tech than for opposing a possible future tech.

The net effect is that we are much more easily persuaded by weak arguments that a future tech may have intolerable social or moral consequences. If we thought more about the actual tech in the world around us, we’d realize that much of it also has serious moral and social downsides. But we don’t usually think about that.

A lot of tech fits this pattern. Initially it faces widespread opposition or skepticism, or would if a wider public were asked. Sometimes such opposition prevents a tech from even being tried. But when a few people can try it, others nearby can see if it offers concrete practical personal benefits relative to costs. Then, even though more abstract criticisms haven’t been much addressed, the tech may be increasingly adopted. Sometimes it takes decades to see wider social or moral consequences, and sometimes those are in fact bad. Even so, the tech usually stays, though new versions might be prevented. And for some consequences, no one ever really knows.

This is actually a general pattern of seduction. Often we have abstract concerns about possible romantic partners, jobs, products to buy, etc. Usually such abstract concerns are not addressed very well. Even so, vivid exposure to attractive details often seduces us into eventually setting those abstract concerns aside. As most good salespeople know very well.

For example, if our political systems had been asked directly to approve Uber or AirBnB, they’d have said no. But once enough people used them without legal permission, politicians became reluctant to stop them. Opponents of in vitro fertilization (IVF), first done in 1978, initially suggested that it would deform babies and degrade human dignity, but after decades of use this tech faces little opposition, even though it still isn’t clear whether it degrades dignity.

Opponents of the first steam trains argued that train smoke, noise, and speeds would extract passenger organs, prevent passenger breathing, disturb and discolor nearby animals, blight nearby crops, weaken moral standards, weaken community ties, and confuse class distinctions. But opposition quickly faded with passenger experience. Even though those last three more abstract concerns seem to have been confirmed.

Many indigenous peoples have strongly opposed cameras upon first exposure, fearing not only cameras “stealing souls”, but also extracting vital fluids like blood and fat. But by now such people mostly accept cameras, even though we still have little evidence on that soul thing. Some have feared that ghosts can travel through telephone lines, and while there’s little evidence to disprove this, few now seem concerned.

Consider the imagined future tech of the Star Trek style transporter. While most people may have heard some vague description of how it might work, such as info being read and transmitted to construct a new body, what they mainly know is that you walk in at one place and, the next thing you know, you walk out apparently unchanged at another place far away. While it is possible to describe internal details such that most people would dislike such transport, without such details most people tend to assume it is okay.

When hundreds of ordinary people are asked if they’d prefer to commute via transporter, about 2/3 to 4/5 say they would. Their main concern seems to be not wanting to get to work too fast. In a survey of 258 of my twitter contacts, 2/3 agreed. But if one asks 932 philosophers, who are taught abstract concerns about whether transporters preserve identity, only 36.2% think they’d survive, 31.1% think they’d die and be replaced by someone else, and 32.7% think something else.

Philosopher Mark Walker says that he has discussed such identity issues with about a thousand students so far. If they imagine they are about to enter a transporter, only half of them see their identity as preserved. But if they imagine that they have just exited a transporter, almost all see their identity as preserved. Exiting evokes a nearer mental mode than entering, just as history evokes a nearer mode than the future.

Given our observed tech history, I’m pretty sure that few would express much concern if real transporters had actually been reliably used by millions of people to achieve great travel convenience without apparent problems. Even though that would actually offer little evidence regarding key identity concerns.

Yes, some might become reluctant if they focused attention on abstract concerns about human dignity, community ties, or preservation of identity. Just as some today can similarly become abstractly concerned that IVF hurts human dignity, fast transport hurts morals and communities, or even that cameras steal souls (where no contrary evidence has ever been presented).

In my debate with Bryan Caplan last Monday in New York City, I said he’s the sort of person who is reluctant to get into a transporter, and he agreed. He is also confident that ems lack consciousness, and thinks almost everyone would agree with him so strongly that humans would enslave ems and treat any deviation from extreme em docility very harshly, preventing ems from ever escaping slavery.

I admit that today, long before ems exist, it isn’t that hard to get many people into an abstract frame of mind where they doubt ems would be conscious, or doubt an em of them would be them. In that mental state, they are reluctant to move via destructive scanning from being a human to an em. Just as today many can get into a frame of mind where they fear a transporter. But even from an abstract view many others are attracted to the idea of becoming an em.

Once ems actually became possible, however, humans could interact directly and concretely with them, and see their beautiful worlds, beautiful bodies, lack of pain, hunger, disease, or grime, and their articulate defense of their own value and consciousness. These details would move most people to see ems in a much more concrete mental mode.

Once ems were cheap and began to become the main workers in the economy, a significant number of humans would accept destructive scanning to become ems. Those humans would ask for and mostly get ways to become non-slave ems. And once some of those new ems started to have high influence and status, other humans would envy them and want to follow, to achieve such concrete status ends. Abstract concerns would greatly fade, just as they would if we had real Star Trek transporters.

The debate proposition that I defended was “Robots will eventually dominate the world and eliminate human abilities to earn wages.” Initially the pro/con percentage was 22.73/60.23; finally it was 27.27/64.77. Each side gained the same added percentage. Since my side started out nearly 3x smaller, I gained a nearly 3x larger fractional increase, but as I said when I debated Bryan before, the underdog side actually usually gains more in absolute terms.

So yes, attitudes today are not on net that favorable to ems. But neither were related attitudes before cameras, steam trains, or IVF. Such attitudes mostly reflect an abstract view that could be displaced by concrete details once the tech was actually available and offered apparently large concrete personal gains. Yes, sometimes we can be hurt by our human tendency to neglect abstract concerns when concrete gains seduce us. But thankfully, not, I think, usually.

  • zarzuelazen27

    The questions of consciousness and personal identity can only be resolved by having a scientific theory of consciousness. Only when such a theory was available and confirmed by a majority of the scientific community would I be prepared to step into the transporter and agree that ems were conscious.

    It’s not enough just to take a concrete ‘operational’ view of things. We need a *theoretical* explanation that satisfies us. Science is not just about prediction of external observables, it’s about *explanation*.

    A proper *explanation* of personal identity and consciousness needs to integrate all the concepts from neuroscience into a single coherent framework that results in *new* high-level concepts that refer to mental properties directly. The theory needs to explain the past data and make correct novel predictions using these new concepts, in such a way that the new concepts are shown to be indispensable to the explanations.

    Since there’s no such theory at present, no, I certainly would not risk getting into the transporter currently. But when such a scientific theory *does* become available and is accepted by a majority of the scientific community, I’d trust what the theory tells me. If it says that my identity is preserved stepping into the transporter, that’s good enough for me.

    In other words: the widespread skepticism about Ems and the nature of personal identity is based on the lack of widely accepted scientific explanations of these things, not the absence of concrete details.

    See this ‘Closer To Truth’ 10-minute video interview with David Deutsch, where he explains what constitutes good explanations. The highlight is his ‘take-down’ of Bayesian inference (‘statistical methods’) in the last couple of minutes.

    • Dave Lindbergh

      It would certainly be satisfying (and perhaps reassuring) to have a robust scientific theory of consciousness.

      But we don’t so far.

      So how do you know that when you go to sleep at night, it’s the same *you* that wakes in the morning?

      I think you don’t. Yet I suspect you sleep every night without much worry about it.

      How is that different from stepping into the transporter?

      • arch1

        Choice and familiarity for starters. Also the swapping out of every atom in your body.

      • 401

        Experiments suggest that thinking generally doesn’t cease during sleep; sleep seems like a blank stretch of time because memories aren’t laid down.

        General anesthesia might be a better point of reference.

    • Only when such a theory was available and confirmed by a majority of the scientific community would I be prepared to step into the transporter….

      So, even if it’s been done countless times and the psychological sameness of the copy and original have been substantiated from every empirical angle, you would refuse to enter the transporter unless you could theorize your identity? Because your copy might not really be you? You fear you have something to lose, although you don’t know what it is?

  • > Opponents of the first steam trains argued that train smoke, noise, and speeds would extract passenger organs, prevent passenger breathing, disturb and discolor nearby animals, blight nearby crops, weaken moral standards, weaken community ties, and confuse class distinctions.

    The uterus thing is a pretty reasonable fear; I mean, my neighbor’s dog died from bouncing around on a boat and getting his intestines/stomach flipped (bloat/twisted stomach/gastric torsion is apparently a surprisingly common killer of dogs), and early railroads were far from a smooth ride. You also can’t deny that trains do blight nearby crops through pollution and damage (anyone remember brake sparks and Coase?), which is why they need larger rights-of-way, and part of why railroads go hand in hand with large governments, eminent domain, and land grants.

    As far as ‘prevent passenger breathing’ goes – that is actually a myth, and specifically, it is a myth attributed to Dionysius Lardner (the link doesn’t name him, but I can tell because it includes the ’20 mph’ telltale bit). Considerable effort by myself and others has not turned up any such quote before the 1980s, and there is evidence that it was confused with genuine and reasonable concerns of Lardner (for example, about lack of ventilation in railroad tunnels, which, like in mines, has killed many people). So it is a classic urban legend I am a little surprised to see repeated in _History Today_ (incidentally, ‘bicycle face’ is another urban legend which fits the ‘absurd Victorian fears of new technology’ template).

    And of course, trains certainly did break up communities and assist the transition to modern laxer forager moral standards; we think that was a good thing and much of the point, but fears of that were not wrong, any more than aboriginals would be wrong in fearing that the introduction of roads, cars, and modern agriculture will destroy their traditional way of life. They’re wrong on values, not facts.

    • OK, I crossed out the part about breathing problems in trains.

  • Garrett Lisi

    It’s more effective to ask for forgiveness than permission.

  • If they imagine they are about to enter a transporter, only half of them see their identity as preserved. But if they imagine that they have just exited a transporter, almost all see their identity as preserved. Exiting evokes a nearer mental mode than entering, just as history evokes a nearer mode than the future.

    I haven’t seen this asymmetry between the recent past and recent future noted in the CLT literature. (A priori, they are equally distant.)

    The Walker link is broken. [Seems to me the post-transport certainty might be implied by the question: if it’s indeed you, then identity has been maintained.]

  • Vamair

    I’m still skeptical about the feasibility of the “human-like ems under high restrictions” scenario. Ems are not at all the optimal algorithm for doing their jobs. A large part of the human brain is about controlling the body and processing a lot of signals from it. Most of these signals, along with the whole body, are dead weight if you care about algorithm efficiency. Same with many of humans’ emotions and introspection. So either they’re rapidly displaced by designed-from-scratch algorithms or by modificants (~ems with unnecessary parts cut out), resulting in a nonhuman algorithm swarm; or they’re not under strict resource limitations, and can use the resources to model forests and penthouses and stuff, which only need a tiny part of the resources required to run an em.

    • Brain parts dedicated to sound and sight processing might be greatly shrunk. Emotions and introspection seem to be useful in general for social creatures – why assume the future has no use for them?

      • Vamair

        Unless the optimal algorithm for a specific task is human, I’d expect them to lose any humanlike traits with time. Not by themselves, of course, but by being modified or replaced with better algorithms. I’d expect human brains to be awful by the “job done per processor instruction” metric, as they’re not at all optimized for that by evolution. Modelling the whole brain instead of a much simpler “do-the-work” subroutine is a huge waste of resources. I’d imagine a developed em corporation as a simple “contract-drafter” AI, a simple “tester” AI instead of introspection, and so on. The AIs may be modified ems or be written by the first ems. Maybe if we add high-speed mind-to-mind communication to the equation we still get something like a sentient corporation even though none of its workers are sentient, but that would just make it a nonhuman AI, with all of its problems.

  • I don’t like the sound of this:

    …a significant number of humans would accept destructive scanning to become ems.

    Maybe I misunderstand, though. What is destructive scanning? A link or parenthetical explanation would be helpful. If destructive scanning means physical death, then we are back to the same old uploading-consciousness trope of science fiction and transhumanism, which has not been resolved to my satisfaction.

    My hesitancy about transporters is based on a scene from one of the original Star Trek movies, where there was a malfunction, and reassembly at the destination was botched, because the people arrived “turned inside out”, as Scotty described it. I acknowledge that that is a safety concern rather than a philosophical one, of whether one’s identity (fundamental essence) is preserved.

    Returning to destructive scanning: Thank you for this post, @RobinHanson. The introduction, about contrarian perception of futurist tech, stirred memories of another movie, Logan’s Run. Citizens of that seeming utopia were led to believe that at age 35, they were transformed to another, higher plane of existence through a public ceremony. In fact, they were just killed for pragmatic reasons, e.g. population control. That’s another reason to be skeptical of destructive scanning. Even if the tech worked, could those who were in charge of the implementation be trusted?

    • In a destructive scan, the original body is destroyed in the process of creating the scan.

      If you are told that people who walk through a door enter a higher plane, but you never hear from them or have any evidence that they ever have any effect on anything, you are right to be skeptical. If you can talk to them on Skype all the time, their actions have clear impacts on things you care about, and in both ways they seem like the people you knew, you’d be a lot less skeptical.
