We think about tech differently when we imagine it beforehand than when we have personally seen it deployed. Obviously we have more data afterward, but this isn’t the only, or even the main, difference.
Having more data puts us into more of a near, relative to far, mental mode. In far mode we think abstractly, allow fewer exceptions to our moral and value principles, and are less willing to let messy details reduce our confidence in our theories. Most imagined techs will fail, leaving little chance that we’ll be embarrassed by having opposed them. We also know that they have fewer allies who might retaliate against us for opposing them. And we are more easily seen as non-conformist for opposing a widely adopted tech than for opposing a possible future tech.
The net effect is that we are much more easily persuaded by weak arguments that a future tech may have intolerable social or moral consequences. If we thought more about the actual tech in the world around us, we’d realize that much of it also has serious moral and social downsides. But we don’t usually think about that.
A lot of tech fits this pattern. Initially it faces widespread opposition or skepticism, or would if a wider public were asked. Sometimes such opposition prevents a tech from even being tried. But when a few people can try it, others nearby can see if it offers personal concrete practical benefits, relative to costs. Then, even though more abstract criticisms haven’t been much addressed, the tech may be increasingly adopted. Sometimes it takes decades to see wider social or moral consequences, and sometimes those are in fact bad. Even so, the tech usually stays, though new versions might be prevented. And for some consequences, no one ever really knows.
This is actually a general pattern of seduction. Often we have abstract concerns about possible romantic partners, jobs, products to buy, etc. Usually such abstract concerns are not addressed very well. Even so, we are often seduced via vivid exposure to attractive details to eventually set aside these abstract concerns. As most good salespeople know very well.
For example, if our political systems had been asked directly to approve Uber or AirBnB, they’d have said no. But once enough people used them without legal permission, politicians became reluctant to stop them. Opponents of in vitro fertilization (IVF), first done in 1978, initially suggested that it would deform babies and degrade human dignity, but after decades of use this tech faces little opposition, even though it still isn’t clear if it degrades dignity.
Opponents of the first steam trains argued that train smoke, noise, and speeds would extract passenger organs, prevent passenger breathing, disturb and discolor nearby animals, blight nearby crops, weaken moral standards, weaken community ties, and confuse class distinctions. But opposition quickly faded with passenger experience, even though those last three, more abstract, concerns seem to have been confirmed.
Many indigenous peoples have strongly opposed cameras upon first exposure, fearing not only cameras “stealing souls”, but also extracting vital fluids like blood and fat. But by now such people mostly accept cameras, even though we still have little evidence on that soul thing. Some have feared that ghosts can travel through telephone lines, and while there’s little evidence to disprove this, few now seem concerned.
Consider the imagined future tech of the Star Trek type transporter. While most people might have heard some vague description of how it might work, such as info being read and transmitted to construct a new body, what they mainly know is that you would walk in at one place and the next thing you know you walk out apparently unchanged at another place far away. While it is possible to describe internal details such that most people would dislike such transport, without such details most people tend to assume it is okay.
When hundreds of ordinary people are asked if they’d prefer to commute via transporter, about 2/3 to 4/5 say they’d do it. Their main concern seems to be not wanting to get to work too fast. In a survey of 258 of my twitter contacts, 2/3 agreed. But if one asks 932 philosophers, who are taught abstract concerns about whether transporters preserve identity, only 36.2% think they’d survive, 31.1% think they’d die and be replaced by someone else, and 32.7% think something else.
Philosopher Mark Walker says that he’s discussed such identity issues with about a thousand students so far. If they imagine they are about to enter a transporter, only half of them see their identity as preserved. But if they imagine that they have just exited a transporter, almost all see their identity as preserved. Exiting evokes a nearer mental mode than entering, just as history evokes a nearer mode than the future.
Given our observed tech history, I’m pretty sure that few would express much concern if real transporters had actually been reliably used by millions of people to achieve great travel convenience without apparent problems. Even though that would actually offer little evidence regarding key identity concerns.
Yes, some might become reluctant if they focused attention on abstract concerns about human dignity, community ties, or preservation of identity. Just as some today can similarly become abstractly concerned that IVF hurts human dignity, fast transport hurts morals and communities, or even that cameras steal souls (where no contrary evidence has ever been presented).
In my debate with Bryan Caplan last Monday in New York City, I said he’s the sort of person who is reluctant to get into a transporter, and he agrees. He is also confident that ems lack consciousness, and thinks almost everyone would agree with him so strongly that humans would enslave ems and treat any deviation from extreme em docility very harshly, preventing ems from ever escaping slavery.
I admit that today, long before ems exist, it isn’t that hard to get many people into an abstract frame of mind where they doubt ems would be conscious, or doubt an em of them would be them. In that mental state, they are reluctant to move via destructive scanning from being a human to an em. Just as today many can get into a frame of mind where they fear a transporter. But even from an abstract view many others are attracted to the idea of becoming an em.
Once ems actually became possible, however, humans could interact directly and concretely with them, and see their beautiful worlds, beautiful bodies, lack of pain, hunger, disease, or grime, and articulate defense of their value and consciousness. These details would move most people to see ems in a much more concrete mental mode.
Once ems were cheap and began to become the main workers in the economy, a significant number of humans would accept destructive scanning to become ems. Those humans would ask for and mostly get ways to become non-slave ems. And once some of those new ems started to have high influence and status, other humans would envy them and want to follow, to achieve such concrete status ends. Abstract concerns would greatly fade, just as they would if we had real Star Trek transporters.
The debate proposition that I defended was “Robots will eventually dominate the world and eliminate human abilities to earn wages.” Initially the pro/con percentage was 22.73/60.23; finally it was 27.27/64.77. Each side gained the same added percentage points. Since my side started out roughly 3x smaller, I gained a roughly 3x larger fractional increase, but as I said when I debated Bryan before, the underdog side actually usually gains more in absolute terms.
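For readers who want to check the arithmetic, here is a minimal sketch using only the vote shares stated above; the variable names are my own, not from the debate organizers.

```python
# Debate vote shares in percent, before and after, as reported above.
pro_before, con_before = 22.73, 60.23
pro_after, con_after = 27.27, 64.77

# Absolute gains in percentage points.
pro_gain = pro_after - pro_before
con_gain = con_after - con_before

# Fractional (relative) increases for each side.
pro_frac = pro_gain / pro_before
con_frac = con_gain / con_before

print(round(pro_gain, 2), round(con_gain, 2))  # both 4.54: equal absolute gains
print(round(pro_frac / con_frac, 2))           # 2.65: pro side's relative gain is ~3x larger
```

So both sides gained the same 4.54 percentage points, and the pro side's fractional gain was about 2.65 times larger, which is roughly the "3x" claimed above.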
So yes, attitudes today are not on net that favorable to ems. But neither were related attitudes before cameras, steam trains, or IVF. Such attitudes mostly reflect an abstract view that could be displaced by concrete details once the tech was actually available and offered apparently large concrete personal gains. Yes, sometimes we can be hurt by our human tendency to neglect abstract concerns when concrete gains seduce us. But thankfully, not, I think, usually.