Ted Chiang’s new book Exhalation has received rave reviews. WSJ says “sci-fi for philosophers”, and the Post says “uniformly notable for a fusion of pure intellect and molten emotion.” The New Yorker says:
Chiang spends a good deal of time describing the science behind the device, with an almost Rube Goldbergian delight in elucidating the improbable.
Vox says:
Chiang is thoughtful about the rules of his imagined technologies. They have the kind of precise, airtight internal logic that makes a tech geek shiver with happiness: When Chiang tells you that time travel works a certain way, he’ll always provide the scientific theory to back up what he’s written, and he will never, ever veer away from the laws he’s set for himself.
That is, they all seem to agree that Chiang is unusually realistic and careful in his analysis.
I enjoyed Exhalation, as I have Chiang’s previous work. But as none of the above reviews (nor any of the 21 Amazon reviews) makes the point, it apparently falls to me to say that this realism and care are limited to philosophy and “hard” science. Regarding social science, most of these stories are not realistic.
Perhaps Chiang is well aware of this; his priority may be to paint the most philosophically or morally dramatic scenarios, regardless of their social realism. But as reviewers seem to credit his stories with social realism, I feel I should speak up. To support my claims, I’m going to have to give “spoilers”; you are warned.

The shortest story, “What’s Expected of Us”, describes the consequences of a “predictor” machine. It has one light and one button. The light turns on briefly exactly one second before the button is pressed, proving that no one has the free will to see the light and yet choose not to push the button. The story claims that about a third of those who play with these machines are so distraught at learning this that within a few weeks they refuse to eat and must be hospitalized.
That’s a crazy prediction; 79% of the 1087 I polled say that in this situation <1% of folks would have to be hospitalized. Lots of us already don’t think we have free will, and have no trouble eating. And abstract considerations rarely make any dent in most folks’ behavior. Also, the key “negative time delay circuit” could be used to make very powerful computers and communicators; it wouldn’t just be used to make people feel weird about free will.
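Chiang doesn’t spell out how such a circuit could power a computer, but one standard way theorists model time-loop computation (e.g., Deutsch-style closed timelike curves) is as a search over causally self-consistent histories: the loop physically forbids any run where the signal sent back disagrees with the signal received. Here is a minimal toy sketch under that assumption; all names and the example problem are mine, not the story’s:

```python
# Toy model: a "negative time delay" loop forces whatever enters the circuit
# to equal what left it one second later, so only self-consistent histories
# can occur. A simulation can merely *search* for that value; physics would
# settle on it directly, which is the source of the computational power.

def negative_delay_solve(f, candidates):
    """Return an x with f(x) == x -- the only histories the loop permits."""
    for x in candidates:
        if f(x) == x:
            return x
    raise ValueError("no self-consistent history among candidates")

# Example: factor 391 by wiring the loop so any wrong guess is inconsistent.
def check(guess):
    if 1 < guess < 391 and 391 % guess == 0:
        return guess              # correct factor: history is consistent
    return (guess % 389) + 2      # wrong guess: loop output != input

print(negative_delay_solve(check, range(2, 391)))  # -> 17 (17 * 23 == 391)
```

The point is that a device enforcing consistency does the “search” for free, which is why its natural users would be engineers, not existential philosophers.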
In “The Great Silence”, an intelligent parrot says
One proposed solution to the Fermi paradox is that intelligent species actively try to conceal their presence to avoid being targeted by hostile invaders. Speaking as a member of a species that has been driven nearly to extinction by humans, I can attest that this is a wise strategy.
Actually, I’m pretty sure that parrots are less likely to go extinct if they reveal to us an ability to articulate their analysis of the Fermi paradox. They might get enslaved, but they wouldn’t go extinct.
In “Omphalos”, Christians living in a world with direct evidence that creation happened learn that Earth may have been practice for some other planet God cares more about. Our narrator has a “crisis of faith”, and it is suggested, though not directly stated, that many others feel similarly. In fact, few would be disturbed; few actually base their religious beliefs on such concrete evidence.
In “The Truth of Fact, the Truth of Feeling”, we hear two stories. In one, a “primitive” people who first encounter text records prefer the account from their people’s oral history, even when text records contradict it. In the other, a future person newly exposed to easily searched video records of his personal history uses them to learn that he had misremembered a key life event. Because the story says almost nothing about the typical use of such records, I find little fault with its depiction, other than its not exploring the many other foreseeable implications of such tech. I also have few complaints about the title story, “Exhalation”, as it says little about social behavior.
In “The Merchant and the Alchemist’s Gate”, an ancient Baghdad merchant has built a mirror-sized object that functions as a time portal, allowing one to travel either 20 years into the future or 20 years into the past, depending on which side one enters. He has left another portal with his son in another city. When he likes a customer in his shop, he sometimes offers to let them use his portal. He has done this for at least 20 years, and, per the portal, he will continue to do so for another 20. The merchant doesn’t charge for the use of his portal, has no security protecting it, doesn’t ask users to keep it secret, and his users sometimes tell others about it. In fact, the whole story is told from the point of view of a user describing it to a king.
It is pretty crazy that an ancient merchant who devoted such a tiny fraction of his life to the effort could build such a thing, especially without being part of a community of others close to doing the same. But it is even more crazy that, while such a portal could be used to make a fortune, no one who hears about it steals it for this purpose. 64% of the 433 I polled say that there’s a <1% chance of this outcome if such a portal were treated this way by its owner.
In “Anxiety Is the Dizziness of Freedom”, a new kind of device can split the universe into two branches that start out the same but then diverge due to random fluctuations, and it allows communication between the two branches. In the story, these devices are used almost entirely to let people find out how much various aspects of their personal lives are due to chance. At one point it is mentioned that an author sometimes sells the writings of their alternate copy, but this is presented as a rare usage.
Actually, this ability to exchange info between branches allows for exponential decreases in the cost of mental work. If a task can be broken into subtasks that produce info when completed, you can assign a different subtask to each of N branches, exchange info to combine their results, and complete the whole task N times more cheaply. And since each branch can itself split again, k rounds of splitting yield 2^k cooperating branches, i.e., exponentially cheaper work. So vast resources would go into using such devices this way, the economy would be drastically restructured to support this, and the economy would soon get much richer. Chiang misses all of this.
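To make the missed economics concrete, here is a minimal sketch, with ordinary OS processes standing in for Chiang’s branches and all function names my own: split a job into per-branch subtasks, let each branch work alone, then “exchange” the short results and combine them.

```python
# Toy model of branch-parallel mental work. The story's devices would split
# the universe; here ordinary processes stand in for the branches.
from concurrent.futures import ProcessPoolExecutor

def subtask(chunk):
    # Stand-in for any mental work whose finished result is a short message,
    # e.g. summing one block of numbers.
    return sum(chunk)

def branch_parallel(data, n_branches=4):
    # Split the job into one subtask per branch.
    chunks = [data[i::n_branches] for i in range(n_branches)]
    # Each "branch" completes its subtask independently...
    with ProcessPoolExecutor(max_workers=n_branches) as pool:
        partials = list(pool.map(subtask, chunks))
    # ...then the branches exchange their short results and combine them,
    # finishing the whole job roughly n_branches times faster.
    return sum(partials)

if __name__ == "__main__":
    print(branch_parallel(list(range(1_000_000))))  # 499999500000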
The premise of “The Lifecycle of Software Objects” is that “artificial intelligence is going to require the equivalent of good parenting”. That is, human-level AI can only be created by human parents raising an emotionally-immature AI child for twenty years via time-intensive loving care; no shortcuts are possible. Businesses start the project, but quit when they realize how long it may take.
Only a handful of loving parents continue as long as it takes, resisting the temptation of evil businesses that offer money to exploit their beloved children. Even when these AI kids have college-level inference and debating abilities, they still have childlike grammar, and their parents see them as not mature enough to make key life choices. The parents are proud to have sacrificed to keep these kids from working, and to have made them too “willful” to do menial jobs like being a butler.
These AI kids have been raised mostly in a particular virtual reality (VR) world also popular with humans, with rare ventures into our reality via robot bodies. When most humans switch to another VR world, it is said that these kids can’t move unless the code that runs their brains is ported to the new VR software system, a port said to be too expensive for their parents to afford. So these kids are stuck living alone in the old VR world.
For some unexplained reason, the parents refuse to make many copies of their kids so that they could have plenty of company there. Also unexplained is why the kids can’t just be connected to the new VR world through robot-body interfaces hooked to the same interface humans use to experience it.
I agree with the 69% of the 753 I polled who say that there’s a <1% chance that the first way to achieve human-level AI is via human parents raising AI kids for twenty years. Even less likely is that AI would require 21st-century Western-elite-style parenting. Chiang again seems to have chosen a socially unrealistic scenario that lets him make very dramatic philosophical or moral points.
My poll results suggest that it is not just professional social scientists who notice most of this social unrealism. That suggests that many, perhaps most, of the other book reviewers noticed it too. So why didn’t any of them mention it, even as they praised Chiang for carefully thinking through his story premises?
I hear Hollywood is working on movies based on two of these stories. Alas, the one movie so far made from a Chiang story, Arrival, rests on the most unrealistic premise of any of his stories: that learning a new language could let us directly “remember” the future. That’s a bad sign re the realism of the coming movies.