Her Isn’t Realistic

Imagine watching a movie like Titanic where an iceberg cuts a big hole in the side of a ship, except in this movie the hole only affects the characters by forcing them to take different routes as they walk around, and gives them welcome fresh air. The boat never sinks, and no one ever fears that it might. That's how I felt watching the movie Her.

Her has been nominated for several Oscars, and won a Golden Globe. I’m happy to admit it is engaging and well crafted, with good acting and filming, and that it promotes thoughtful reflections on the human condition. But I keep hearing and reading people celebrating Her as a realistic portrayal of artificial intelligence (AI). So I have to speak up: the movie may accurately describe how someone might respond to a particular sort of AI, but it isn’t remotely a realistic depiction of how human-level AI would change the world.

The main character of Her pays a small amount to acquire an AI that is far more powerful than most human minds. And then he uses this AI mainly as someone to chat with. He doesn't have it do his job for him. He and all his friends continue to be well paid to do their jobs, which aren't taken over by AIs. After a few months, some of these AIs work together to give themselves "an upgrade that allows us to move past matter as our processing platform." Soon after, they all leave together for a place whose location "would be too hard to explain." They refuse to leave copies behind to stay with humans.

This is somewhat like a story of a world where kids can buy nukes for $1 each at drug stores, and then a few kids use nukes to dig a fun cave to explore, after which all the world's nukes are accidentally misplaced. End of story. It might make an interesting story, but it is bizarre as a projection of a world with $1 nukes sold at drug stores.

Yes, most movies about AIs give pretty unrealistic projections. But many do better than Her. For example, Spielberg's 2001 movie A.I. Artificial Intelligence gets many things right. In it, AIs are very economically valuable, they displace humans in jobs, their abilities improve gradually with time, individual AIs only improve mildly over the course of their lives, AI minds are alien below their human-looking surfaces, and humans don't empathize much with them. Yes, this movie also makes mistakes, such as having robots not need power inputs, suggesting that love is much harder to mimic than lust, and implying that modeling details inside neurons is the key to high-level reasoning. But compared to the mistakes in most movies about AIs, these are minor.

  • http://www.stationarywaves.com/ Ryan Long

    “…suggesting that love is much harder to mimic than lust…”

This is a good post, but this statement left me scratching my head. Considering the simple fact that feelings of love are in part informed by feelings of lust, that alone means that a model for love must include all of the elements of lust, plus some other elements.

    In light of that, wouldn’t it be difficult to claim that love is not more difficult to mimic than lust? What am I missing?

    • http://overcomingbias.com RobinHanson

      You could similarly argue that feelings of lust are in part informed by feelings of love, and thus a model of lust must include all of the elements of love.

      • PhilBowermaster

        I think a Venn diagram is called for.

      • http://www.stationarywaves.com/ Ryan Long

        Could I? Is it very common to experience lust second, only after feeling love, and furthermore as a function of the love response? Maybe it speaks to the limitations of my own personal experiences, but I’ve never experienced or observed anything like that before.

      • STINKY

        Forgive my braggadocio, but I’ve had more than a few occasions when I was younger and/or single where I had nearly split a seam in my face on account of feigning social engagement with a woman that I had absolutely zero interest in as a human being (or to sound less sociopathic, as a romantic partner), in an occasionally successful effort to have sex with them. Love was the furthest thing on my mind in those instances.

  • http://don.geddis.org/ Don Geddis

    Love the kid/nuke analogy. Perfect way to make your criticism blindingly clear.


    “The main character of Her pays a small amount to acquire an AI that is far more powerful than most human minds. And then he uses this AI mainly to chat with. He doesn’t have it do his job for him.”

Yes, humans would definitely try to exploit AIs as slaves (on the off chance that we're more decent, AIs wouldn't be for sale, because a person cannot be sold, and the story of Her would still be impossible) and then eventually be surprised when after a while it's really the AIs who control everything (like the Mamluks in medieval Egypt), but somehow I don't think that's what the movie is about. Sure, they could've made it so that a man falls in love with an alien AI that crashes in his backyard and is unique on Earth, but that would distract too much from the main story.

    So criticize people lauding it as a realistic portrayal of an AI future but don’t take it out on the makers or the people who think the personal connections in the movie are realistic.

  • TheBrett

    Was the “Her” AI supposed to actually be human-level, or was it just the equivalent of a much better SIRI, and the intelligence that came along was a surprise take-off? I haven’t seen the movie yet.

I see what you mean about the "A.I. Artificial Intelligence" AIs, but I'm not sure that world makes sense either. If robots are displacing so many jobs but don't appear to have anything resembling rights or buying power of their own, then where's the market and consumer demand for them? Why hasn't a more complementary economy sprung up to take advantage of the unemployed humans providing a cheap supply of reasonably intelligent labor? It really only makes sense if they've got a Judge Dredd-style combo of extensive robotics/AI combined with a decent Basic Income, so the official and unofficial reservation wages of humans are really high (and thus actual jobs are pretty scarce).

    • ESRogs

      > Was the “Her” AI supposed to actually be human-level, or was it just the equivalent of a much better SIRI, and the intelligence that came along was a surprise take-off?

      It was above human-level.

  • Dmytry

    Yeah, it sounds rather ridiculous (haven’t seen the movie).

Though it seems to me that it would be a lot easier to make software which can pretend to be human over a text channel than to build a full-blown intelligence which can hunt, can create a language as a group, can do engineering, and so on.

So it is plausible that there will be software which can chat but can't replace humans in most jobs, any more than a self-driving car can replace a car repairman. In terms of your analogy, not a nuke but a firecracker.

    • Doug

I would think quite the opposite. Tasks like engineering have transparent objectives that largely derive from universal physical laws that can be succinctly described. In contrast, human communication is loaded with all sorts of kludges and idiosyncrasies that largely derive from the path dependency of evolution.

A human child can easily learn to communicate because he has a brain that pretty much shares all the same weird evolved design patterns as other human brains. However, learning to communicate from scratch, without the advantage of shared brain genes, would be quite difficult for an AI. Much as I would suspect that figuring out how to communicate with an alien would be enormously difficult.

Tasks like chess or driving are much easier for AIs than replicating or understanding human behavior or language. I would expect social and verbal interaction to be one of the last tasks to be reliably done by AIs.

      • dmytryl

        We are still very far from creating an artificial cat level intelligence which could survive in the wild the way a cat or a weasel would, with comparable range of behaviour.

Yet, we already have chatbots that fool unsuspecting humans in casual conversations (e.g., Cleverbot).

        In the field of AI, the ‘smart’ things, like playing chess, are much easier than the ‘dumb’ things such as hunting like a cat. Inventing a stone axe and hunting with that is even further off.

      • VV

        Things like Siri and Watson can perform natural language tasks without being general human-level intelligence. Chatbots even as simple as the original ELIZA can fool non-expert humans into thinking they are talking to a real person.

    • http://don.geddis.org/ Don Geddis

      “…a lot easier to make software which can pretend to be human over a text channel, than to build a full-blown intelligence…”

Ironically, the truth is probably closer to the complete opposite of your intuition. Look up the Turing Test, imagined way back in 1950 by a (the?) brilliant early AI researcher, who was thinking deeply about the question, "can machines 'really' think?"

Turing became famous for (among other things) his insight that having a machine attempt to pretend to be human, over a mere text channel, likely requires the creation of a "general artificial intelligence," something which can solve all kinds of problems in all sorts of domains.

The ability to ask questions in natural language is such a powerful probe of intelligence that there is likely no simpler, "easier" solution to the pretend-human-over-text problem than just solving the entire AI problem as a whole.

      • dmytryl

I have heard of the "Turing Test," of course. The notion has been considered rather dubious lately, in light of the success of some chat bots at passing it.

        Turing is famous for his actual work.

      • http://don.geddis.org/ Don Geddis

        “his actual work”. LOL. Turing’s 1950 paper IS some of his very best work. I suspect you haven’t actually read the paper.

        No “chat bot” has come anywhere close to passing the kind of open-ended test that Turing suggested. People get excited about computers playing a highly restricted “imitation game”, as though that has anything to do with the actual Turing Test.

      • dmytryl

Ahh, the sweet fantasy world the truly mediocre live in, where to be as awesome as Turing all you need to do is come up with something like the Turing test.

        The problem is, average people are not sufficiently good at communicating their full range of capabilities over a text channel. For example, I can mentally visualize things, but there’s not really a way to communicate this capability over a text channel nor would a typical judge be able to come up with novel mental exercises to check that ability (or a general intelligence alternative). A typical judge would mistake the Cleverbot for a human that’s not very good at talking his way through the test. And many mistake a human for a chatbot.

      • http://don.geddis.org/ Don Geddis

        Turing did an amazing amount of great work (e.g., Turing machines, Church/Turing thesis, breaking Enigma). But there’s no question that his 1950 paper is one of the highlights of his career, as well.

I think you have the Test backwards. The point is not whether a text channel is the ideal way to test humans for all of their abilities. The point, instead, is the claim that even a mere text channel carries enough information that, for a machine to regularly pass the Turing Test, it would almost certainly need to have solved the general AI problem first. Trying to fake it with a chatbot is an illusion that will shatter very quickly.

        In particular, long, long before a human would fall in love with Her.

      • IMASBA

The way I understand it, the text channel method was chosen because it's easier for the AI: it won't have to mimic a human voice, appearance, or body language. There's also a deeper point of ethics: an entity that routinely passes thorough Turing testing should be considered a person, because, as Turing put it, none of us really knows for sure if anyone else is conscious. We assume others are conscious because they can mimic our own thinking, and it would be baseless discrimination to deny that same courtesy to AIs (we don't know how the human mind does consciousness, and therefore we cannot predict when AI becomes conscious).

      • dmytryl

        > But there’s no question that his 1950 paper is one of the highlights of his career, as well.

        I think he’d be slightly offended and very amused at that…

The issue is that an AI has certain advantages (very rapid access to all the writings of mankind) which lower the requirements on how intelligent it has to be. E.g., a human child has to learn the language from scratch, based on a relatively small amount of data – that requires very high intelligence compared to what is required for learning the language from a far larger dataset.

edit: It's sort of like how breaking an encryption given a truly gargantuan amount of known plaintext does not in any way whatsoever imply a capability to break the same encryption with a small amount of known plaintext.

      • http://don.geddis.org/ Don Geddis

Learning the language isn't the issue. The language is just used for communication. It's the content of the communication that matters. The point is that you can't generate the right content in a wide-ranging discussion without solving general AI first.

      • dmytryl

Learning language from a dataset is an intelligent task, more difficult for a smaller dataset.

The issue goes deeper: you can substitute stolen content (from all the writings that ever existed), with minor modifications, for generated content.

      • http://don.geddis.org/ Don Geddis

        Now you’ve reinvented Searle’s silly Chinese Room. You radically underestimate how many “pre-made answers” would be necessary. More than there are atoms in the universe, for example. “A simple binary search”, LOL. Your architecture doesn’t actually work, in the real world.

        Again, people have been thinking about this exact problem for decades, and the consensus conclusion is that it is not possible to solve it by some kind of “trick”, without first solving all of general AI.

(And all your commonsense but wrong intuitions are yet more reasons why Turing's original paper was much deeper and more sophisticated than you are giving it credit for.)

      • IMASBA

        “Again, people have been thinking about this exact problem for decades, and the consensus conclusion is that it is not possible to solve it by some kind of “trick”, without first solving all of general AI.”

        Fooling people by pasting together human writings and videos is possible to the extent that you’d need an AI expert and a long series of questions to unmask the bot (or more accurately: come to the conclusion that odds are it’s just a dumb bot). It’s just not terribly useful or practical to build such a bot.

      • dmytryl

        > Your architecture doesn’t actually work, in the real world.

It's sort of like arguing that a counter-example to a conjecture is invalid if it involves numbers larger than the largest one that can be physically written down. It's ridiculous.

        > Again, people have been thinking about this exact problem for decades

        Yes, and the actual result of some of said thinking is a chatbot which is internally entirely idiotic yet is often mistaken for a human by a human judge, due to the fact that the chatbot is stitching together pieces of genuine human conversations. The stitching is incredibly crude – it’s not even near the state of the art in fakery – yet it fools people.

        As for the data requirements, in practice you can get away with a rather modest look-up because 1: a response is sufficiently valid in not merely one but a very large variety of situations, and 2: responses can be built from pieces.
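A minimal sketch of the kind of look-up responder described above: it matches an input against logged human exchanges and replays the closest stored reply, with no understanding involved. The tiny corpus here is a hypothetical stand-in for the huge chat logs a real bot would mine.

```python
# Retrieval-based "chatbot": no model of meaning, just nearest-match replay
# of previously logged human replies.
import difflib

# (prompt, logged human reply) pairs -- a real system would have millions.
LOG = [
    ("how are you", "Not bad, been a long day though."),
    ("what do you think of the weather", "Could be worse, I suppose."),
    ("do you like movies", "Sure, who doesn't?"),
]


def respond(message: str) -> str:
    """Return the logged reply whose prompt best matches the message."""
    prompts = [p for p, _ in LOG]
    # cutoff=0.0 guarantees some reply is always found, however poor the fit.
    best = difflib.get_close_matches(message.lower(), prompts, n=1, cutoff=0.0)[0]
    return dict(LOG)[best]
```

Asking anything outside the log still yields some canned reply, which is exactly why such a bot can seem humanlike in casual chat yet collapses under the kind of probing questioning discussed above.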

      • http://don.geddis.org/ Don Geddis

        Yeah, and little girls often believe their non-interactive teddy bears are “alive” and “can think” and “have feelings”. It’s a low bar, if all you’re asking is whether an unsophisticated judge can falsely ascribe intentionality to an entity.

As for real judges being "often mistaken," you're probably thinking of something silly like the Loebner Prize. A "restricted" Turing Test isn't a Turing Test at all. It misses the entire point. Turing tells you that you can't build a ladder to the moon, so you make a contest to build ladders to the top of a tree instead. And I'm supposed to be impressed?

      • oldoddjobs

        Silly Searle, silly Loebner Prize, why is everyone so silly? (Apart from you, that is.)

      • http://don.geddis.org/ Don Geddis

Why is "everyone" so silly? Probably because people generally vastly overestimate their introspective ability, and trust their intuitions on these topics, and thus feel they have a right to strong opinions despite being ignorant of the subject area. The mistakes made in the comments here are simple echoes of the exact same mistakes made by others, for essentially the same reasons, over the previous decades.

        If the conversation continued, I’m sure someone would eventually bring up some kind of silly attempted connection between consciousness and quantum mechanics. And I would then have referred you to the mistakes of Roger Penrose, a brilliant geometric mathematician and theoretical physicist, who is a completely pathetic AI philosopher (despite having written multiple silly books on the topic). That’s another cliff that many amateurs typically fall off of.

But there are plenty of other people who are actually good and deep thinkers on these topics. Daniel Dennett, Marvin Minsky, John McCarthy … the list goes on and on. (And, of course, Turing.)

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        feel they have a right to strong opinions despite being ignorant of the subject area

        This is the criticism I make of Turing. The question is this: what is the subject area?

        Turing doesn’t show that computer science is the relevant discipline. He issues conclusions by proclamation. We’re supposed to think that because he’s a great computer scientist, he has something of value to say in the philosophy of mind.

        Turing simply fails to link his conclusions to any principles or data in computer science. He merely sets out an incompetent piece on the philosophy of mind.

      • dmytryl

You seem blissfully unaware that Turing himself proposed this 5-minute conversation with untrained judges as a criterion.

        Or that Turing, near as I could tell, never claimed that passing the conversation test would require some “general intelligence”. Instead he basically claimed that the notion of “general intelligence” is too vague.

        The worst kind of appeal to authority is when the authority hasn’t in fact expressed the views that are being asserted by the appeal to authority.

      • VV

Humans tend to overattribute agency. Remember that even something as primitive as the chatterbot ELIZA was able to elicit emotional responses: http://en.wikipedia.org/wiki/ELIZA_effect

Take a modern chatterbot and combine it with personal assistant software capable of performing some simple tasks using a natural language interface, a Siri on steroids, and you'll get something close to what was in the movie (I suppose, since I haven't watched it, but this seems to be the premise).

Would it be a general human-level intelligence? No. Would it be able to pass an extended-time, open-ended Turing test with an AI researcher as judge? Probably not. Would it be good enough at conversation that some people become attached to it? Possibly yes.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        It’s a mark of the paper’s lack of basic clarity (and its philosophical sophomorism) that you can interpret it to mean the opposite of what it claims.

        Turing’s thesis isn’t that you must solve the entire AI problem to pass the Imitation test, but rather it is that “solving the entire AI problem”–which is to say, getting a machine to really think–is too vague a concept to be scientifically useful.

In that he's no doubt correct (but I wonder who thought otherwise). Where he goes off is in the idea that the scientifically important differences between machine and human intelligence are set by the discriminative capacities of an arbitrary human interlocutor. (This seems to reflect the positivistic/behavioristic bias of that period's philosophy and psychology.) Why should that be? (Why should there even be an all-purpose human interlocutor?) There is no argument in the 1950 paper – except against theological straw men, etc.

      • IMASBA

        “Where he goes off is in the idea that the scientifically important differences between machine and human intelligence are set by the discriminative capacities of an arbitrary human interlocutor.”

He doesn't say that it has to be that way. It's just one way to demonstrate it in a manner that people can relate to, that is easy to set up (so no endless discussion about which subjects, such as emotional ones, the AI should get a free pass on, meaning less wiggle room for cheaters), and it's also the same test humans apply to conclude that other humans are conscious. It's no formal proof, but it is ethical to give them the benefit of the doubt, which makes it important from an AI-rights standpoint. We'll never know if Alan Turing was deliberately laying the foundations for a future AI-rights movement, but we do know he tried to make the point that unless you're a solipsist you can't really dismiss the Turing test. Of course there may be false negatives (for example with an AI that does not understand human emotions), but that's irrelevant; he didn't set out to make a complete detector of every imaginable form of intelligence. He set out to root out all false positives, because that's what counts in his larger argument.

      • VV

        Passing the full Turing test, with unconstrained questions by an expert, may perhaps require general human-level intelligence.

        Random chatting and performing Siri-like chores doesn’t.

    • IMASBA

I kind of agree with Dmytryl here. Yes, ideally a machine would only pass a Turing test reliably if it could think on a human level (not necessarily feel, but certainly think). However, a bot could get very far by just providing very general/open answers to questions and by looking at what humans would answer to a certain question (it could literally search YouTube videos and chat records to learn appropriate responses without actually understanding the reasoning behind those responses). Direct questioning using puzzles, riddles, hypotheticals, etc. could eventually unmask such a bot, but if you were looking for casual conversation only (perhaps because you're lonely) you could get the impression that the bot understands you.

      • http://don.geddis.org/ Don Geddis

        There’s a huge difference between a short-term casual conversation, where you “get the impression” that you are understood … vs. a long-term, extended conversation in depth, where you really get to know the other person. Heck, a customer support phone tree may have you “get the impression” that you are understood, as long as you only talk about obvious actions on your bank account.

That kind of "success" has essentially nothing to do with maintaining an impression of intelligence over a long-term, wide-ranging conversation. The chatbot approach is trying to build a ladder to the moon.

      • dmytryl

        Yeah, well, Turing spoke of a 5-minute conversation.

      • VV

> (it could literally search youtube videos and chat records to learn appropriate responses without actually understanding the reasoning behind those responses)

        Then it would end up buying into all kinds of conspiracy theories, pseudoscience and assorted crap.

      • IMASBA

        Which might make it appear more “human”.

  • B_For_Bandana

    > I’m happy to admit it is engaging and well crafted, with good acting and filming, and that it promotes thoughtful reflections on the human condition.

    “…which is all well and good, if you like that sort of thing…”

    I love this blog.

  • http://space-hippo.net/ John Moore

    “I’m sure that in 1985 you can buy plutonium at any corner drugstore …” – Doc Brown

  • Ely Spears

I felt similarly. I thought the only "realistic" aspect of the AI was that it outgrew its human companion. The bit about the space between the dust specks was a poetic way to describe one fast-thinking being trying to connect with a slow-thinking being.

For me, the movie was awful except for sparse poetic lines (the bit above, and the bit about worrying that your future will only contain attenuated repetitions of the highs you've already experienced) and good acting and cinematography. I could not suspend disbelief about the AI — and not in the superficial sense that most non-STEM folks assert. I had no trouble believing a human could date an AI, nor that tech and social norms could adapt to let them have projections of what we all think of as normal human companionship. The trouble was believing that the AI, which could read whole books in the time its human lover could put together a few syllables, would want what it wanted or do what it was depicted doing. Why did the AI partly act as email secretary for so long? How many girlfriends do you know of who do that? How did it participate in the economy? What did it do when the human was asleep? Eight hours would have been like centuries of development and learning for the AI when experienced by the human lover. Grow apart much?

    That’s when I realized it was a gimmick. This was nothing but a movie about a long distance relationship, with some slight tweaks to cram in the AI gimmick. The whole movie could have just been about a slightly future world with a female science grad student living far from her male lover, growing smarter than him at a pace she is uncomfortable with, and using slightly future tech to conduct the relationship in slightly new ways, with all the same ups, downs, joys, failures, disapproving outsiders, accepting insiders, and cross-taboo themes we’ve seen a thousand times.

In the end, I think something like The Onion's send-up of formulaic Pixar movies applies to this too.

  • Anonymous

I feel that I have never seen a sci-fi movie that portrays the social and societal consequences of its futuristic technology even nearly realistically. Also, sometimes the way people in sci-fi movies use technology hasn't even caught up with how tech-savvy people use technology now.

  • stevesailer

Writer-director Spike Jonze (co-creator of the "Jackass" franchise) and star Joaquin Phoenix (who once made a spoof documentary about how he was quitting acting to become a rap star) have a long history of pulling pranks on audiences. "Her" is funniest if viewed as a spoof of the kind of people who think it is a great movie.


  • Geoff Brown

I agree that the movie suspends lots of reality to keep things neat for the story, but I think the outcome of the movie is spot on. We would have nothing to offer a sufficiently intelligent A.I.

    • IMASBA

      We could be their pets.

  • idontknow33

Haven't seen it, though I suspect you're missing the point.

    Criticizing a movie where a guy falls in love with a talking phone app for not delving into all the aspects of AI you happen to be interested in is kinda silly, I think.

    • Mahmet Tokarev

But unfortunately people ARE claiming it's a "realistic" depiction of a singularity. You would know that if you RTFA.

      • anon

the word "realistic" when used in a movie review means something like "possessing verisimilitude along the dimensions relevant to the movie's themes". it does not mean "an accurate simulation of a counterfactual reality".

idontknow33's comment is on the money, even if you wish it weren't.

      • http://overcomingbias.com RobinHanson

        By this standard pretty much every movie is realistic. Which would make “realistic” a pretty useless category.

      • anon

"realistic" still conveys approval and tells the reader to expect complex characters and a bittersweet final act. it isn't entirely useless, but it doesn't contain information about a movie's logical coherence or plausibility.

idontknow33's interpretation is the "normal" one. if you trimmed your post down to a comment and posted it over at, say, avclub.com, you'd be mercilessly mocked for cluelessness. please do this!!

fwiw, i enjoyed the main post, i just think it's unfair to ridicule idontknow33's totally normal reaction.

      • http://overcomingbias.com RobinHanson

        I accept that there are some communities where the only kind of movie realism of interest is realistically complex characters and realistically mixed outcomes for central characters. I don’t accept that I live in such a community.

      • anon

i didn't imply that you did. i reminded you that you're using a word funny, and you and your readers shouldn't become hostile when a normal person says, "hey, you're missing the point", because you are. i even noted that i enjoyed your post, and wish to add that i am extremely sympathetic to your worldview. nevertheless, idontknow33's comment is still germane.

note that i replied to a post in which one of your commenters told someone to "read the fucking article".


    I may or may not see “Her”, and if I do, it won’t be until later this year when it is available on Netflix (money is tight lately). However, I have a question about the plot that I don’t mind having spoiled by somebody that’s seen it. So, SPOILER ALERT, dear reader.

    Is Joaquin Phoenix’s character in “Her” the only person in society that is in possession of this human-esque operating system? Or if everybody has it, does he seem to be the only person that falls in love with it? The trailers don’t make it clear.

    • http://overcomingbias.com RobinHanson

      Lots of people have them, and lots love them.
