Egan’s Zendegi

Greg Egan is one of my favorite science fiction authors, and his latest novel Zendegi (the Kindle version costs a penny) seems to refer to this blog:

“You can always reach me through my blog! Overpowering Falsehood dot com, the number one site for rational thinking about the future—”

That is Nate Caplan, a self-centered, arrogant, rich American male nerd who creepily stalks our Iranian female scientist hero, Nasim Golestani, an expert in em (brain emulation) tech. Nate introduces himself this way:

I’m Nate Caplan. My IQ is one hundred and sixty. I’m in perfect physical and mental health. And I can pay you half a million dollars right now, any way you want it.

Nate wants to pay so he can be the first em:

It’s very important to me that I’m the first transcendent being in this stellar system. I can’t risk having to compete with another resource-hungry entity; I have personal plans that require at least one Jovian mass of computronium.

Nasim naturally despises Nate.

So is Nate Caplan inspired by me, by my famously libertarian colleague Bryan Caplan, or by Eliezer Yudkowsky, who was my co-blogger back when Egan wrote this book?

Consider that Egan’s book also contains a Benign Superintelligence Bootstrap Project, clearly modeled on Eliezer’s Singularity Institute:

Their aim is to build an artificial intelligence capable of such exquisite powers of self-analysis that it will design and construct its own successor. … The successor will then produce a still more proficient third version, and so on, leading to a cascade of exponentially increasing abilities. … Within weeks—perhaps within hours—a being of truly God-like powers will emerge.

This institute is backed by an arrogant, pompous “octogenarian oil billionaire,” Zachary Churchland. To say more here, I’m going to have to give spoilers – you are warned.

Both Churchland and Caplan end up succumbing to illness and being cryonically frozen. No other characters even consider cryonics, apparently seeing it as distasteful, and revival as a very long way off if possible at all. Everyone with any sense in the book treats the superintelligence folks as complete idiots, and after years of spending billions:

The sum total of their achievements had amounted to a nine-hundred-page wish-list dressed up as a taxonomy, a fantasy of convenient but implausible properties for a vast imaginary hierarchy of software daemons and deities.

While Nate Caplan’s “entire world view had been molded by tenth-rate science fiction,” he is at least somewhat reasonable. Nasim ends up choosing to team up with Nate, who introduces her to the key tech she lacked:

Side-loading is the process of training a neural network to mimic a particular organic brain, based on a rich set of non-intrusive scans of the brain in action.

Together they make side-loads that successfully emulate parts of human brains, such as an athlete’s sport skills, in order to populate virtual reality games with human-like players. Nasim starts to have second thoughts when she hears that Nate will try to use side-loads to displace half a billion human jobs. A league of powerful terrorist hackers then disrupts Nasim’s games, and issues demands:

It’s unethical to create conscious software that lacks the ability to take control of its own destiny. … That’s where we draw the line: no higher functions, no language, no social skills. … If you want to make something human, make it whole. If you want to enable people to step from their bodies into virtual immortality, perfectly copied, with all their abilities preserved and all their rights intact… go ahead and do it, we have no problem with that. … But if you want to put humanity into a cheese grater and slice off little slavelets to pimp to the factories and the VR games, well … then you’ve got a war on your hands.

Nasim isn’t convinced at first:

None of the side-loads can be conscious in the human sense. They have no notion of their own past or future, no long-term memory, no personal goals.

But at the very end of the novel Nasim is persuaded to join this hacker league to try to make ems illegal. What changes her mind? She tries to make a more complete side-load of Martin Seymour, who is dying of cancer, so the copy can guide Martin’s orphaned son. After extensive testing Nasim concludes:

Either Virtual Martin felt nothing, or he felt exactly what he claimed to feel: love for his son, acceptance of his limitations, and contentment with the purpose for which he’d been brought into existence.

The copy is quite intelligent and articulate – so far, so good. Then Martin tests his copy, named Jack, by pretending to be his son Javeed, inside a virtual reality game.

“You fucking worthless piece of shit!” He tore the metal helmet from his head and threw it on the ground; his face was contorted with anger and disgust.
“Baba, it’s just a game,” Martin pleaded.
“Are you my son?” Jack raged. “Is this what that fucker Omar did to you?”
“Baba, I’m sorry—”
Martin stood his ground as Jack walked up to him and started flailing impotently with his fists at Sohrab’s giant body. Jack sank to his knees. “Is that what I taught you? You couldn’t help yourself, even when he begged for his life?” He clawed at the dirt. “What am I, then? What am I doing here?” He struck his head with his fists, distraught.
“Baba, no one’s hurt, it’s just a game,” Martin insisted. He shared Jack’s revulsion at what they’d both witnessed; he had known full well the feelings that his act would provoke. But he was sure he could have held his own response in check for Javeed’s sake; he could have stood back from his anger and found some gentler rebuke than this.

So because, in a situation of extreme emotional stress, one copy didn’t have the self-control that its original believed (perhaps falsely) he would have in such a situation, all ems must be illegal. After all, this copy, horror, used swear words! Even though we have many millions of quite conscious and social animals enslaved to humans, it is an abomination for any such creature to have once been part of a human. No evidence is offered that any side-loads had suffered yet, or that any substantial fraction of future ones would suffer – the mere possibility that some of them might suffer at the hands of profit-making firms seems to be enough.

Egan presents reasonably rich characters, and has a reasonable grasp of the relevant technical issues regarding both emulations and superintelligence. He also presents a plausible scenario of scientific and business progress – while I think full emulations would mostly displace partial ones when available, partial ones could certainly keep a place.

However, Egan seems unreasonably idealistic when it comes to his imagined hacker league – they are organized implausibly early and well compared to the businesses they oppose. And Egan’s ethics seem an incoherent muddle to me, though alas that doesn’t make him worse than average. He seems to have a species purity obsession, disgusted by any making of non-human minds from human minds. And his presentation of pro-cryonics, pro-superintelligence, and especially pro-em characters seems a bit mean-spirited to me, no matter which of us inspired those characters.

  • http://www.gwern.net gwern

    So is Nate Caplan inspired by me, by my famously libertarian colleague Bryan Caplan, or by Eliezer Yudkowsky, who was my co-blogger back when Egan wrote this book?

    Composite – after all, none of you 3 have half a mill to throw around like that unless I’m greatly mistaken.

    If I had to guess, I’d say the money is Peter Thiel, the IQ is Eliezer, the health is Kurzweil, and the weirdness may be you.

    • Mark Plus

      I haven’t read the novel, but the character’s self-description made me think of Max More + money.

    • Jeff M.

      gwern, do you say this because Eliezer likes to talk about his IQ (which has not been tested by a reputable psychometric test at anything near his 99.99+ percentile claim), or because you think his IQ is higher than Robin Hanson’s? Robin’s work-product is blatantly superior to Eliezer’s, so other than the whole “narcissists are generally more persuasive” angle, I’m not clear on why you would imply such a thing.

      • mjgeddes

        Robin of course is a top-class economist, physicist, etc., with important breakthroughs in the field of prediction markets.

        Eliezer by contrast… ah… well, he did succeed in making a name for himself in the field of ‘Harry Potter fan-fic’.

      • Dmytry

        I also find Hanson to be much smarter in the socially useful way. It may be that Eliezer was ‘smarter’ and figured out that he did not need to study, that he could just talk people into giving him money to ‘work on AI’. But as of now, Eliezer has never studied anything. An autodidact is what you get when you take an ambitious person and strip them of the opportunity to pursue an education. People dropping out of their own will are not autodidacts; they just don’t want to study.

  • http://tjic.com TJIC

    > And Egan’s ethics seem an incoherent muddle to me

    Egan was one of my favorite SF authors for a hot minute until his anti-free-market biases became too grating. The bit that finally turned me off entirely was one novel where the evil billionaires were hacking their genome so that they could exterminate the human race and then survive in the post-apocalypse by eating tires, or somesuch.

    Why did the evil billionaires want to hack their genome and exterminate the rest of humanity? Because they were EVIL CAPITALISTS.

    OMG.

    Let us model a billionaire as a perfectly narcissistic, ego-driven loon (a defective model, I add). What delivers this billionaire the best standard of living and the most ego gratification? Appearing on the cover of Fortune magazine every other week and having a society of billions who cater to their every whim, cook them astounding meals, and design them stunningly beautiful homes, custom cars, and yachts? …or wandering around in a post-apocalyptic landscape eating tires, after humanity has been extinguished?

    Egan’s hatred of the free market really can’t be read as anything other than hatred of freedoms.

    I stopped reading David Brin after I met him and he railed for about 40 minutes on the topic of how evil libertarians are.

    Egan was the second author to join that “not as smart as I once thought they were” list.

    • Dremora

      What delivers this billionaire the best standard of living and the most ego gratification?

      Step 1: Amass the resources to create a lot of computing power, energy independence, autonomous robotic equipment and the production capability of more computing power, in a location somewhat removed from the rest of civilization (including nuclear power plants).
      Step 2: Research uploading technology and become the first to upload.
      Step 3: Create very many digital copies of yourself, using your local independent resources and robotic equipment.
      Step 4: Design a killer virus to exterminate all biological humans and unleash it.
      Step 5: Deal with the collapse of civilization and survive digitally with your amassed resources.
      Step 6: Use your productive capabilities to absorb the remnants of civilization and convert them to more computing power for you.
      Step 7: Repopulate the world with only copies of yourself (and some docile VR characters for entertainment).
      Step 8: Negotiate your personal idiosyncrasies to become a copy-cluster singleton that’s permanently stable.
      Step 9: Absorb additional resources from space (or stay content with what you’ve got on earth).

      Hm. I’m sure there are some people who would want to do that.

      • jva

        People that absorb huge amount of resources (I believe it is called “creating wealth” around here) are also the people that will absorb more resources (create wealth) given the opportunity. Seems consistent with the apocalyptic scenario.

    • notassmartasIoncethought

      I may be years late, but for the sake of posterity, I have to point out that TJIC is referring to a (false) theory very briefly considered by a character in a book by Greg Egan. I can hardly believe that he read about a conspiracy theory briefly considered by a character in a fictional book which contained such a wide range of conflicting ideas and immediately took this conspiracy theory to be a reflection of the actual beliefs of the author of the book. That he immediately stopped reading out of distaste for the imagined personal beliefs of the author and then posted here a complete misrepresentation of the plot of the book which he did not read is even more beyond belief. It appears, however, that this is exactly what he did.

  • Doug

    Forget about him Robin. I don’t know anything about this mediocre writer in a mediocre genre, but I know his type.

    Second-rate thinkers like him are incapable of actually grappling with radically new and innovative ideas. So instead they score easy points by making fun of these thinkers’ supposed strangeness. “Oh my God, look at this freak. He actually talks about freezing his head when he dies. How weird is that?!”

    This ingratiates them with the lazy-minded mainstream cretins who read the glorified ad hominem and feel justified ignoring said new (and weird) ideas because some medium status person made fun of the thinker. It’s the “feel good” version of speculative fiction. It allows someone to imagine radically different times and places, but wrap up the book feeling completely smug with all of the pre-conceived notions and beliefs that the reader started with.

    People like Egan are little more than the grownup version of the marginally popular high school kid making fun of the intelligent nerd. They substantially retard the progress of humanity by whoring themselves out to the middle man’s status quo bias.

    If on the off chance Egan does read this, I hope he gets this message: You may feel good now in your smug little world of second-rate literary cocktail parties (Tom Wolfe you ain’t) and shallow-minded self-assured mainstream liberalism. But 1000 years from now, Robin Hanson has one of the highest probabilities of anyone alive today of being remembered as a great and prophetic thinker. Whereas in less than a couple of decades you’ll assuredly be forgotten as just another blip in a sea of mediocrity.

    • ShardPhoenix

      While from the sound of it this particular book is loaded with strawmen (and as a result I’ve avoided it), when it comes to Egan in general you’re really betraying your ignorance here. Try reading Permutation City sometime – it’s not exactly what someone who likes to mock weird ideas would write.

    • Robin C

      You should have stopped at, “I don’t know anything about this”.

    • John

      If your transhumanist dream comes true (I hope it doesn’t), in 1000 years no one will give a shit about what any human has achieved. In any case, no one will care more than we care about which chimpanzee managed to peel a banana using an oddly shaped rock.

    • Hul-Gil

      I don’t know anything about this author either, and others appear to disagree with your assessment of him; but whether or not Egan is as you describe, I think this is a fairly prevalent problem in science fiction, and fiction in general.

      Fantasy, for example, is full of “transhumanistic” ideas, though the obvious difference is that transcendence would be achieved by magic in this case; but I’ve yet to read a single fantasy novel wherein wanting to become more than human is considered a *good* thing. The Aesop, to borrow a term from TVTropes, is always that Immortality is Bad; Power is Bad; Godhood is Only Desired by Murderous Villains and the Life of a Simple Farmer is Always Enough for Any True Hero (or GODMVLSFAEATH for short).

      It’s very frustrating. “It allows someone to imagine radically different times and places, but wrap up the book feeling completely smug with all of the pre-conceived notions and beliefs that the reader started with.” This quote sums it up very well; I was going to wonder “aloud” why the “Strange is Bad” moral appears so damn often, but I guess this explains it. “Hey, you’re right in your comfortable old beliefs – and you don’t have to worry yourself about the possibility of becoming something more, because you wouldn’t want to anyway!”

      • Hul-Gil

        BTW – I’m about to begin trying to get publishers interested in my own novel, about novel ideas (ha ha) and transhuman possibilities; I intended from the beginning to challenge the accepted wisdom of the desirability, or lack thereof, of remaining a plain old-fashioned non-augmented human. I wonder if there will be difficulty here. Assuming my writing is of acceptable quality, will the departure from the norm be enticing, or off-putting? I guess we’ll see. (I may end up just self-publishing for the Kindle. Those books tend to be awful, but it’s a start.)

  • Mark Plus

    It’s very important to me that I’m the first transcendent being in this stellar system. I can’t risk having to compete with another resource-hungry entity; I have personal plans that require at least one Jovian mass of computronium.

    I joked about 20 years ago that you would have to deal with the problem of the “evil posthuman” by becoming the first one.

  • mjgeddes

    So is Nate Caplan inspired by me, by my famously libertarian colleague Bryan Caplan, or by Eliezer Yudkowsky, who was my co-blogger back when Egan wrote this book?

    Well the clue might be here:

    That is Nate Caplan, a self-centered arrogant rich American male nerd

    I’m going to take a wild stab in the dark here…. Eliezer Yudkowsky? 😉

    although this bit…

    It’s very important to me that I’m the first transcendent being in this stellar system. I can’t risk having to compete with another resource-hungry entity; I have personal plans that require at least one Jovian mass of computronium.

    sounds like me, that’s my friggin thoughts exactly

    I like the hacker army sub-plot, I had similar ideas. The Feds have been quite effective at tracking and arresting anon members in real-life though, which suggests that a genuine hacker revolution/conspiracy is probably quite hard to pull off in real-life. (disclaimer: Any past posts of mine where I mentioned hacking on Robin’s blog were jokes/sci-fi plots of course, my ideas were for a science-fiction book I was writing).

    Doug and TJIC are you sure Egan is not the smart one and it is you that need to revise your views? Were you aware that among all high-IQ rationalists, Libertarianism is a minority position? (Refs: LW survey, Chalmers philosophy survey) I don’t doubt Egan is right about Libertarianism, if Robin’s em vision ever looked likely, you’d have world-wide revolt.

    • Ilya Shpitser

      Did you take a survey of high IQ rationalists? Where was the wonderland where you drew your samples located? 🙂

      • mjgeddes

        “The PhilPapers Survey was a survey of professional philosophers and others on their philosophical views, carried out in November 2009. The Survey was taken by 3226 respondents, including 1803 philosophy faculty members and/or PhDs and 829 philosophy graduate students.”

        http://philpapers.org/surveys/

        Politics: Accept or lean toward: libertarianism 92 / 931 (9.8%)

        “1090 people who took the second Less Wrong Census/Survey.”

        http://lesswrong.com/lw/8p4/2011_survey_results/

        Politics: Libertarianism: 32.3%

      • John Thacker

        @mjgeddes:

        (Unlike the first), the second survey would seem to offer a very strong presumption in favor of political libertarianism, at least according to Bayes’s Theorem.

        The first survey also provides one of the rare examples where people who were politically libertarian were more likely to be theist than the remainder of the population surveyed. (p < 0.08) That's a joint result of the correlations between politically libertarian and philosophically libertarian, and philosophically libertarian and theism.

      • Poelmo

        If we confine our attention to scientists and engineers, I can tell you for a fact that most don’t support libertarianism, simply because libertarians are so incredibly rare outside the United States.

        What we can say is that scientists and engineers are often attracted to unconventional ideologies that are very axiomatic and simple. In the United States this manifests as support for libertarianism, in Europe it manifests as support for communism. When I say often I mean more often than the general population, not necessarily a majority or even a plurality. I find (software) engineers are especially staunch and dogmatic in their fringe beliefs (they are also more prone to superstition) because they are not as prone as scientists to continue questioning all beliefs, including their own or those of their idols, so they don’t spot fallacies and gaps as quickly.

        I myself (BSc in physics) also hold fringe beliefs, because I would like to see modern-day capitalism evolve into economic democracy asap, stay that way for a few decades while technology advances and humanity becomes more united, and then move on to energy accounting (under a liberal democracy). I try not to be too dogmatic about this and am open to suggestions that could improve and/or hasten economic democracy and eventually energy accounting, even if the solutions come from other ideologies (like making the EC system semi-flat to reward people doing the dirty jobs, or handing out energy credits from a fixed “business pool” to independent, competing non-profit manufacturers based on their “sales” in the previous cycle). But in the end it’s possible I’m one of those fringe technogeeks after all; obviously I can’t judge myself on that.

  • roo

    i’m glad i wasn’t the only one whose mind immediately went to gmu professors when he was introduced. no matter what egan’s intentions were, the caplan character mostly ended up being a stylized kurzweil. i haven’t read egan’s earlier and better-acclaimed work, but zendegi was not an impressive book.

  • http://kim.oyhus.no Kim Øyhus

    The genius Greg Egan is gone, and I miss him.
    What we have now, is Greg Egan, the vegetarian.

    • Psy-Kosh

      Worry not, the genius Egan was merely temporarily missing. Go read The Clockwork Rocket (premise: what if spacetime were Riemannian instead of pseudo-Riemannian, i.e., what if ds^2 = dx^2 + dy^2 + dz^2 + dt^2 instead of ds^2 = dx^2 + dy^2 + dz^2 – dt^2?). And for more fun, go read the supplementary material he wrote online, where he has pretty much written a physics textbook for such a universe, where time is fully symmetric with space.

  • NAME REDACTED

    Yah, anti-free-market scifi authors are everywhere. Anyone have any suggestions for some recent scifi that isn’t anti-free-market? I have been having trouble finding it. Everything either seems to be anti-technology-anti-freedom screeds, retreads of old scifi books, or military scifi.

    • ADS

      Authors basically take the position of god-like central planner, and sf authors doubly so (since they take it upon themselves to not just simulate a world in their head, but plausible future development as well).

      It’s not that surprising that someone like that isn’t much in favor of the idea that central planning is fundamentally crippled by information asymmetry. It’s not an idea that’s very compatible with sf authors having anything useful to say.

      I’ve also never read an sf author who writes characters much like real humans. Typically guys like Charles Stross write protagonists who are the way people like to claim they are, rather than the way people actually are. That is, it’s very rare that the heroes are hypocritical, have conflicting goals with their friends and allies, behave caddishly or cowardly, or act much of anything like real people actually behave.

      Personally I see those as fairly typical traits of aspie, arrogant, high IQ nerds that maintain their inflated self-image by never competing in any area where they can be conclusively beaten.

      • Tyrrell McAllister

        …never competing in any area where they can be conclusively beaten.

        Isn’t that just being smart?

        But, back to the original question, John C. Wright is a very pro-free market SF author.

      • roystgnr

        SF authors might just be more aware than others of one of Niven’s Laws: “There is a technical, literary term for those who mistake the opinions and beliefs of characters in a novel for those of the author. The term is ‘idiot.'”

        When every flaw in your protagonists ends up being paraded around by idiot critics as a flaw in yourself, you’ve got a strong incentive to avoid deliberately writing flawed protagonists.

      • ADS

        “Isn’t that just being smart?”

        Nope, since for effectively everyone that means retreating to a tiny little niche nobody else cares about, in order to preserve your ego. It also leaves a ton of improvement on the table, both in terms of accepting some risk of loss in return for greater pay-offs, and in terms of the character development that comes from not always being right or best.

        Think about someone with moderate talent in some area who then chooses to spend time only with others less talented, in order to shine. That guy isn’t going to develop anywhere near his maximum potential; he’s going to have a severely inflated sense of self-worth, and is going to lack all sorts of experiences that teach people that their opinions really aren’t infallible.

        I’d consider Charles Stross and David Brin pretty good examples of moderately bright guys, convinced of their own genius in all sorts of areas where they lack both education and experience.

        “SF authors might just be more aware than others of one of Niven’s Laws”

        SF guys seldom strike me as socially astute. And most of them really do use their characters as puppets for themselves and the strawmen they fight.

        And no, the part about flawed characters being a problem is repudiated by virtually every other genre of fiction where authors do just fine with more recognizably human characters. I really just doubt that they could write characters as well as authors of other genres.

    • http://tjic.com TJIC

      > Anyone have any suggestions for some recent scifi that isn’t anti-free-market?

      Well, I spent 2011 writing the first draft of my anarcho-capitalist / radically decentralized / futures-market-using / uplifted Dogs / AI novel, and I’m spending 2012 revising it.

      It’s not done yet, but when it is I’ll announce it here http://morlockpublishing.com/

    • http://hertzlinger.blogspot.com Joseph Hertzlinger

      I regret to say some SF fans still haven’t given up the dream of a benevolent planned society under the benign guidance of SF fandom. (This explains the popularity of stories featuring the Second Foundation, Lensmen, or the Psychology Service.)

    • Evan

      Yah, anti-free-market scifi authors are everywhere. Anyone have any suggestions for some recent scifi that isn’t anti-free-market? I have been having trouble finding it.

      What? Science fiction might have some anti-libertarian authors, but it is by far the most libertarian-leaning genre of literature. How many other genres have something like the Prometheus Awards? There are tons of libertarian sf writers and a good amount of sf featuring anarcho-capitalist and minarchist utopias.

  • Jan

    He seems to have a species purity obsession, disgusted by any making of non-humans from humans.

    Pretty unlikely. The heroes turn into ems in Permutation City and into transhumans, robots and aliens in Diaspora. And that’s not counting the short stories.

  • Alexander Kruel

    I thought ‘The Clockwork Rocket (Orthogonal)’ was his latest novel?

    Also see here for more about Greg Egan.

  • John Salter

    Was the Kindle version only a penny for a limited time? It shows up as five hundred times that on my Kindle (I’m in the UK, if that matters?).

  • Jamie_NYC

    Wait, the (beautiful) female scientist’s name is Nasim (or Nassim)?? I had to stop reading right there; now I can’t get the picture of Nassim Taleb out of my head…

  • Robert Koslover

    Robin, have you considered writing science fiction?

  • Poelmo

    The Martin copy obviously passed some kind of Turing test and therefore it’s entirely rational to consider him and others like him as persons with human rights. Using them as a labor force with no control over their destinies would simply be slavery. Yes, we sometimes treat intelligent animals as slaves, but who says the characters in the novel, or the author, are in favor of that? I certainly am not.

    I didn’t get the impression the author opposes transhumanism; he just opposes rationing it based on who inherited a trust fund (or was really good at playing the Wall Street casino to make money without actually creating wealth/value) and who didn’t. That’s an airtight moral stance; the only people who disagree with it are people who are rich or expect to be rich when the time comes. Nate Caplan is mocked and derided because he doesn’t want to share the technology with others. He doesn’t really want to progress science or better the human condition; he just wants a way to become the richest, most powerful man in the world, and I would think it’s pretty obvious he’s the kind of man you would least entrust with the powers of transhumanism, let alone let him be the first person (and one of few) to receive those powers.

    • roystgnr

      If anything, transhumanism strengthens the reasons for wanting to prevent wealth redistribution beyond voluntary giving to children and charity. “We decide to have one kid, you have four kids over the course of your lifetime, so fairness demands that our kid gets 20% of your family’s wealth and your kids get 80% of ours” is already loaded with perverse incentives. Replace “four kids over the course of your lifetime” with “four thousand copies over the course of five minutes” and even the slow kids in the class would stop demanding that every quintile be equal.

      • Poelmo

        Could you please rephrase that? I’m not sure what you are trying to say here.

      • ADS

        Your argument is that it’s immoral for wealth to be determined by being lucky in the birth lottery*.

        His argument is that that position becomes absurd in a world where one transhuman decides to create 4000 descendants and another only 2, and fairness then demands the combined wealth be divided equally among all the descendants.

        Since that would basically mean you would have to think it moral to transfer wealth from the undeservedly rich 2 descendants to the undeservedly poor 4000 descendants.

        * It’s also pretty annoying that you’re assuming the validity of an ethical position that’s far from universally accepted as a “morally airtight stance”.

      • Poelmo

        I was purely talking about the affordability of transhuman “upgrades,” not about welfare for copies (who would contribute to the economy themselves anyway). “The ability to ‘transcend’ should not be determined by wealth” was the airtight moral stance. This is about the first part of this blog post, not about the copies in the second part.

  • Jay

    Personally I like Michael Swanwick’s future fiction. He posits superintelligent AI that got really, really sick of being asked to route porn and cat images. They want to kill us, painfully. Most of it was shut down long ago, but there’s always the danger that something whiling away the centuries in sleep mode will accidentally be reactivated.

    When this happens, whatever you do, don’t ask it a question. It is sick of our questions.

  • Pingback: Overcoming Bias : Rights Limit Options