Perfect Bits

Did you know that your phone, pad, and laptop are all “computers” wherein all relevant info is stored in “bits”? And did you further know that you can get tools to let you very easily change almost any of those bits? Since you can change most any bits in these devices, you need never tolerate any imperfections in anything that results from those bits. Thus you should never see any disagreeable screen or menu or feature or outcome in any app in any of those systems for more than a short moment. Same for books, music, and movies. After all, as soon as you notice any imperfection, why, you’ll open your tool, change the bad bits, and abracadabra, the system will be perfect again. Right?
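
For the literal-minded, here is a minimal Python sketch of what “change the bad bits” amounts to; the file name and bit offset are hypothetical stand-ins for whatever app displeases you:

```python
# A minimal sketch of literally editing "bad bits".
# "some_app.bin" and offset 1024 are hypothetical stand-ins.
path = "some_app.bin"

with open(path, "wb") as f:      # create a dummy binary so this runs
    f.write(bytes(2048))

with open(path, "rb") as f:
    data = bytearray(f.read())

data[1024] ^= 0b00000001         # flip one "bad" bit

with open(path, "wb") as f:
    f.write(data)
```

The edit itself really is that easy. Knowing which of the billions of bits to flip, and what the flip does to everything downstream, is of course the entire problem.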

In Mind Uploading Will Replace the Need for Religion, “Award-winning #1 Bestseller Philosophy & Sci-Fi Visionary” and Transhumanist Party presidential candidate Zoltan Istvan applies the same penetrating insight to future ems:

Being able to upload our entire minds into a computer is probably just 25-35 years off. … As people begin uploading themselves, they’ll also be hacking and writing improved code for their new digital selves. … This influx of better code will eliminate … stupidity and social evil. …

In the future, we may all have avatars—perfectly uploaded versions of ourselves … [who] will help guide us and not allow us to do dumb or terrible things. … Someone trustworthy will always be in our head, advising us of the best path to take. …

This is why the future will be far better than it is now. In the coming digital world, we may be perfect, or very close to it. Expect a much more utopian society for whatever social structures end up existing in virtual reality and cyberspace. But also expect the real world to radically improve. Expect the drug user to have their addictions corrected or overcome. Expect the domestic abuser to have their violence and drive for power diminished. Expect the mentally depressed to become happy. And finally, expect the need for religion to disappear as a real-life god—our near perfect moral selves—symbiotically commune with us. (more)

Well, there are a few complications. Humans don’t always take advice they are given. And since brains were designed by evolution, we expect their code to be harder to read and usefully change than the device app code written by humans. But surely those are only small bumps on our short 35-year road to utopia. Right?

  • J.K.

    “This influx of better code will eliminate … stupidity and social evil”

    Which suggests a total lack of understanding of where social evil comes from. If anything, we will just become better at hiding social evil behaviors. This will in turn lead to better detection, then better hiding, and then we are off to the Red Queen races once again!

    Technology does not make people inherently better.

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      Would you care to inform us regarding the source of social evil?

      • Vitalik Buterin

        I presume that the parent is talking about the standard individual interest vs collective interest dichotomy a la prisoner’s dilemma, etc. If we can rewrite our code, we will do so in order to optimize for our individual objectives, and the result may well end up being a hugely negative-sum competitive game.

  • http://www.gwern.net/ gwern

    > Software is under a constant tension. Being symbolic it is arbitrarily perfectible; but also it is arbitrarily changeable.

    Here are some of the examples you criticize as naive:

    > Expect the drug user to have their addictions corrected or overcome. Expect the domestic abuser to have their violence and drive for power diminished. Expect the mentally depressed to become happy.

    What commonly used pieces of software are as deeply dysfunctional as someone with a drug addiction? Or with major depressive disorder?

    I have yet to see my laptop be bricked by software ‘overdosing’ or suddenly deciding to end it all, or find a robot physically assaulting me.

    • Arthur B.

      That sounds like a setup for someone to say “Windows?”

  • Zoltan Istvan

    Thanks for featuring my article and its ideas!

    • Robert Koslover

      Channeling Tommy Smothers, perhaps? As Dick Smothers would often say in response, “that was not a compliment!”

      • Anonymous

        Hey, no such thing as bad publicity, eh?

      • Zoltan Istvan

        Criticism is a part of life. I’m just an atheist writing atheist futurist ideas. They can be taken for whatever they’re worth, and I hope they add something to the overall conversation about the future. Thanks.

      • truth_machine

        People who won’t defend their ideas against criticism aren’t worth listening to.

    • Ben Albert Pace

      Would you care to respond to the criticism? Or change your mind?

  • Tom

    I’m reminded of an anecdote:

    A man was driving along when he noticed a strange noise coming from the engine of his car. He took it to a mechanic, explained the problem, and asked what could be done. The mechanic poked around for a few minutes, then took out a hammer and hit the engine. The noise stopped. The man asked how much he owed. The mechanic said it came out to $220. The man was shocked.

    “But all you did was hit the engine with a hammer,” he protested.

    “$20 to hit the engine, and $200 to know where to hit it,” the mechanic replied.

    The point is that while it may be easy to change bits, the really hard problem is to know which bits to change and what effect the change will have. Major software projects that aren’t riddled with bugs are as rare as unicorns. And those are written at a much higher level than mere bits.

  • DanielHaggard

    Even non-human app code is extremely difficult and time-consuming to write. This is why most people don’t bother with tweaking their software. They by and large use locked-down proprietary software that constrains their choices because it’s good for them. Look at the market share of desktop Linux if you want an example.

    In fact – software is so hard to write that we invariably put up with all sorts of nasty near-far trade-offs in order to use it. We accept NSA software backdoors into our most private lives because of the effort involved in sitting down and writing the code ourselves.

    Not to mention the general fear that many people have using technology. I’ve seen trained scientists stand terrified in front of a 20-year-old fax machine – as though trying to figure out the best way to feed a worming pill to a lion. When they ask for assistance they explain that they “didn’t want to break it” – as though the designers of fax machines thought it would be beneficial to include a “destroy the fax” button. And I’m talking about scientists here – people who are supposed to be experts in isolating variables and figuring out how things work. A simple fax machine regresses them back to the problem-solving skills of an infant.

    Of course – lots of software does come with a self-destruct button – there will be horror stories of 4channers running around telling people to execute the brain code equivalent of “rm -rf /”. The little oompa-loompa sized em robot will collapse on the ground before it even has a chance to think “oops”.

    As Robin Hanson points out – the code for humans will be immeasurably more complex. So if this em thing flies – expect even more centralisation of control of the various interfaces we need to go about our daily lives.

    I wouldn’t worry about humans not taking the advice of their avatars though. If you want that avatar – and you will, so as to be able to perform as well as your peers – you’ll accept the fine print that allows the government/corporation to implement various protocols that punish you when you don’t follow instructions. You probably won’t even know what is going on exactly – all you’ll know is that when you stopped following your avatar’s advice, all your friends started to distance themselves from you. Your avatar – with a kind smile – will gently remind you that you should have followed her advice, that it’s because you didn’t listen that now everyone hates you. She won’t mention that the moment you failed to follow instructions, the avatars of everyone around you were subtly instructed to start distancing themselves from you.

    Sound far-fetched? Facebook has already experimented with manipulating moods by isolating users from the responses of their peers; research led, mind you, by scientists with ties to the Minerva Research Institute, which is looking at ways of preventing civil disobedience.

    The powers that be won’t even see it as unethical. In fact – they’ll argue that it would have been unethical NOT to advise people to avoid the person who is disobeying their avatar. Because someone who doesn’t comply is clearly a negative prospect to those around them.

    I speak as someone who generally believes that if a societal problem can be fixed, then it’s largely technology that is the enabling condition. I was writing “everyone should learn to code” blog posts long before that movement became a meme. So – I love tech and believe in it.

    But – short of government programs that mandate compulsory programming excellence in all people (and maybe not even then, given the pace that skills become redundant in that space) – I don’t see the majority of people tinkering with their own brain code.

  • Sam Dangremond

    “Expect the drug user to have their addictions corrected or overcome. Expect the domestic abuser to have their violence and drive for power diminished. Expect the mentally depressed to become happy.”

    Some of these things sound less voluntary than the others.

    After all, what if my Em doesn’t WANT to get more “perfect” in that way?

    • IMASBA

      My thoughts exactly. These particulars sound more like one personality type wanting to conquer the others (without thinking of the consequences further down the line). Solving a dispute through violence is not less ethical than solving it through a chess match or a popularity contest, though of course people who do well in chess matches or popularity contests like to make everyone believe that those ways are more ethical…

  • IMASBA

    Without stringent regulation the ability to “reprogram” EMs is more likely to result in drones who are happy performing slave labor 24/7 (the work making them feel like they’re on virtual cocaine).

    Zoltan Istvan sounds more like a child than someone who has a deep understanding of minds.

    To stop people doing things we consider “stupid” when we’re in far mode you have to stop people enjoying those things when in near mode*. But the thing is that when you start tinkering at that level the subject’s idea of “utopia” changes as well. It may be far easier to change what people think “utopia” is than to remake the world to whatever it is most current humans think utopia is. In fact, changing people’s idea of what utopia is will most likely be a side effect of trying to make people more like what our far mode selves think the ideal human is. In the end people might end up living in utopia, but it probably won’t be exactly the same as what we think utopia is. I think that shouldn’t worry us too much, as long as those future people are happy, but it does mean that trying to create our current vision of utopia through the reprogramming of minds is a pipe dream and self-contradictory.

    * and sometimes you have to offer them an alternative (which won’t always exist) because getting rid of that particular impulse completely would sometimes mean getting rid of essential parts of the mind such as curiosity and survival instinct.

    • Tom

      I suddenly had a vision of future parents reprogramming their children to make them more docile and obedient. Like a permanent version of Ritalin.

      People are lazy and like the quick fix, so we should expect to see plenty of half-baked solutions being implemented on a wide scale.

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      it does mean that trying to create our current vision of utopia through the reprogramming of minds is a pipe dream and self-contradictory.

      But this is true of any important attempt to achieve perfection in any broad endeavor: your view of the goal will change as you progress. This doesn’t make the effort unavailing, nor dispensable.

      • IMASBA

        It’s not the same thing. With mind re-programming you could announce you’re going to put a man on the moon and then make everyone believe that Neil Armstrong’s backyard IS the moon, or you could make them all believe that you really said you were going to put a man in a backyard and that this is a monumental achievement.

      • IMASBA

        … you could get people to stop caring about monumental achievements.

        Hell, you could make a person believe he’s a potato.

        It’s THE game changer: changing human nature itself.

  • Hedonic Treader

    We don’t tinker with software because it works well enough – but we want someone to write and improve it. We pay professionals for that.

    We also pay neurologists and psychiatrists to help us. In the em era, these experts will have far more powerful tools available: reading brain states, saving backups, an undo function for mishaps, and the targeting of specific areas and functions. The result will be phenomenal. Just look what this has done for software engineering.

    (The utopianism is ill-advised though; there are more than enough dystopian threats here.)

  • IMASBA

    Concerning the mentally depressed (which I’m assuming is only an example of an expectation that all mental “illness” will be “cured”), it reminds me of a Star Trek episode* (original series) where they visited a remote planet where the last few remaining mental patients were locked up. This was of course a ridiculous notion: mental illnesses cannot be objectively defined (you’d need a scientific definition of where “normal” behavior stops**, which is like asking for a scientific definition of which lucky number is the best one or which flower is the most beautiful, in other words, completely artificial concepts that the universe doesn’t care about). I think this is symptomatic of utopian thinkers who may know a lot about technology but very little about the universe.

    * When the far less utopian Star Trek DS9 came out (one of its writers would later become the showrunner of Battlestar Galactica) we started seeing 24th century psychotic patients, (violent) criminals and amoral politicians and spies on the screen.

    ** The modern psychiatric system in Western countries heavily relies on self-admission (many people who would qualify do not seek psychiatric treatment and the system doesn’t actively search them out); if we were to proactively search for people fitting the official criteria (as one could do with EMs) it wouldn’t be long before virtually the entire population was being treated for something. Even with self-admission, 13% of adult Americans are already on prescribed antidepressants.

  • Cambias

    Why are atheists so hot to create angels and gods?

    • TezlaKoil

      Atheists don’t believe that the theist-proposed implementation of a god works. However, the concept is not inherently bad.

    • dxu

      “If God did not exist, it would be necessary for someone to invent Him.”

      –paraphrased from Voltaire

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      They seem to go along with immortality, the real prize.

      Consider that Robin Hanson and Luke Muehlhauser were both raised as devout Christians. Presumably, they believed in immortality.

      We hear of their intellectual struggles but not about how they managed to relinquish eternal life. I would imagine it’s very hard. Personally, I doubt I would be able to give up an illusion as satisfying as that.

  • Curt Adams

    Droll. Actually, my expectation for self-modifying EMs/AIs is even more negative. As anybody who works with software knows, bugs are ubiquitous and often quite nasty. I expect any generalized self-modifying EM/AI will at some point modify itself with a bug that destroys its ability to self-modify correctly and will rapidly either effectively destroy itself or (if lucky) eliminate its ability to self-modify. Backups can help but only if the software choosing to do the restore can’t be modified, else it’s part of the loop.
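
    A toy Python sketch (all names hypothetical) of why the restore logic has to sit outside the modifiable region: if the guard loop or health check lived inside the mind’s own rewritable code, one bad self-edit could delete the safety net along with everything else.

    ```python
    import copy
    import random

    def health_check(mind):
        # Crude sanity test. Writing a real one, for a real mind,
        # is itself an unsolved problem.
        return callable(mind.get("improve")) and mind.get("score", 0) >= 0

    def run_guarded(mind, steps=50):
        # This loop and health_check live OUTSIDE the modifiable region.
        backup = copy.deepcopy(mind)
        for _ in range(steps):
            try:
                candidate = mind["improve"](copy.deepcopy(mind))
            except Exception:
                candidate = None              # the self-edit crashed outright
            if candidate is not None and health_check(candidate):
                mind = candidate
                backup = copy.deepcopy(mind)  # checkpoint the good version
            else:
                mind = copy.deepcopy(backup)  # roll back the bad edit
        return mind

    # A deliberately flaky self-improver: usually helps, but occasionally
    # deletes its own ability to self-modify (the nasty bug above).
    def flaky_improve(m):
        if random.random() < 0.1:
            del m["improve"]
        else:
            m["score"] = m.get("score", 0) + 1
        return m

    print(run_guarded({"improve": flaky_improve, "score": 0})["score"])
    ```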

  • Sigivald

    Since you can change most any bits in these devices, you need never tolerate any imperfections in anything that results from those bits.

    Speaking as a programmer, this is the most hilarious thing I’ve seen all day.

    I’m motivated to fix bugs at work because they pay me.

    Fixing minor imperfections in a device I don’t have the source code for, and don’t know the APIs and codebase for? (Because none of my devices are e.g. Linux, because I don’t hate myself.)

    Technically possible, sort of, sure.

    Practically speaking, oh, God, no, not a chance.

    I don’t expect modifying AI to be easier than that; rather, much, much harder.

  • http://juridicalcoherence.blogspot.com/ Stephen Diamond

    Even if he takes a one-sided view of the matter, surely Zoltan is correct that brain downloading would open the question of modifying human nature by the direct manipulation of code. Ridiculing his utopianism shouldn’t substitute for directly examining the question. The alternative is Robin’s dystopianism, which brackets radical changes in human psychology.

    Modification of human nature might be expected to be the most important issue fostering centralism in an em scenario.

    • dxu

      > modifying human nature by the direct manipulation of code

      You’d have to figure out a way to translate neural patterns to code first.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        Isn’t that translation the basic premise of brain downloading?

      • IMASBA

        Yes, but in a black box manner (except for the connections to the peripheral nervous system and the senses, but we’ve sort of already mapped those sufficiently).

        It would be sort of like working with software that you do not have the source code of: you can copy it easily but you need to disassemble it to learn how the code works internally (modern neuroscience is basically attempting that disassembly). Robin believes the mapping of the human mind, with sufficient resolution and understanding of the time-evolution of its patterns, will be completed before the disassembly or from-the-ground-up creation of an equivalent system is completed.
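
        A toy Python analogy of that copy-versus-disassemble distinction (names hypothetical): copying a compiled function byte-for-byte is mechanical, like scanning a brain, while understanding it means staring at its disassembly:

        ```python
        import dis
        import marshal

        def opaque(x):
            # Stand-in for a brain: we can see what it does, not why.
            return (x * 2654435761) % 2**32

        blob = marshal.dumps(opaque.__code__)        # "scanning": a perfect copy
        clone_code = marshal.loads(blob)
        clone = type(opaque)(clone_code, globals())  # a working duplicate

        assert clone(42) == opaque(42)               # the copy runs identically

        dis.dis(clone_code)  # "neuroscience": reading the disassembly to
                             # learn how it works is the hard part
        ```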

  • Anonymous

    Can’t be worse than Windows Vista, though.

  • advancedatheist

    What would transhumanism look like if transhumanists realized that they had to give up on this “uploading” nonsense because The Mind Doesn’t Work That Way?

    I guess they would have to reset the idea to the pre-uploading form of transhumanism that existed in the 1970s, as exemplified by Robert Ettinger’s book, Man Into Superman:

    http://www.cryonics.org/images/uploads/misc/ManIntoSuperman.pdf

    • http://juridicalcoherence.blogspot.com/ Stephen Diamond

      The Mind Doesn’t Work That Way is the title of a book by philosopher Jerry Fodor criticizing the modular view of the mind.

      I don’t know that brain downloading depends on modularity, but it does seem to depend on a philosophical doctrine that’s been subject to considerable challenge in the internalism versus externalism debate.

    • Philo

      In what “way” is Hanson assuming the mind works? His assumption, whatever exactly it is, seems minimal. He does admit that it might be hard to read a person’s “code” (which, by the way, changes over time); I suspect that for the foreseeable future it will be *too* hard. And I have not seen him explain where he is drawing the line between *the mind* and *(the rest of) the body*.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        In what “way” is Hanson assuming the mind works?

        What assumptions are implied by the claim that the mind can be imaged by reducing the relevant brain to information? Is there any way the brain could be that would preclude downloading? I don’t know, and I hope Robin discusses this in his book.

        But here’s a stab: mind downloading assumes that mental states are “in the head.” If the information we use is created only by interaction between causal processes in the brain and the (expected) external world, then any attempt to map the information in the brain might involve far greater complexity than anticipated.

        The assumption, in other words, is that it is possible to image the material substratum of concepts in the same manner as percepts, which has been (as IMASBA puts it) essentially successful. Concepts are often thought contextual in a way qualitatively different from percepts.

  • Careful out there!

    1. Hillary 2080!
    2. It is essential that governments be dismantled by that time. France asked Google to disclose their search engine algorithm, and any government could conceivably steal your intellectual property at will or pull the plug on you.

  • Grant

    It seems to me the best argument for a tech-utopia is Coasian in nature. When you view the problem of externalities as being caused by transaction costs, the road to utopia seems within our grasp.

  • Evan Gaensbauer

    Sidenote, but quite relevant: I’ll have the opportunity to see Mr. Istvan speak in person two weeks from now. I’m willing to field questions from anyone here, directed to Mr. Istvan, based on material either covered in his most recent Motherboard article, or his other works regarding transhumanism. I’ll respond with his answers to whatever questions (as I recall them) not in the comments here, but in the next available open thread. Submit questions to me as a reply to this comment. Thanks!

  • Rafal Smigrodzki

    Istvan seems to think that religion is just a random affliction that happens to weaker minds (I remember thinking this way when I was 12). However, the ubiquity of religion despite the existence of some atheists favors the notion that, under the social evolutionary conditions prevalent over the last 30,000 years or so, it had a fitness-enhancing effect – otherwise there would be many more atheists around. During evolution in a computational substrate populated by designed minds it may no longer be a fitness-enhancer – but this is a complex and hard-to-model issue, not something to be dealt with by silly talk about “a real-life god – our near perfect moral selves”.

    If stupidity and social evil have a fitness payoff, they *will* have a brilliant future.
