Tag Archives: Ems

“I Robot, You Unemployed”

Tomorrow (Wednesday) at 7pm EST I’ll do a Learn Liberty Live! web presentation on “I, Robot. You, Unemployed” here. After a short ten-minute presentation, I’ll lead ninety minutes of discussion. I expect to focus on em econ.

Em Software Results

After requesting your help, I should tell you what it added up to. The following is an excerpt from my book draft, illustrated by this diagram:

[Diagram: SoftwareIntensity]

In our world, the cost of computing hardware has been falling rapidly for decades. This fall has forced most computer projects to be short term, so that products can be used before they are made obsolete. The increasing quantity of software purchased has also led to larger software projects, which involve more engineers. This has shifted the emphasis toward more communication and negotiation, and also more modularity and standardization in software styles.

The cost of hiring human software engineers has not fallen much in decades. The increasing divergence between the cost of engineers and the cost of hardware has also led to a decreased emphasis on raw performance, and an increased emphasis on tools and habits that can quickly generate correct if inefficient performance. This has led to an increased emphasis on modularity, abstraction, and on high-level operating systems and languages. High level tools insulate engineers more from the details of hardware, and from distracting tasks like type checking and garbage collection. As a result, software is less efficient and well-adapted to context, but more valuable overall. An increasing focus on niche products has also increased the emphasis on modularity and abstraction.

Em software engineers would be selected for very high productivity, and would use the tools and styles preferred by the highest productivity engineers. There would be little interest in tools and methods specialized to be useful “for dummies.” Since em computers would tend to be more reversible and error-prone, em software would focus more on supporting reversibility and error tolerance as well. Because the em economy would be larger, its software industry would be larger as well, supporting more specialization.

The transition to an em economy would greatly lower wages, thus inducing a big one-time shift back toward an emphasis on raw context-dependent performance, relative to abstraction and easier modifiability. The move away from niche products would add to this tendency, as would the ability to save copies of the engineer who just wrote the software, to help later with modifying it. On the other hand, a move toward larger software projects could favor more abstraction and modularity.

After the em transition, the cost of em hardware would fall at about the same speed as the cost of other computer hardware. Because of this, the tradeoff between performance and other considerations would change much less as the cost of hardware fell. This should greatly extend the useful lifetime of programming languages, tools, and habits matched to particular performance tradeoff choices.

After an initial period of large rapid gains, the software and hardware designs for implementing brain emulations would probably reach diminishing returns, after which there would only be minor improvements. In contrast, non-em software will probably improve about as fast as computer hardware improves, since algorithm gains in many areas of computer science have for many decades typically remained close to hardware gains. Thus after ems appear, em software engineering and other computer-based work would slowly get more tool-intensive, with a larger fraction of value added by tools. However, for non-computer-based tools (e.g., bulldozers) their intensity of use and the fraction of value added by such tools would probably fall, since those tools probably improve less quickly than would em hardware.

For over a decade now, the speed of fast computer processors has increased at a much lower rate than the cost of computer hardware has fallen. We expect this trend to continue long into the future. In contrast, the em hardware cost will fall with the cost of computer hardware overall, because the emulation of brains is a very parallel task. Thus ems would see an increasing sluggishness of software that has a large serial component, i.e., which requires many steps to be taken one after the other, relative to more parallel software. This sluggishness would directly reduce the value of such software, and also make such software harder to write.
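To make that subjective sluggishness concrete, here is a minimal sketch with made-up numbers (the speedups and the serial fraction are illustrative assumptions, not estimates from the text): when the parallel part of a program keeps pace with em hardware but the serial part does not, even a small serial fraction dominates the em’s subjective wait.

```python
# Minimal sketch, with illustrative numbers: how sluggish a partly-serial
# program feels to a sped-up em when only parallel hardware keeps improving.

def subjective_runtime(work_seconds, serial_fraction, em_speedup, serial_speedup):
    """Subjective seconds an em spends waiting for a program to finish.

    work_seconds    -- the program's runtime on today's hardware
    serial_fraction -- share of that work that cannot be parallelized
    em_speedup      -- em mind speed relative to a human (e.g. 1000x)
    serial_speedup  -- gain in single-processor speed (assumed much smaller)

    The parallel part is assumed to scale with overall hardware, so it keeps
    pace with the em; the serial part only gains serial_speedup.
    """
    parallel_wait = work_seconds * (1 - serial_fraction) / em_speedup
    serial_wait = work_seconds * serial_fraction / serial_speedup
    wall_clock_wait = parallel_wait + serial_wait
    return wall_clock_wait * em_speedup  # convert to the em's subjective time

# A program that takes 10s today, 20% serial, for a 1000x em with 4x faster cores:
print(subjective_runtime(10, 0.2, 1000, 4))  # ~508 subjective seconds
# The same program felt like 10 seconds to a human; the serial fifth dominates.
```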

Thus over time serial software will become less valuable, relative to ems and parallel software. Em software engineers would come to rely less on software tools with a big serial component, and would instead emphasize parallel software, and tools that support that emphasis. Tools like automated type checking and garbage collection would tend to be done in parallel, or not at all. And if it ends up being too hard to write parallel software, then the value of software more generally may be reduced relative to the value of having ems do tasks without software assistance.

For tasks where parallel software and tools suffice, and where the software doesn’t need to interact with slower physical systems, em software engineers could be productive even when sped up to the top cheap speed. This would often make it feasible to avoid the costs of coordinating across engineers, by having a single engineer spend an entire subjective career creating a large software system. For example, an engineer who spent a subjective century at one million times human speed would be done in less than one objective hour. When such a short delay is acceptable, parallel software could be written by a single engineer taking a subjective lifetime.
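As a quick check of that arithmetic (only the speed-to-time conversion comes from the text; the helper below is just a sketch):

```python
# Quick check: a subjective century at one million times human speed
# takes well under one objective hour.

HOURS_PER_YEAR = 365.25 * 24  # ~8766 hours

def objective_hours(subjective_years, speedup):
    """Objective (wall-clock) hours for an em to experience the given subjective years."""
    return subjective_years * HOURS_PER_YEAR / speedup

print(objective_hours(100, 1_000_000))  # ~0.88 objective hours
```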

When software can be written quickly via very fast software engineers, product development could happen quickly, even when very large sums were spent. While today investors may spend most of their time tracking current software development projects, those who invest in em software projects of this sort might spend most of their time deciding when to initiate such a project. A software development race, with more than one team trying to get to market first, would only happen if the same sharp event triggered more than one development effort.

A single software engineer working for a lifetime on a project could still have troubles remembering software that he or she wrote decades before. Because of this, shorter-term copies of this engineer might help him or her to be more productive. For example, short-term em copies might search for and repair bugs, and end or retire once they have explained their work to the main copy. Short-term copies could also search among many possible designs for a module, and end or retire after reporting on their best design choice, to be re-implemented by the main copy. In addition, longer-term copies could be created to specialize in whole subsystems, and younger copies could be revived to continue the project when older copies reached the end of their productive lifetime. These approaches should allow single em software engineers to create far larger and more coherent software systems within a subjective lifetime.

Fast software engineers who focus on taking a lifetime to build a large software project, perhaps with the help of copies of themselves, would likely develop more personal and elaborate software styles and tools, and rely less on tools and approaches that help them to coordinate with other engineers with differing styles and uncertain quality. Such lone fast engineers would require local caches of relevant software libraries. When in distantly separated locations, such caches could get out of synch. Local copies of library software authors, available to update their contributions, might help reduce this problem. Out of synch libraries would increase the tendency toward divergent personal software styles.

When different parts of a project require different skills, a lone software engineer might have different young copies trained with different skills. Similarly, young copies could be trained in the subject areas where some software is to be applied, so that they can better understand what variations will have value there.

However, when a project requires different skills and expertise that is best matched to different temperaments and minds, then it may be worth paying extra costs of communication to allow different ems to work together on a project. In this case, such engineers would likely promote communication via more abstraction, modularity, and higher level languages and module interfaces. Such approaches also become more attractive when outsiders must test and validate software, to certify its appropriateness to customers. Enormous software systems could be created with modest sized teams working at the top cheap speed, with the assistance of many spurs. There may not be much need for even larger software teams.

The competition for higher status among ems would tend to encourage faster speeds than would otherwise be efficient. This tendency of fast ems to be high status would tend to raise the status of software engineers.

Em Software Engineering Bleg

Many software engineers read this blog, and I’d love to include a section on software engineering in my book on ems. But as my software engineering expertise is limited, I ask you, dear software engineer readers, for help.

“Ems” are future brain emulations. I’m writing a book on em social implications. Ems would substitute for human workers, and once ems were common ems would do almost all work, including software engineering. What I seek are reasonable guesses on the tools and patterns of work of em software engineers – how their tools and work patterns would differ from those today, and how those would vary with time and along some key dimensions.

Here are some reasonable premises to work from:

  1. Software would be a bigger part of the economy, and a bigger industry overall. So it could support more specialization and pay more fixed costs.
  2. Progress would have been made in the design of tools, languages, hardware, etc. But there’d still be far to go to automate all tasks; more income would still go to rent ems than to rent other software.
  3. After an initial transition where em wages fall greatly relative to human wages, em hardware costs would thereafter fall about as fast as non-em computer hardware costs. So the relative cost to rent ems and other computer hardware would stay about the same over time. This is in stark contrast to today when hardware costs fall fast relative to human wages.
  4. Hardware speed will not rise as fast as hardware costs fall. Thus the cost advantage of parallel software would continue to rise.
  5. Emulating brains is a much more parallel task than are most software tasks today.
  6. Ems would typically run about a thousand times human mind speed, but would vary over a wide range of speeds. Ems in software product development races would run much faster.
  7. It would be possible to save a copy of an em engineer who just wrote some software, a copy available to answer questions about it, or to modify it.
  8. Em software engineers could sketch out a software design, and then split into many temporary copies who each work on a different part of the design, and talk with each other to negotiate boundary issues. (I don’t assume one could merge the copies afterward.)
  9. Most ems are crammed into a few dense cities. Toward em city centers, computing hardware is more expensive, and maximum hardware speeds are lower. Away from city centers, there are longer communication delays.

Again, the key question is: how would em software tools and work patterns differ from today’s, and how would they vary with time, application, software engineer speed, and city location?

To give you an idea of the kind of conclusions one might be tempted to draw, here are some recent suggestions of François-René Rideau: Continue reading "Em Software Engineering Bleg" »

I Still Don’t Get Foom

Back in 2008 my ex-co-blogger Eliezer Yudkowsky and I discussed his “AI foom” concept, a discussion that we recently spun off into a book. I’ve heard for a while that Nick Bostrom was working on a book elaborating related ideas, and this week his Superintelligence was finally available to me to read, via Kindle. I’ve read it now, along with a few dozen reviews I’ve found online. Alas, only the two reviews on GoodReads even mention the big problem I have with one of his main premises, the same problem I’ve had with Yudkowsky’s views. Bostrom hardly mentions the issue in his 300 pages (he’s focused on control issues).

All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain. Continue reading "I Still Don’t Get Foom" »

Paul Carr Interviews Me

In this episode of the Wow! Signal Podcast. The topic is ems, starting about minute 35, after an interview with Heath Rezabek.

First Person Em Shooter

Jesse Galef:

It’s The Matrix meets Braid: a first-person shooter video game “where the time moves only when you move.” You can stare at the bullets streaking toward you as long as you like, but moving to dodge them causes the enemies and bullets to move forward in time as well. The game is called SUPERHOT … it struck me: this might be close to the experience of an emulated brain housed in a regular-sized body.

Jesse asked for my reaction. I said:

Even better would be to let the gamer change the rate at which game-time seems to move, to have a limited gamer-time budget to spend, and to give other non-human game characters a similar ability.

Jesse riffed:

It would be more consistent to add a “mental cycle” budget that ran down at a constant rate from the gamer’s external point of view. I don’t know about you, but I would buy that game! (Even if a multi-player mode would be impossible.)

Let’s consider this in more detail. There’d be two plausible scenarios:

Brain-In-Body Shooter – The em brain stays in a body. Here changing brain speeds would be accomplished by running the same processors faster or slower. In this case, assuming reversible computing hardware, the em brain computing cost for each subjective second would be linear in brain speed; the slower the world around you moved, the more you’d pay per gamer second. This would be an energy cost, coming out of the same energy budget you used to move your body, fire weapons, etc. There would also probably be a heat budget – you’d have some constant rate at which cooling fluids flow to remove heat, and the faster your mind ran the faster heat would accumulate to raise your temperature, and there’d be some limit to the temperature your hardware would tolerate. Being hot might make your body more visible to opponents. It would be hard for a video game to model the fact that if your body is destroyed, you don’t remember what happened since your last backup.

Brain-At-Server Shooter – The em brain runs on a server and tele-operates a body. Here switching brain speeds would usually be accomplished by moving the brain to run on more or fewer processors at the server. In this case, em brain computing cost would be directly proportional to subjective seconds, though there may be a switching cost to pay each time you changed mental speeds. This cost would come out of a financial budget used to pay the server. One might also allow server processors to temporarily speed up or slow down, as with the brain-in-body shooter. There’d be a serious risk of opponents breaking one’s net connection between body and brain, but when your body is destroyed at least you’d remember everything up to that point.
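For concreteness, here is a minimal sketch of the two cost rules just described. All class names, parameters, and numbers are my own illustrative assumptions; they only encode “energy cost linear in brain speed, against energy and heat budgets” versus “money cost proportional to subjective seconds, plus a fee for switching speeds.”

```python
# A minimal sketch of the two cost rules described above; names and numbers
# are illustrative assumptions, not a real game design.

class BrainInBody:
    """Em brain in the body: energy use and heat generation scale with speed."""
    def __init__(self, energy, cooling_rate, max_heat):
        self.energy = energy              # shared budget for thinking, moving, shooting
        self.heat = 0.0
        self.cooling_rate = cooling_rate  # heat removed per objective second
        self.max_heat = max_heat          # hardware temperature limit

    def think(self, subjective_seconds, speed):
        objective_seconds = subjective_seconds / speed
        self.energy -= subjective_seconds * speed   # cost per subjective second ~ speed
        self.heat += subjective_seconds * speed     # heat tracks energy dissipated
        self.heat = max(0.0, self.heat - self.cooling_rate * objective_seconds)
        if self.heat > self.max_heat:
            raise RuntimeError("hardware overheated")


class BrainAtServer:
    """Em brain on a server: money cost per subjective second, plus switch fees."""
    def __init__(self, money, rate_per_subjective_second, switch_fee):
        self.money = money
        self.rate = rate_per_subjective_second
        self.switch_fee = switch_fee
        self.speed = None

    def think(self, subjective_seconds, speed):
        if speed != self.speed:           # pay to add or drop server processors
            self.money -= self.switch_fee
            self.speed = speed
        self.money -= subjective_seconds * self.rate
```

Tuning the budgets and fees would determine how often it pays to slow the world down.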

To be able to switch back and forth between these modes, you’d need a very high bandwidth connection and time enough to use it lots, perhaps accomplished at a limited number of “hard line” connection points.

Not that I think shooter situations would be common in an em world. But if you want to make a realistic em shooter, this is how you’d do it.

Em Econ @ Yale Thursday

The Yale Technology & Ethics study group hosts about one talk a month on various futurist topics. Amazingly, I was their very first speaker when the group started in 2002. And this Thursday I’ll return to talk on the same subject:

The Age of Em: Social Implications of Brain Emulations

4:15-6:15pm, May 22, Yale ISPS, 77 Prospect St (corner of Prospect & Trumbull), Rm A002.

The three most disruptive transitions in history were the introduction of humans, farming, and industry. If another transition lies ahead, a good guess for its source is artificial intelligence in the form of whole brain emulations, or “ems,” sometime in the next century. I attempt a broad synthesis of standard academic consensus, including in business and social science, in order to outline a baseline scenario set modestly far into a post-em-transition world. I consider computer architecture, energy use, cooling infrastructure, mind speeds, body sizes, security strategies, virtual reality conventions, labor market organization, management focus, job training, career paths, wage competition, identity, retirement, life cycles, reproduction, mating, conversation habits, wealth inequality, city sizes, growth rates, coalition politics, governance, law, and war.

My ’02 talk was controversial; Thursday’s talk will likely be as well. All are welcome.

Added 28May: Audio, slides.

Factory+Files Future

The difficulty of practical interstellar travel is horrendously underestimated. … Known physics will never deposit living people on Earth-like planets around other stars. (more)

That was Donald Brownlee, who said something similar in our film. It occurs to me that skepticism about cryonics and skepticism about interstellar travel have similar roots, and that understanding this is useful. So let me explain.

Imagine that one tried to take a rock, say this fossil:

[Image: fossil]

and put it somewhere on Earth so that it could be found in a million years. Or that one tried to throw this fossil rock so that it would pass close to a particular distant star in a million years. Few would claim that doing so is impossible. Most would accept that these are possible, even if we require that the rock (plus casing) remain largely unchanged, i.e., retain its shape and maybe even most of its embedded DNA snips.

So skepticism about making people last a long time via cryonics, or about getting people to distant stars, is mainly about how people differ from rocks. People are fragile biological systems that slowly degrade with time, and that can be easily disrupted by environmental disturbances. This justifies some doubt about whether the human body can survive long, difficult paths in space-time.

So why am I more hopeful? Because there are (at least) two ways to ensure that a certain kind of object exists at a certain destination in space-time. One way is to have an object of that kind exist at a prior point in space-time, and then move it from that prior point to the destination. The other way is to build the desired object at the destination. That is, have a spec file that describes the object, and have a factory at the destination follow that spec file to create the object. One factory can make many objects, factories and files can be lighter and hardier than other objects, and you might even be able to make all the particular factories you need from one smaller hardier general factory. Thus it can be much easier to get one factory+files to a distant destination than to get many desired objects there.

Yes, today we don’t have factories that can make humans from a spec file. But if our society continues to grow in size and abilities, it should be able to do the next best thing: make an android emulation of a human from a spec file. And we should be able to make a spec file from a frozen brain plus a generic spec file.

If so, a frozen brain will serve as a temporary spec file, and we will be able to send many people to distant stars by sending just one hardy factory there, and then transmitting lots of spec files. The ability to encode a person in a spec file will make it far easier to send a person to a wide range of places and times in the universe.

See David Brin’s novel Existence for an elaboration on the throwing rocks with files theme.

Help Me Imagine

For my book on em econ, I want to figure out something unusual about human psychology. It has to do with how creatures with a human psychology would react to a situation that humans have not yet encountered. So I ask for your help, dear readers. I’m going to describe a hypothetical situation, and I want you to imagine that you are in this situation, and then tell me how you’d feel about it. OK, here goes.

Imagine that you live and work in a tight-knit community. Imagine a commune, or a charity or firm where most everyone who works there also mostly socializes with others there. That is, your lovers, spouses, friends, co-workers, tennis partners, etc. are mostly all from the same group of fifty to a few hundred people. For concreteness you might imagine that this community provides maid and janitorial services. Or maybe instead it services and repairs a certain kind of equipment (like cars, computers, or washing machines).

Imagine that this community was very successful about five years ago. So successful in fact that one hundred exact copies of this community were made then and spread around the world. They copied all the same people, work and play roles and relationships, even all the workspaces and homes. Never mind how this was done, it was done. And with everyone’s permission. Each of these hundred copies of the community has a slightly different context in terms of its customer needs or geographic constraints on activities. But assume that these differences are small and minor.

OK, now the key question I want you to consider is your attitude toward the other copies of your group. On one hand, you might want distance. That is, you might want to have nothing to do with those other copies. You don’t want to see or hear about them, and you want everyone else in your group to do likewise. “Na na na, I can’t hear you,” to anyone who mentions them.

On the other hand, you might be eager to maximize your chances to share insights and learn from the other groups. Not only might you want to hear about workplace innovations, you might want to see stats on what happens between the other copies of you and your spouse. For example, you may want to know how many of them are still together, and what their fights have been about.

In fact, when it was cheap you might even go out of your way to synchronize with other groups. By making the groups more similar, you may increase the relevance of their actions for you. So you might try to coordinate changes to work organization, or to who lives with whom. You might even coordinate what movies you see when, or what you eat for dinner each day.

Of course it is possible to be too similar. You might not learn anything additional from an exact copy doing exactly the same things, except maybe that your actions aren’t random. But it also seems possible to be too different, at least for the purpose of learning useful things from other groups.

Notice that in tightly synchronized groups, personal relations would tend to become more like group relations. For example, if just a few copies of you did something crazy like run away, all the copies of your spouse might worry that their partners may soon also do that crazy thing. Or imagine that you stayed at a party late, and your spouse didn’t mind initially. But if your spouse then learned that most other copies of him or her were mad at copies of you for doing this, he or she might be tempted to get mad too. The group of all the copies of you would thus move in the direction of having a group relation with all of the copies of him or her.

Now clearly the scenario where all the other groups ignore each other is more like the world you live in now, a world you are comfortable with. So I ask you to imagine not so much what you now feel comfortable with, but how comfortable people would feel if they grew up with this as normal. Imagine that people grew up in a culture where it was common to make copies of groups, and for each group to somewhat learn from and synchronize with the other groups.

In this case, just how much learning and synchronizing could people typically be comfortable with? What levels of synchronization would make for the most productive workers? The happiest people? How would this change with the number of copies of the group? Or with years since the group copies were made? After all, right after the initial copying the groups would all be very synchronized. Would they immediately try hard to differentiate their group from others, or would they instead try to maintain synchronization for as long as possible?

Computing Cost Floor Soon?

Anders Sandberg has posted a nice paper, Monte Carlo model of brain emulation development, wherein he develops a simple statistical model of when brain emulations [= “WBE”] would be feasible, if they will ever be feasible:

The cumulative probability gives 50% chance for WBE (if it ever arrives) before 2059, with the 25% percentile in 2047 and the 75% percentile in 2074. WBE before 2030 looks very unlikely and only 10% likely before 2040.

My main complaint is that Sandberg assumes a functional form for the cost of computing vs. time that requires this cost to soon fall to an absolute floor, below which it will never fall, relative to the funding ever available for a brain emulation project. His resulting distribution has costs approaching this floor by about 2040:

[Figure: Sandberg timing model]

As a result, Sandberg finds a big chance (how big he doesn’t say) that brain emulations will never be possible – for eons to follow it will always be cheaper to compute new mind states via floppy proteins in huge messy bio systems born in wombs, than to compute them via artificial devices made in factories.

That seems crazy implausible to me. I can see physical limits to physical parameters, and I can see the rate at which computing costs fall slowing down. But having the costs of artificial computing soon stop falling forever is much harder to see, especially with such costs remaining far higher than the costs of natural bio devices that seem pretty far from optimized. And having the amount of money available to fund a project never grow seems to say that economic growth will halt as well.
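To see how much work the floor assumption does, here is a toy Monte Carlo in the spirit of (but not reproducing) Sandberg’s model; every distribution and number below is a made-up illustration. Once costs have essentially reached their floor, whether emulation is ever feasible in such a model reduces to whether the assumed floor sits below the available budget.

```python
# Toy illustration (not Sandberg's actual model or parameters): with an
# absolute cost floor, P(WBE never feasible) is driven by P(floor > budget).

import random

def cost_at_year(year, start_cost, floor, halving_years=2.0, start_year=2014):
    """Cost per emulation falls exponentially but never below `floor`."""
    return max(start_cost * 0.5 ** ((year - start_year) / halving_years), floor)

def feasible_year(start_cost, floor, budget, horizon=2200):
    for year in range(2014, horizon):
        if cost_at_year(year, start_cost, floor) <= budget:
            return year
    return None  # never feasible: the floor sits above the budget

def monte_carlo(trials=20_000, budget=1e9):
    years, never = [], 0
    for _ in range(trials):
        start_cost = 10 ** random.uniform(12, 18)  # wide uncertainty, made up
        floor = 10 ** random.uniform(6, 12)        # the assumed absolute floor
        y = feasible_year(start_cost, floor, budget)
        if y is None:
            never += 1
        else:
            years.append(y)
    years.sort()
    return never / trials, years[len(years) // 2]  # P(never), median feasible year

print(monte_carlo())  # the "never" share comes almost entirely from the floor draw
```

Drop the floor, or let the budget grow with the economy, and the “never” share collapses; that is why the floor assumption does so much work.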

Even so, I applaud Sandberg for his efforts so far, and hope that his or others’ successor models will be more economically plausible. It is an important question, worthy of this and more attention.
