Em Software Results

Having requested your help earlier, I should tell you what it added up to. The following is an excerpt from my book draft, illustrated by this diagram:

[Diagram: SoftwareIntensity]

In our world, the cost of computing hardware has been falling rapidly for decades. This fall has forced most computer projects to be short term, so that products can be used before they are made obsolete. The increasing quantity of software purchased has also led to larger software projects, which involve more engineers. This has shifted the emphasis toward more communication and negotiation, and also more modularity and standardization in software styles.

The cost of hiring human software engineers has not fallen much in decades. The increasing divergence between the cost of engineers and the cost of hardware has also led to a decreased emphasis on raw performance, and an increased emphasis on tools and habits that can quickly generate correct, if inefficient, code. This has led to an increased emphasis on modularity, abstraction, and high-level operating systems and languages. High-level tools insulate engineers more from the details of hardware, and from distracting tasks like type checking and garbage collection. As a result, software is less efficient and less well-adapted to context, but more valuable overall. An increasing focus on niche products has also increased the emphasis on modularity and abstraction.
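As a minimal sketch of this kind of insulation, using Python purely as a stand-in for such a high-level language, the snippet below declares no types and never allocates or frees memory by hand; the runtime's type checks and garbage collector handle those details, at some cost in efficiency:

```python
# Minimal sketch: a high-level language hides types and memory management.
# No type declarations, no manual allocation or freeing; the runtime checks
# types as values are used, and the garbage collector reclaims unused objects.

def word_lengths(text):
    words = text.split()            # list allocated automatically
    return [len(w) for w in words]  # intermediates reclaimed later by the GC

print(word_lengths("high level tools trade efficiency for engineer time"))
```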

Em software engineers would be selected for very high productivity, and use the tools and styles preferred by the highest-productivity engineers. There would be little interest in tools and methods specialized to be useful “for dummies.” Since em computers would tend to be more reversible and error-prone, em software would focus more on supporting reversibility and tolerating errors as well. Because the em economy would be larger, its software industry would be larger as well, supporting more specialization.

The transition to an em economy would greatly lower wages, thus inducing a big one-time shift back toward an emphasis on raw context-dependent performance, relative to abstraction and easier modifiability. The move away from niche products would add to this tendency, as would the ability to save copies of the engineer who just wrote the software, to help later with modifying it. On the other hand, a move toward larger software projects could favor more abstraction and modularity.

After the em transition, the cost of em hardware would fall at about the same speed as the cost of other computer hardware. Because of this, the tradeoff between performance and other considerations would change much less as the cost of hardware fell. This should greatly extend the useful lifetime of programming languages, tools, and habits matched to particular performance tradeoff choices.

After an initial period of large rapid gains, the software and hardware designs for implementing brain emulations would probably reach diminishing returns, after which there would only be minor improvements. In contrast, non-em software would probably improve about as fast as computer hardware improves, since algorithm gains in many areas of computer science have for many decades typically remained close to hardware gains. Thus after ems appear, em software engineering and other computer-based work would slowly get more tool-intensive, with a larger fraction of value added by tools. However, for non-computer-based tools (e.g., bulldozers), the intensity of use and the fraction of value added by such tools would probably fall, since those tools would probably improve less quickly than em hardware.

For over a decade now, the speed of fast computer processors has increased at a much lower rate than the cost of computer hardware has fallen. We expect this trend to continue long into the future. In contrast, the cost of em hardware would fall with the cost of computer hardware overall, because the emulation of brains is a very parallel task. Thus ems would see an increasing sluggishness of software that has a large serial component, i.e., software that requires many steps to be taken one after the other, relative to more parallel software. This sluggishness would directly reduce the value of such software, and also make such software harder to write.
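To see why a large serial component matters so much, here is a rough Amdahl's-law sketch in Python; the serial fractions and processor count below are made-up numbers, chosen only for illustration:

```python
# Rough sketch of Amdahl's law: the speedup from N parallel processors is
# capped by the fraction of the work that must run serially.

def speedup(serial_fraction, n_processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

for s in (0.01, 0.1, 0.5):  # hypothetical serial fractions
    print(f"serial fraction {s:.2f}: "
          f"speedup on 1,000,000 cores ~ {speedup(s, 1_000_000):.1f}x")

# Even a 1% serial component caps the speedup near 100x, so software with a
# large serial part looks increasingly sluggish next to fast, parallel ems.
```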

Thus over time serial software will become less valuable, relative to ems and parallel software. Em software engineers would come to rely less on software tools with a big serial component, and would instead emphasize parallel software, and tools that support that emphasis. Tools like automated type checking and garbage collection would tend to be done in parallel, or not at all. And if it ends up being too hard to write parallel software, then the value of software more generally may be reduced relative to the value of having ems do tasks without software assistance.

For tasks where parallel software and tools suffice, and where the software doesn’t need to interact with slower physical systems, em software engineers could be productive even when sped up to the top cheap speed. This would often make it feasible to avoid the costs of coordinating across engineers, by having a single engineer spend an entire subjective career creating a large software system. For example, an engineer who spent a subjective century at one million times human speed would be done in less than one objective hour. When such a short delay is acceptable, parallel software could be written by a single engineer taking a subjective lifetime.
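For concreteness, a quick back-of-the-envelope check of that claim, using the same speedup factor as in the text:

```python
# Back-of-the-envelope check: a subjective century at 1,000,000x human speed.
subjective_years = 100
speedup_factor = 1_000_000
objective_hours = subjective_years * 365.25 * 24 / speedup_factor
print(f"{objective_hours:.2f} objective hours")  # about 0.88, under one hour
```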

When software can be written quickly via very fast software engineers, product development could happen quickly, even when very large sums were spent. While today investors may spend most of their time tracking current software development projects, those who invest in em software projects of this sort might spend most of their time deciding when to initiate such a project. A software development race, with more than one team trying to get to market first, would only happen if the same sharp event triggered more than one development effort.

A single software engineer working for a lifetime on a project could still have trouble remembering software that he or she wrote decades before. Because of this, shorter-term copies of this engineer might help him or her to be more productive. For example, short-term em copies might search for and repair bugs, and end or retire once they have explained their work to the main copy. Short-term copies could also search among many possible designs for a module, and end or retire after reporting on their best design choice, to be re-implemented by the main copy. In addition, longer-term copies could be created to specialize in whole subsystems, and younger copies could be revived to continue the project when older copies reached the end of their productive lifetime. These approaches should allow single em software engineers to create far larger and more coherent software systems within a subjective lifetime.

Fast software engineers who focus on taking a lifetime to build a large software project, perhaps with the help of copies of themselves, would likely develop more personal and elaborate software styles and tools, and rely less on tools and approaches that help them to coordinate with other engineers with differing styles and uncertain quality. Such lone fast engineers would require local caches of relevant software libraries. When engineers work at distantly separated locations, such caches could get out of sync. Local copies of library software authors, available to update their contributions, might help reduce this problem. Out-of-sync libraries would increase the tendency toward divergent personal software styles.

When different parts of a project require different skills, a lone software engineer might have different young copies trained with different skills. Similarly, young copies could be trained in the subject areas where some software is to be applied, so that they can better understand what variations will have value there.

However, when a project requires different skills and expertise that are best matched to different temperaments and minds, then it may be worth paying the extra costs of communication to allow different ems to work together on a project. In this case, such engineers would likely promote communication via more abstraction, modularity, and higher-level languages and module interfaces. Such approaches also become more attractive when outsiders must test and validate software, to certify its appropriateness to customers. Enormous software systems could be created with modest-sized teams working at the top cheap speed, with the assistance of many spurs. There may not be much need for even larger software teams.

Competition for higher status among ems would tend to encourage faster speeds than would otherwise be efficient. This tendency of fast ems to be high status would in turn tend to raise the status of software engineers.

  • Sigivald

    “This fall has forced most computer projects to be short term, so that products can be used before they are made obsolete.”

    Forced…?

    I’m a programmer for a living, and I can’t say I’ve ever even heard of pressure to release “because otherwise better hardware will obsolete the software”.

    What “computer projects” do you envision where hardware improvement forced the project to never be undertaken?

    (What kind of computing project would be undermined rather than aided by improved* iron under it?)

    Sure, your “ems” could work for lifetimes on Some Project.

    But what project even makes sense for that, and how in God’s name could you even begin to test it, or even integrate all the work?

    * Now, if the improvement was also a change in kind, this can happen, for sure. Code written for serial execution will be slowed down by a change in hardware to massive parallelism with slower individual processors, even if overall power expands by an order of magnitude or two.

    • Dave Lindbergh

      I think what Robin means is that, because everyone expects hardware to get dramatically more capable, nobody (intentionally) invests in software projects that take many decades to complete.

      Any given hardware capability forces (that is, requires for even semi-optimal outcomes) software choices. If you had hardware that was 1,000,000x faster, you’d do the software in an entirely different way. Maybe using tools that rely on that fast hardware – tools that you can’t run now.

      So it would be foolish to build software now for that 1Mx machine that you don’t have yet – you don’t have the tools you’d want, and you can’t even test the code if you did manage to write it (because your hardware is 1Mx too slow).

      • Sigivald

        Okay, that makes more sense.

    • Ken Arromdee

      “What kind of computing project would be undermined rather than aided by improved* iron under it?”

      The software may need to be rewritten to compete against other programs that use the improved hardware. If the rewriting process is too slow, the hardware could improve fast enough that the software just keeps lagging farther and farther behind.

      I hear that Duke Nukem Forever had a version of this problem.

      • David Condon

        Duke Nukem Forever was designed on at least 3 different engines. Each time, this forced the developers to redo the majority of their work.

      • Sigivald

        For the most part, at least historically, “new hardware” hasn’t required new software.

        That’s why we write in high-level languages, not machine code.

        (You can take advantage of new features with new software, sometimes. But often it’s just automatic, or the changes are far less than a complete re-write.)

  • matt6666

    One thing about EMs I don’t buy is that they will be able to run faster than human brains. I thought, and I might be wrong, that the tech behind EMs was scanning human brains and emulating them, with no reverse engineering needed. It’s not clear to me that you can make a system emulated in that way run faster. If you don’t understand the system, you can’t understand the relationship between the timing of different elements. Thus if the relationship isn’t linear, you can’t just run the simulation faster. The problem with a black box is that it’s a black box. So EMs can’t run faster than people, and they are subject to the same constraints that other programmers are, i.e. adding more EMs to a project doesn’t make it quicker to complete.

    • Dave Lindbergh

      The relationship between elements doesn’t matter if you accelerate them all by the same amount.

      • http://juridicalcoherence.blogspot.com/ Stephen Diamond

        To accelerate all the elements by the same amount, you have to individuate the elements. The question seems to be whether you can read off the real elements from scan-based emulations.

      • M_1

        That also assumes everything continues to work the same way when accelerated. That isn’t necessarily true. Trivially speaking, race conditions could emerge, for example.

    • adrianratnapala

      It depends on the accuracy needed. You can certainly simulate neurons faster than real neurons go (although wiring up enough hardware might be a challenge, it’s not insurmountable). Even if you want to solve some analogue partial differential equations for how signals and chemicals travel, that can probably be done faster than real time. Detailed biochemistry, not so much, I would guess.

    • M_1

      The concept of running uploads at higher speeds has always struck me as intensely naive. Even if we made uploads and higher-than-normal speeds work right, I would expect this to cause immediate loss of sanity. I suspect folks like Hanson imagine this leading to superpower-like levels of introspection while maintaining relatively normal interactions with the outside world, but it’s very difficult to imagine that kind of distinction could be drawn in practice.

      (Besides, uploading is barking up the wrong tree for AGI purposes. They aren’t artificial intelligence any more than a Jarvik heart is artificial life. Uploads are AB: artificial brains. It might be a way to live longer, but that’s about it… and I personally prefer longevity solutions that let me continue receiving the meaty inputs I so heartily enjoy.)

  • Alex Godofsky

    > There would be little interest in tools and methods specialized to be useful “for dummies.”

    To a first approximation these tools and methods do not exist.

    Software design patterns are not used to prevent bad programmers from making mistakes. Software design patterns are used to prevent *all* programmers from making mistakes.

    Writing large pieces of software involves computational problems that are genuinely difficult. Tools like abstraction aren’t just crutches for non-geniuses; they reduce the complexity of the problem for all programmers. Thus even a community of brilliant em programmers will continue to rely on them. Programmer ability gains will be largely allocated towards making more powerful or complicated software rather than towards reducing reliance on design patterns.

    I also question the implicit assumption that ems will make us better at writing software faster than they make us better at inventing faster hardware.

    • http://overcomingbias.com RobinHanson

      I didn’t say that all software tools are for dummies. And the claim you read as being about hardware vs. software is that em-running software will improve less quickly than other software. Surely many tools are designed with an eye to the distribution of intelligence in the population of users.

      • Alex Godofsky

        My claim is that the vast majority of effort in the field of software development is directed at problems that are not unique to bad programmers.

        I claim that instead, the big problems in software development (and the ones the most effort is spent on ameliorating) are the sort of problem that affects good and bad programmers ~equally. Tools that mitigate these problems therefore are of ~equal benefit to good and bad programmers.

        Consequently, changes in the distribution of programmer skill do not have strong effects on the optimal allocation of effort towards tools, nor even the allocation among the different types of tools.

  • solipsist

    Nit pick:

    > Tools like **automated type checking** and garbage collection would tend to be done in parallel, or not at all.

    This seems weird to point out. Type checking is done _when the programmer writes the code_, not when the program is executing. Even if it were completely serial, compiling even a very large program is going to be billions of times less computationally intensive than simulating a human brain. Google recompiles their _entire codebase_, generating hundreds of thousands of artifacts, every few seconds.

    > When software can be written quickly via very fast software engineers, product development could happen quickly, even when very large sums were spent.

    Why is software development sped up relative to changes in business conditions? Are you arguing that the business world is more serial than the software development world [and therefore harder to speed up](http://en.wikipedia.org/wiki/Amdahl's_law)? Why is the business world harder to parallelize?

    • http://overcomingbias.com RobinHanson

      There are languages in which some type checking is done at run time.

  • david condon

    After reading it, I would recommend reducing the amount of technical jargon if this is meant for popular consumption, even if some details are lost in translation. Examples:
    Serial = network speeds
    Parallel = processing power
    High-level abstraction = user-friendly
    Tools = programming software

  • John_Maxwell_IV

    “The increasing quantity of software purchased has also led to larger software projects, which involve more engineers.”

    Do you have a citation for this claim? At first glance it sounds wrong to me. I would say that better hardware has led to programming environments that favor programmer productivity over machine efficiency (e.g. Python over C). This combined with the proliferation of open-source software libraries has made it easier than ever for a small team to make a big impact. For example, WhatsApp had a team of ~50 people and was acquired for billions of dollars. And Instagram had a double-digit number of employees and was acquired for a nine-digit number of dollars. Silicon Valley at its best is a small, egalitarian team of highly intelligent engineers getting to know their customers thoroughly and doing careful, focused work to serve their needs.

    • David Condon

      By historical standards, 50 is gigantic. In the 1950s, I don’t think there were even 50 full-time programmers in the entire country. By the 70s, a typical programming team still consisted of one or two people. Team sizes in the single digits were standard practice as recently as the early 90s.

  • Greg Perkins

    Have you thought/written about what kind of socio-computational infrastructure would be practically required to support the em economy?

    > Thus over time serial software will become less valuable, relative to ems and parallel software.

    Ems are just a special category of parallel software, right? What kind of software are current human programmers?