Tag Archives: Software

Big Software Firm Bleg

I haven’t yet posted much on AI as Software. But now I’ll say more, as I want to ask a question.

Someday ems may replace humans in most jobs, and my first book talks about how that might change many things. But whether or not ems are the first kind of software to replace humans wholesale in jobs, eventually non-em software may plausibly do this. Such software would replace ems if ems came first, but if not then such software would directly replace humans.

Many people suggest, implicitly or explicitly, that non-em software that takes over most jobs will differ in big ways from the software that we’ve seen over the last seventy years. But they are rarely clear on what exact differences they foresee. So the plan of my project is to just assume our past software experience is a good guide to future software. That is, to predict the future, one may 1) assume current distributions of software features will continue, or 2) project past feature trends into future changes, or 3) combine past software feature correlations with other ways we expect the future to differ.
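To make approach 2 concrete, here is a minimal sketch in Python of projecting a past feature trend forward. The feature and every number here are hypothetical placeholders, not real data:

```python
import numpy as np

# Hypothetical stand-in feature: typical flagship codebase size,
# in millions of lines of code, observed by decade.
years = np.array([1980, 1990, 2000, 2010, 2020])
mloc = np.array([0.1, 1.0, 8.0, 40.0, 100.0])

# Approach 2: fit a log-linear trend to the past, then extrapolate.
slope, intercept = np.polyfit(years, np.log(mloc), 1)
for year in (2040, 2060):
    print(year, round(np.exp(intercept + slope * year), 1))
```

Approach 1 would instead hold the current distribution of features fixed, and approach 3 would adjust such extrapolations using correlations with other ways we expect the future to differ.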

This effort may encourage others to better clarify how they think future software will differ, and help us to estimate the consequences of such assumptions. It may also help us to more directly understand a software-dominated future, if there are many ways that future software won’t greatly change.

Today, each industry makes a kind of stuff (product or service) we want, or a kind of stuff that helps other industries to make stuff. But while such industries are often dominated by a small number of firms, the economy as a whole is not so dominated. This is mainly because there are so many different industries, and firms suffer when they try to participate in too many industries. Will this lack of concentration continue into a software-dominated future?

Today each industry gets a lot of help from humans, and each industry helps to train its humans to better help that industry. In addition, a few special industries, such as schooling and parenting, change humans in more general ways, to help better in a wide range of industries. In a software-dominated future, humans are replaced by software, and the schooling and parenting industries are replaced by a general software industry. Industry-independent development of software would happen in the general software industry, while specific adaptations for particular industries would happen within those industries.

If so, the new degree of producer concentration depends on two key factors: what fraction of software development is general as opposed to industry-specific, and how concentrated this general software industry is. Regarding this second factor, it is noteworthy that we now see some pretty big players in the software industry, such as Google, Apple, and Microsoft. And so a key question is the source of this concentration. That is, what exactly are the key advantages of big firms in today’s software market?
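To see how these two factors might combine, consider a toy calculation, with every number invented. If a fraction g of software development is general, done by one shared industry with Herfindahl index H_g, and the rest splits evenly across N industries with disjoint firms and within-industry index H_s, then the sector-wide index is roughly g^2 * H_g + (1-g)^2 * H_s / N:

```python
def sector_hhi(g, hhi_general, hhi_specific, n_industries):
    """Herfindahl index of the whole software sector, assuming a
    fraction g of development is general (one shared industry) and
    the rest splits evenly across n_industries of disjoint firms."""
    return g**2 * hhi_general + (1 - g)**2 * hhi_specific / n_industries

# Invented numbers: a concentrated general industry, many niche ones.
print(sector_hhi(g=0.5, hhi_general=0.3, hhi_specific=0.2, n_industries=100))
```

On these invented numbers the sector as a whole looks unconcentrated; overall concentration gets large only when both the general fraction and the general industry’s concentration are large.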

There are many possibilities, including patent pools and network effects among customers of key products. Another possibility, however, is one where I expect many of my readers to have relevant personal experience: scale economies in software production. Hence this bleg – a blog post asking a question.

If you are an experienced software professional who has worked both at a big software firm and also in other places, my key question for you is: by how much was your productive efficiency as a software developer increased (or decreased) due to working at a big software firm? That is, how much more could you get done there that wasn’t attributable to having a bigger budget to do more, or to paying more for better people, tools, or resources? Instead, I’m looking for the net increase (or decrease) in your output due to software tools, resources, security, oversight, rules, or collaborators that are more feasible and hence more common at larger firms. Ideally your answer will be in the form of a percentage, such as “I seem to be 10% more productive working at a big software firm.”

Added 3:45p: I meant “productivity” in the economic sense of the inputs required to produce a given output, holding constant the specific kind of output produced. So this kind of productivity should ignore the number of users of the software, and the revenue gained per user. But if big vs small firms tend to make different kinds of software, which have different costs to make, those differences should be taken into account. For example, one should correct for needing more man-hours to add a line of code in a larger system, or in a more secure or reliable system.
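As a worked example of the correction I have in mind, with all numbers invented: suppose you shipped 800 units of output a month at the big firm versus 1000 elsewhere, but each big-firm unit took 1.5 times the effort due to codebase size and reliability demands. A minimal sketch:

```python
def corrected_ratio(big_output, small_output, big_cost_multiplier):
    """Big-firm vs small-firm output per developer, after crediting
    the big firm for output that costs more effort per unit."""
    return big_output * big_cost_multiplier / small_output

ratio = corrected_ratio(big_output=800, small_output=1000,
                        big_cost_multiplier=1.5)
print(f"{ratio - 1:+.0%}")  # +20%: "20% more productive at the big firm"
```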

Ems Give Longer Human Legacy

Imagine that you were an older software engineer at Microsoft in 1990. If your goal was to have the most influence on software used in 2016, you should have hoped that Microsoft would continue to dominate computer operating systems and related software frameworks. Or at least do so for longer and stronger. Your software contributions were more compatible with Microsoft frameworks than with frameworks introduced by firms like Apple and Google. In scenarios where those other frameworks became more popular faster, more systems would be redesigned more from scratch, and your design choices would be more often replaced by others.

In contrast, if you were a young software engineer with the same goal, then you should instead have hoped that new frameworks would replace Microsoft frameworks faster. You could more easily jump to those new frameworks, and build new systems matched to them. Then it would be your design choices that would last longer into the future of software. If you were not a software engineer in 1990, but just cared about the overall quality of software in 2016, your preferences would have been less clear. You’d just want efficient, effective software, and so want frameworks to be replaced at the optimal rate, neither too fast nor too slow.

This seems a general pattern. When the goal is distant future influence, those more tied to old frameworks want them to continue, while those who can more influence new frameworks prefer old ones be replaced. Those who just want useful frameworks want something in between.

Consider now two overall frameworks for future intelligence: ordinary software versus human minds. At the moment human minds, and other systems adapted to them, make up by far the more powerful overall framework. The human mind framework contains the most powerful known toolkit by far for dealing with a wide variety of important computing tasks, both technical and social. But for many decades the world has been slowly accumulating content in a rather different software framework, one that is run on computers that we make in factories. This new framework has been improving more rapidly; while sometimes software has replaced humans on job tasks, the reverse almost never happens.

One possible scenario for the future is that this new software framework continues to improve until it eventually replaces pretty much all humans on jobs. (Ordinary software of course contains many kinds of parts, and the relative emphasis of different kinds of parts could change.) Along the way software engineers will have tried to include as many as possible of the innovations they understand from human brains and attached systems. But that process will be limited by their limited understanding of the brain. And when better understanding finally arrives, perhaps so much will have been invested in very different approaches that it won’t be worth trying to transfer approaches from brains.

A second scenario for the future, as I outline in my book, is that brain emulations (ems) become feasible well before ordinary software displaces most humans on jobs. Humans are then immediately replaced by ems on almost all jobs. Because ems are more cost-effective than humans, for any given level of ordinary software quality, efficiency-oriented system designers will rely more on ems and less on ordinary software, compared to what they would have done in the first scenario. Because of this, the evolution of wider systems, such as for communication, work, trade, war, or politics, will be more matched to humans for longer than they would have under the first scenario.

In addition, ems would seek ways to usefully take apart and modify brain emulations, as well as ways to write better ordinary software. They would be more successful at this than humans would have been had ems not arrived. This would allow human-mind-like computational features, design elements, and standards to have more influence on ordinary software design, and on future software that combines elements of both approaches. Software in the long run would inherit more from human minds. And so would the larger social systems matched to future software.

If you are a typical human today who wants things like you to persist, this second scenario seems better for you, as the future looks more like you for “longer”, i.e., through more doublings of the world economy, and more degrees of change of various technologies. However, I note that many young software engineers and their friends today seem quite enthusiastic about scenarios where artificial software quickly displaces all human workers very soon. They seem to presume that this will give them a larger percentage influence on the future, and prefer that outcome.

Of course I’ve only been talking about one channel by which we today might influence the distant future. You might also hope to influence the distant future by saving resources to be spent later by yourself or by an organization to which you bequeath instructions. Or you might hope to strengthen institutions of global governance, and somehow push them into an equilibrium where they are able to and want to continue to strongly regulate software and the world in order to preserve the things that you value.

However, historically related savings and governance processes have had rather small influences on distant futures. For billions of years, the main source of long distance influence has been attempts by biological creatures to ensure that the immediate future had more creatures very much like themselves. And for many thousands of years of human cultural evolution, there has also been a strong process whereby local cultural practices worked to ensure that the immediate future had more similar cultural practices. In contrast, individual creatures and organizations have been short-lived, and global governance has mostly been nonexistent.

Thus it seems to me that if you want the distant future to have more things like typical humans for longer, you should prefer a scenario where ems appear before ordinary software displaces almost all humans on jobs.

Added 15Dec: In this book chapter I expand a bit on this post.

AI As Software Grant

While I’ve been part of grants before, and had research support, I’ve never had support for my futurist work, including the years I spent writing Age of Em. That now changes:

The Open Philanthropy Project awarded a grant of $264,525 over three years to Robin Hanson (Associate Professor of Economics, George Mason University) to analyze potential scenarios in the future development of artificial intelligence (AI). Professor Hanson plans to focus on scenarios in which AI is developed through the steady accumulation of individual pieces of software and leads to a “multipolar” outcome. .. This grant falls within our work on potential risks from advanced artificial intelligence, one of our focus areas within global catastrophic risks. (more)

Who is Open Philanthropy? From their summary:

Good Ventures is a philanthropic foundation whose mission is to help humanity thrive. Good Ventures was created by Dustin Moskovitz (co-founder of Facebook and Asana) and Cari Tuna, who have pledged to give the majority of their wealth to charity. .. GiveWell is a nonprofit that finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give. .. The Open Philanthropy Project is a collaboration between Good Ventures and GiveWell in which we identify outstanding giving opportunities, make grants, follow the results, and publish our findings.

A key paragraph from my proposal:

Robin Hanson proposes to take three years to conduct a broad positive analysis of the multipolar scenario wherein AI results from relatively steady accumulation of software tools. That is, he proposes to assume that human level AI will result mainly from the continued accumulation of software tools and packages, with distributions of cost and value correlations similar to those seen so far in software practice, in an environment where no one actor dominates the process of creating or fielding such software. He will attempt a mostly positive analysis of the social consequences of these assumptions, both during and after a transition to a world dominated by AI. While this is hardly the universe of all desired analyses, it does seem to cover a non-trivial fraction of interesting cases.

I and they see value in such an analysis even if AI software ends up differing systematically from the software we’ve seen so far:

While we do not believe that the class of scenarios that Professor Hanson will be analyzing is necessarily the most likely way for future AI development to play out, we expect his research to contribute a significant amount of useful data collection and analysis that might be valuable to our thinking about AI more generally, as well as provide a model for other people to follow when performing similar analyses of other AI scenarios of interest.

My idea is to extract from our decades of experience with software a more detailed description of the basic economics of software production and use. To distinguish, as time allows, many different kinds of inputs to production, styles of production, parts of produced products, and types of uses. And then to sketch out different rough “production functions” appropriate to different cases. That is, to begin to translate basic software engineering insight into economics language.
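For example, one such rough production function might be Cobb-Douglas in developer labor, tool quality, and the stock of reusable code. This is only a sketch of the kind of object I have in mind; the exponents are placeholders, not estimates:

```python
def software_output(labor, tools, code_stock, A=1.0, a=0.5, b=0.3, c=0.2):
    """Illustrative Cobb-Douglas production function for one kind of
    software activity; different activities would get different
    exponents, or different functional forms entirely."""
    return A * labor**a * tools**b * code_stock**c

# With a = 0.5, doubling labor alone raises output by about 41%.
print(software_output(labor=10, tools=4, code_stock=100))
print(software_output(labor=20, tools=4, code_stock=100))
```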

The simple assumption that software doesn’t fundamentally change in the future is the baseline scenario, to be fed into standard economic models to see what happens when such a more richly described software sector slowly grows to take over the economy. But a richer more detailed description of software economics can also give people a vocabulary for describing their alternative hypotheses about how software will change. And then this analysis framework can be adjusted to explore such alternative hypotheses.

So right from the start I’d like to offer this challenge:

Do you believe that the software that will let machines eventually do pretty much all jobs better than humans (or ems) will differ in foreseeable systematic ways from the software we have seen in the last seventy years of software practice? If so, please express your difference hypothesis as clearly as possible in terminology that would be understandable and familiar to software engineers and/or economists.

I will try to stretch the economic descriptions of software that I develop in the direction of encompassing the most common such hypotheses I find.

Why Does Software Rot?

Almost a year ago computer scientist Daniel Lemire wrote a post critical of a hypothesis I’ve favored, one I’ve used in Age of Em. On the “better late than never” principle, I’ll finally respond now. The hypothesis:

Systems that adapt to contexts tend to get more fragile and harder to readapt to new contexts.

In a 2012 post I said we see this tendency in human brains, in animal brains, in software, in product design, in species, and in individual cells. There is a related academic literature on design feature entrenchment (e.g., here, here, here, here).

Lemire’s 2015 response:

I am arguing back that the open source framework running the Internet, and serving as a foundation for companies like Google and Apple, is a counterexample. Apache, the most important web server software today, is an old piece of technology whose name is a play on words (“a patched server”) indicating that it has been massively patched. The Linux kernel itself runs much of the Internet, and has served as the basis for the Android kernel. It has been heavily updated… Linus Torvalds wrote the original Linux kernel as a tool to run Unix on 386 PCs… Modern-day Linux is thousands of times more flexible.

So we have evolved from writing everything from scratch (in the seventies) to massively reusing and updating pre-existing software. And yet, the software industry is the most flexible, fast-growing industry on the planet. .. If every start-up had to build its own database engine, its own web server… it would still cost millions of dollars to do anything. And that is exactly what would happen if old software grew inflexible: to apply Apache or MySQL to the need of your start-up, you would need to rewrite them first… a costly endeavour. ..

Oracle was not built from the ground up to run on thousands of servers in a cloud environment. So some companies are replacing Oracle with more recent alternatives. But they are not doing so because Oracle has gotten worse, or that Oracle engineers cannot keep up. When I program in Java, I use an API that dates back to 1998 if not earlier. It has been repeatedly updated and it has become more flexible as a result…

Newer programming languages are often interesting, but they are typically less flexible at first than older languages. Everything else being equal, older languages perform better and are faster. They improve over time. .. Just like writers of non-fiction still manage to write large volumes without ending with an incoherent mass, software programmers have learned to cope with very large and very complex endeavours. ..

Programmers, especially young programmers, often prefer to start from scratch. .. In part because it is much more fun to write code than to read code, while both are equally hard. That taste for fresh code is not an indication that starting from scratch is a good habit. Quite the opposite! ..

“Technical debt” .. is a scenario where the programmers have quickly adapted to new circumstances, but without solid testing, documentation and design. The software is known to be flawed and difficult, but it is not updated because it “works”. Brains do experience this same effect.

I have long relied on a distinction between architecture and content (see here, here, here, here, here). Content is the part of a system that is easy to add to or change without changing the rest of the system; architecture is the other part. (Yes, there is a spectrum.) The more content that is fitted to an architecture, and the bigger that architecture, the harder it becomes to change the architecture.

Lemire’s examples seem to be of systems which last long and grow large because they don’t change their core architecture. When an architecture is well enough matched to a stable problem, systems built on it can last long, and grow large, because it is too much trouble to start a competing system from scratch. But when different approaches or environments need different architectures, then after a system grows large enough, one is mostly forced to start over from scratch to use a different enough approach, or to function in a different enough environment.
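A toy model of that crossover, with all parameters invented: let the cost of reworking an architecture grow superlinearly with the content already fitted to it, since each change interacts with that content, while a from-scratch rewrite just rebuilds each piece of content once on a fresh architecture:

```python
def adapt_cost(content, k=0.1):
    """Cost to rework the old architecture for a new approach or
    environment; grows superlinearly because changes interact with
    the content already fitted to that architecture."""
    return k * content**1.5

def rewrite_cost(content, k=1.0):
    """Cost to start from scratch: rebuild each piece of content
    once, on an architecture chosen to fit the new environment."""
    return k * content

# Below a crossover (here, 100 units of content) adapting is cheaper;
# above it, a from-scratch rewrite wins.
for content in (10, 100, 400):
    print(content, adapt_cost(content), rewrite_cost(content))
```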

This is probably why “Some companies are replacing Oracle with more recent alternatives.” Oracle’s architecture isn’t well enough matched. I just can’t buy Lemire’s suggestion that the only reason people ever start new software systems from scratch today is the arrogance and self-indulgence of youth. It happens far too often to explain that way.

Perfect Bits

Did you know that your phone, pad, and laptop are all “computers” wherein all relevant info is stored in “bits”? And did you further know that you can get tools to let you very easily change almost any of those bits? Since you can change most any bits in these devices, you need never tolerate any imperfections in anything that results from those bits. Thus you should never see any disagreeable screen or menu or feature or outcome in any app in any of those systems for more than a short moment. Same for books, music, and movies. After all, as soon as you notice any imperfection, why you’ll open your tool, change the bad bits, and abracadabra, the system will be perfect again. Right?

In Mind Uploading Will Replace the Need for Religion, “Award-winning #1 Bestseller Philosophy & Sci-Fi Visionary” and Transhumanist Party presidential candidate Zoltan Istvan applies the same penetrating insight to future ems:

Being able to upload our entire minds into a computer is probably just 25-35 years off. … As people begin uploading themselves, they’ll also be hacking and writing improved code for their new digital selves. … This influx of better code will eliminate … stupidity and social evil. …

In the future, we may all have avatars—perfectly uploaded versions of ourselves … [who] will help guide us and not allow us to do dumb or terrible things. … Someone trustworthy will always be in our head, advising us of the best path to take. …

This is why the future will be far better than it is now. In the coming digital world, we may be perfect, or very close to it. Expect a much more utopian society for whatever social structures end up existing in virtual reality and cyberspace. But also expect the real world to radically improve. Expect the drug user to have their addictions corrected or overcome. Expect the domestic abuser to have their violence and drive for power diminished. Expect the mentally depressed to become happy. And finally, expect the need for religion to disappear as a real-life god—our near perfect moral selves—symbiotically commune with us. (more)

Well, there are a few complications. Humans don’t always take advice they are given. And since brains were designed by evolution, we expect their code to be harder to read and usefully change than the device app code written by humans. But surely those are only small bumps on our short 35-year road to utopia. Right?

Em Software Engineering Bleg

Many software engineers read this blog, and I’d love to include a section on software engineering in my book on ems. But as my software engineering expertise is limited, I ask you, dear software engineer readers, for help.

“Ems” are future brain emulations. I’m writing a book on em social implications. Ems would substitute for human workers, and once ems were common ems would do almost all work, including software engineering. What I seek are reasonable guesses on the tools and patterns of work of em software engineers – how their tools and work patterns would differ from those today, and how those would vary with time and along some key dimensions.

Here are some reasonable premises to work from:

  1. Software would be a bigger part of the economy, and a bigger industry overall. So it could support more specialization and pay more fixed costs.
  2. Progress would have been made in the design of tools, languages, hardware, etc. But there’d still be far to go to automate all tasks; more income would still go to rent ems than to rent other software.
  3. After an initial transition where em wages fall greatly relative to human wages, em hardware costs would thereafter fall about as fast as non-em computer hardware costs. So the relative cost to rent ems and other computer hardware would stay about the same over time. This is in stark contrast to today when hardware costs fall fast relative to human wages.
  4. Hardware speed will not rise as fast as hardware costs fall. Thus the cost advantage of parallel software would continue to rise.
  5. Emulating brains is a much more parallel task than are most software tasks today.
  6. Ems would typically run about a thousand times human mind speed, but would vary over a wide range of speeds. Ems in software product development races would run much faster.
  7. It would be possible to save a copy of an em engineer who just wrote some software, a copy available to answer questions about it, or to modify it.
  8. Em software engineers could sketch out a software design, and then split into many temporary copies who each work on a different part of the design, and talk with each other to negotiate boundary issues. (I don’t assume one could merge the copies afterward.)
  9. Most ems are crammed into a few dense cities. Toward em city centers, computing hardware is more expensive, and maximum hardware speeds are lower. Away from city centers, there are longer communication delays.

Again, the key question is: how would em software tools and work patterns differ from today’s, and how would they vary with time, application, software engineer speed, and city location?
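To illustrate how premises 6 and 9 interact, here is a small sketch of the subjective cost of communication delay at various em speeds; all numbers are hypothetical:

```python
def subjective_delay(round_trip_secs, speedup):
    """Subjective seconds an em waits per message round trip, given
    its speed multiple over ordinary human mind speed."""
    return round_trip_secs * speedup

# A 10 ms round trip across a city feels like ten seconds to a 1000x
# em; a 100 ms intercity trip feels like over a minute and a half.
for rtt_secs in (0.010, 0.100):
    for speedup in (1_000, 10_000):
        print(rtt_secs, speedup, subjective_delay(rtt_secs, speedup))
```

So, per premise 9, distance from collaborators is far more costly in subjective time for faster ems, which should shape both where em software engineers sit and how interactive their work patterns can afford to be.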

To give you an idea of the kind of conclusions one might be tempted to draw, see some recent suggestions by François-René Rideau.
