Author Archives: Robin Hanson

Baum on Age of Em

In the journal Futures, Seth Baum gives the first academic review of Age of Em. First, some words of praise:

The book is by far the most detailed description of the em world available. .. If you are wondering about what some aspect of an em world might look like, odds are good that a description can be found somewhere in the book. .. The breadth of research covered is impressive. .. The Age of Em is a thoughtful and original work of futures studies, bringing an insightful social science perspective to the topic of mind uploading. The book covers wide ground about the nature of the em world, offering a valuable resource for anyone interested in the topic, or about futures studies more generally. The book is accessibly written and could be read by undergraduates or even advanced high school students, though experts from any discipline will also find much of interest. The book is especially worthwhile for anyone who could benefit from an overview of contemporary social science research, which the book provides in abundance.

I am pleased to hear this, and largely agree. But of course Baum also has criticism.

The book’s methodology is rooted in extrapolation of current social science to em world conditions. At times, the extrapolations seem strained. For example, a claim that economic benefits of urban agglomeration would continue in an em world (p.215) cites Morgan (2014), which is a popular media description of Amazon’s large data centers. It is true that Amazon data centers are large, and it may well be true that em cities are large for similar reasons, but the latter does not necessarily follow from the former.

I’m surprised Baum has any doubts that economies of agglomeration would continue. I’ve taught urban economics, and it seems to me that we understand in some detail many of the forces that push for and against the clumping of economic activities. Since one of the main forces resisting concentration, travel congestion, is greatly reduced in an em world, while most forces pushing concentration remain strong, it seems to me a quite safe prediction that ems would clump together in cities.

In other stretches, Hanson’s personal tastes are apparent. This is seen, for example, in discussions of combinatorial auctions and prediction markets (p.184-188), two schemes for adapting market mechanisms for social decision making. Prediction markets in particular are a longstanding interest of Hanson’s. The book’s discussion of these topics says little about the em world and seems mainly oriented towards promoting them for society today. The reader gets the impression that Hanson wishes society today was more economically efficient and rational in a certain sense, and that he has embedded his hopes for a better world into his vision of ems.

It is a common habit of futurists to populate their imagined futures with many visions of more efficient physical technologies, but to presume no gains in social technologies. I think we should instead also expect the adoption of more efficient social technologies, especially in a more competitive world such as the em world would be. Which is why I tried to outline some changes of this sort. But it seems Baum prefers the usual habit.

The book proposes that those humans who own parts of the em economy “could retain substantial wealth”, but everyone else is “likely to starve” (p.336). To my eyes, this seems overly optimistic: if ems are so much smarter than humans, and if they have such a profit motive, then surely they could figure out how to trick, threaten, or otherwise entice humans into giving up their wealth. Humans would likely die out or, at best, scrape by in whatever meager existence the ems leave them with.

In history, there have always been some people who were much smarter than others, and yet the less smart have usually retained wealth, often great wealth. This remains true today. The correlation between individual wealth and intelligence is weak, and mostly not due to tricks or threats. Retirees today continue to control great wealth even though they tend to be physically, mentally, and socially less powerful. Clearly there isn’t a strong historical rule that the smartest take all wealth from the rest. Humans should probably be more concerned that the age of em would only last a year or two, and that we don’t know what happens next.

Given these dire prospects for humans, one might question whether it would be good to create ems in the first place. Unfortunately, the book does not consider this topic in any detail. The book is pro-em, even proposing to “subsidize the development of related technologies to speed the arrival of this transition [to the em era], and to subsidize its smoothness, equality, or transparency to reduce disruptions and inequalities in that transition” (p.375). But this position is tenuous at best and quite possibly dangerous.

I’m quite confident that the topic of evaluation of the em world will not be neglected, as so many seem so eager to discuss it. So I tried not to take much of a position in the book on the overall value of the em world, and a section toward the end of the book neutrally reviews many approaches to such evaluation. But it seems that readers will always try to find such a position in a book like mine, and then complain that the position is insufficiently defended.


A Book Response Prediction

All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident. Schopenhauer, 1788-1860.

My next book won’t come out until January, and reviews of it will appear in the weeks and months after that. But now, a year in advance, I want to make a prediction about the main objections that will be voiced. In particular I predict that two of the most common responses will be a particular opposing pair.

If you recall, our book is about hidden motives (a.k.a., “X is not about Y”):

We’re afraid to acknowledge the extent of our own selfishness. .. The Elephant in the Brain aims to .. blast floodlights into the dark corners of our minds. .. Why do humans laugh? Why are artists sexy? Why do people brag about travel? Why do we so often prefer to speak rather than listen?

Like all psychology books, The Elephant in the Brain examines many quirks of human cognition. But this book also ventures where others fear to tread: into social critique. The authors show how hidden selfish motives lie at the very heart of venerated institutions like Art, Education, Charity, Medicine, Politics, and Religion.

I predict that one of the most common responses will be something like “extraordinary claims require extraordinary evidence.” While the evidence we offer is suggestive, for claims as counterintuitive as ours on topics as important as these, evidence should be held to a higher standard than the one our book meets. We should shut up until we can prove our claims.

I predict that another of the most common responses will be something like “this is all well known.” Wise observers have known and mentioned such things for centuries. Perhaps foolish technocrats who only read in their narrow literatures are ignorant of such things, but our book doesn’t add much to what true scholars and thinkers have long known.

These responses are opposing in the sense that it is hard to find a set of positions from which one could endorse both responses.

I have not phrased this prediction so as to make it very easy to check later if it’s right. I have also not offered a specific probability. Given the many ambiguities here, this seems right to me.


Reversible Simulations 

Physicist Sabine Hossenfelder is irate that non-physicists use the hypothesis that we live in a computer simulation to intrude on the territory of physicists:

The simulation hypothesis, as it’s called, enjoys a certain popularity among people who like to think of themselves as intellectual, believing it speaks for their mental flexibility. Unfortunately it primarily speaks for their lacking knowledge of physics.

Among physicists, the simulation hypothesis is not popular and that’s for a good reason – we know that it is difficult to find consistent explanations for our observations. After all, finding consistent explanations is what we get paid to do.

Proclaiming that “the programmer did it” doesn’t only not explain anything – it teleports us back to the age of mythology. The simulation hypothesis annoys me because it intrudes on the terrain of physicists. It’s a bold claim about the laws of nature that however doesn’t pay any attention to what we know about the laws of nature. If you try to build the universe from classical bits, you won’t get quantum effects, so forget about this – it doesn’t work. ..

For the purpose of this present post, the details don’t actually matter all that much. What’s more important is that these difficulties of getting the physics right are rarely even mentioned when it comes to the simulation hypothesis. Instead there’s some fog about how the programmer could prevent simulated brains from ever noticing contradictions, for example contradictions between discretization and special relativity.

But how does the programmer notice a simulated mind is about to notice contradictions and how does he or she manage to quickly fix the problem? If the programmer could predict in advance what the brain will investigate next, it would be pointless to run the simulation to begin with. So how does he or she know what are the consistent data to feed the artificial brain with when it decides to probe a specific hypothesis? Where does the data come from? The programmer could presumably get consistent data from their own environment, but then the brain wouldn’t live in a simulation. (more)

Video games today typically only compute visual and auditory details of scenes that players are currently viewing, and then only to a resolution players are capable of noticing. The physics, chemistry, etc. is also made only as consistent and exact as typical players will notice. And most players don’t notice enough to bother them.

What if it were physicists playing a video game? What if they recorded a long video game period from several points of view, and were then able to go back and spend years scouring their data carefully? Mightn’t they then be able to prove deviations? Of course, if they tried long and hard enough. And all the more so if the game allowed players to construct many complex measuring devices.

But if the physicists were entirely within a simulation, then all the measuring, recording, and computing devices available to those physicists would be under full control of the simulators. If devices gave measurements showing deviations, the output of those devices could just be directly changed. Or recordings of previous measurements could be changed. Or simulators could change the high level output of computer calculations that study measurements. Or they might perhaps more directly change what the physicists see, remember, or think.

In addition, within a few decades computers in our world will typically use reversible computation (as I discuss in my book), wherein costs are low to reverse previous computations. When simulations are run on reversible computers, it becomes feasible and even cheap to wait until a simulation reveals some problem, and then reverse the simulation back to an earlier point, make some changes, and run the simulation forward again to see if the problem is avoided. And repeat until the problem is in fact avoided.
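This rollback strategy can be sketched as a simple loop. This is only a toy illustration under stated assumptions: the state update, the anomaly_detected test, and the retry limit are all hypothetical stand-ins, not a model of any real simulator:

```python
import random

def anomaly_detected(state):
    # Stand-in for a simulated physicist detecting (and publishing) a
    # deviation from the purported physics of the simulated world.
    return state < 0.0

def simulate_with_rollback(steps, max_retries=10, seed=0):
    """Run a toy simulation; when the in-world test flags an anomaly,
    roll back to the last checkpoint, perturb the details, and rerun.
    On a reversible computer the rollback itself would be cheap."""
    rng = random.Random(seed)
    state = 0.0
    for _ in range(steps):
        checkpoint = state  # cheap to restore if computation is reversible
        for _ in range(max_retries):
            candidate = checkpoint + rng.gauss(1.0, 0.5)  # advance one tick
            if not anomaly_detected(candidate):
                state = candidate
                break  # no one noticed a problem; keep going
            # Rollback: discard this run and rerun with different details.
            rng = random.Random(rng.random())
        else:
            raise RuntimeError("no consistent patch found; end simulation?")
    return state
```

Here a failed attempt is simply rerun from the checkpoint with perturbed details, mirroring the back-up-and-add-detail move described above.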

So those running a simulation containing physicists who could detect deviations from some purported physics of the simulated world could actually wait until some simulated physicist claimed to have detected a deviation. Or even wait until an article based on their claim was accepted for peer review. And then back up the simulation and add more physics detail to try to avoid the problem.

Yes, to implement a strategy like this those running the simulation might have to understand the physics issues as well as did the physicists in the simulation. And they’d have to adjust the cost of computing their simulation to the types of tests that the physicists inside examined. In the worst case, if the simulated universe seemed to allow for very large incompressible computations, then if the simulators couldn’t find a way to fudge that by changing high level outputs, they might have to find an excuse to kill off the physicists, to directly change their thoughts, or to end the simulation.

But overall it seems to me that those running a simulation containing physicists have many good options short of ending the simulation. Sabine Hossenfelder goes on to say:

It’s not that I believe it’s impossible to simulate a conscious mind with human-built ‘artificial’ networks – I don’t see why this should not be possible. I think, however, it is much harder than many future-optimists would like us to believe. Whatever the artificial brains will be made of, they won’t be any easier to copy and reproduce than human brains. They’ll be one-of-a-kind. They’ll be individuals.

It therefore seems implausible to me that we will soon be outnumbered by artificial intelligences with cognitive skills exceeding ours. More likely, we will see a future in which rich nations can afford raising one or two artificial consciousnesses and then consult them on questions of importance.

Here I just don’t see what Sabine can be thinking. Today we can quickly make many copies of most any item that we can make in factories from concise designs. Yes, quantum states have a “no-cloning theorem”, but even so if we knew of a good quantum state to start a system in, we should be able to create many such systems that start in that same state. And I know of no serious claim that human minds make important use of unclonable quantum states, or that this would prevent creating many such systems fast.

Yes, biological systems today can be hard to copy fast, because they are so crammed with intricate detail. But as with other organs like bones, hearts, ears, eyes, and skin, most of the complexity in biological brain cells probably isn’t used directly for the function that those cells provide the rest of the body, in this case signal processing. So just as emulations of bones, hearts, ears, eyes, and skin can be much simpler than those organs, a brain emulation should be much simpler than a brain.

Maybe Sabine will explain her reasoning here.


Darwin’s Unfinished Symphony

In one kind of book, a smooth talker who has published many books takes a fraction of a year to explore a topic that has newly piqued their curiosity. In another kind of book, someone who has spent a lifetime wrestling with a big subject tries to put it all together into an integrated synthesis. Sometimes they even synthesize the work of an entire research group or tradition. Kevin Laland’s book Darwin’s Unfinished Symphony is this second kind of book, a kind I much prefer.

Laland’s research group has for decades studied the origins of human cultural evolution. They’ve learned a lot. In particular they attribute humanity’s unique ability to accumulate culture over a long time to our very high *reliability* in transferring practices. Humans achieve such high reliability both by being smart, and by our unusual ability to *teach*, i.e., changing our behavior to make it easier for others to copy our practices. Just how high a reliability is required is shown by the example of Tasmania, where several thousand isolated humans slowly lost many skills and tools over thousands of years. It seems even human level intelligence and teaching isn’t good enough if your population is only a few thousand.

In both this book and in Henrich’s The Secret of our Success, I detect a tone of conflict between those who emphasize the value of smart brains for evolving culture, and those who emphasize the value of smart brains for managing the complex politics of large social groups. For example, in his book Laland says:

The currently dominant view is that the primate brain expanded to cope with the demands of a rich social life, including the aforementioned Machiavellian skills required to deceive and manipulate others, and the cognitive skills necessary to maintain alliances and track third-party relationships. The most important data supporting this hypothesis is a positive relationship between measures of group sizes and relative brain size. In our analyses, group size remained as an important predictor of relative brain size, but also proved a significant secondary predictor of primate intelligence and social learning. However, group size was neither the sole, nor the most important, predictor of brain size or intelligence in our models. Combined with our earlier finding that social group size does not predict the performance of primates in laboratory tests of cognition, this reinforced our view that there was more to primate brain evolution than selection for social intelligence. (p.144)

As far as I can remember, all of the cultural learning examples in both the Laland and Henrich books are outside of the domain of Machiavellian social competition. But cultural learning can also be useful there, and so even if the strongest selection pressure on brains was for social competition, that is completely consistent with a strong selection for increasingly reliable abilities to learn and teach. Of course the overall long term increase in humanity’s power and scope is probably less directly due to better social competition skills. But from each creature’s point of view that is mostly a side effect relative to their struggle to survive and reproduce.


Imagine Philosopher Kings

I just read Joseph Heath’s Enlightenment 2.0 (reviewed here by Alex). Heath is a philosopher who is a big fan of “reason,” which he sees as an accidentally-created uniquely-human mental capacity offering great gains in generality and accuracy over our other mental capacities. However, reason comes at the cost of being slow and difficult, requiring fragile social and environmental supports, and going against our nature.

Heath sees a recent decline in reliance on reason within our political system, which he blames much more on the right than the left, and he has a few suggestions for improvement. He wants the political process to take longer to consider each choice, to focus more on writing relative to sound and images, and to focus more on longer essays instead of shorter quips. Instead of people just presenting views, he wants more cross-examination and debate. Media coverage should focus more on experts than on journalists. (Supporting quotes below.)

It seems to me that academic philosopher Heath’s ideal of reason is the style of conversation that academic philosophers now use among themselves, in journals, peer review, and in symposia. Heath basically wishes that political conversations could be more like the academic philosophy conversations of his world. And I expect many others share his wish; there is after all the ancient ideal of the “philosopher king.”

It would be interesting if someone would explore this idea in detail, by trying to imagine just what governance would look like if it were run similar to how academic philosophers now run their seminars, conferences, journals, and departments. For example, imagine requiring a Ph.D. in philosophy to run for political office, and that the only political arguments that one could make in public were long written essays that had passed a slow process of peer review for cogency by professional philosophers. Bills sent to legislatures also require such a peer-reviewed supporting essay. Imagine further incentives to write essays responding to others, rather than just presenting one’s own view. For example, one might have to publish two response essays before being allowed to publish one non-response essay.

Assume that this new peer review process managed to uphold intellectual standards roughly as well as does the typical philosophy subfield journal today. Even then, I don’t have much confidence that this would go well. But I’m not sure, and I’d love to see someone who knows the internal processes of academic philosophy in some detail, and also knows common governance processes in some detail, work out a plausible guess for what a direct combination of these processes would look like. Perhaps in the form of a novel. I think we might learn quite a lot about what exactly can go right and wrong with reason.

Other professions might plausibly also wish that we ran the government more according to the standards that they use internally. It could also be interesting to imagine a government that was run more like how an engineering community is run, or how a community of physicists is run. Or even a community of spiritualists. Such scenarios could be both entertaining and informative.

Those promised quotes from Enlightenment 2.0:


Better Babblers

You can think of knowing how to write as knowing how to correlate words. Given no words, what first word should you write? Then given one word, what second word best correlates with that? Then given two words, what third word best fits with those two? And so on. Thus your knowledge of how to write can be broken into what you know at these different correlation orders: one word, two words, three words, and so on. Each time you pick a new word you can combine knowledge at these different orders, by weighing all their different recommendations for your next word.

This correlation order approach can also be applied at different scales. For example, given some classification of your first sentence, what kind of second sentence should follow? Given a classification of your first chapter, what kind of second chapter should follow? Many other kinds of knowledge can be similarly broken down into correlation orders, at different scales. We can do this for music, paintings, interior decoration, computer programs, math theorems, and so on.

Given a huge database, such as of writings, it is easy to get good at very low orders; you can just use the correlation frequencies found in your dataset. After that, simple statistical models applied to this database can give you good estimates for correlations to use at somewhat higher orders. And if you have enough data (roughly ten million examples per category I’m told) then recently popular machine learning techniques can improve your estimates at a next set of higher orders.
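To make the idea concrete, here is a minimal low order babbler in Python: it learns only which word tends to follow which, then samples from those counts. The corpus and all names here are invented for illustration:

```python
import random
from collections import defaultdict

def build_ngram_model(words, order=2):
    """Count which word follows each (order-1)-word context: these
    counts are the low order correlations a babbler relies on."""
    model = defaultdict(list)
    for i in range(len(words) - order + 1):
        context = tuple(words[i:i + order - 1])
        model[context].append(words[i + order - 1])
    return model

def babble(model, start, length=8, seed=0):
    """Generate text by repeatedly sampling a next word given the last
    few words, with no deeper structure behind the choices."""
    rng = random.Random(seed)
    out = list(start)
    for _ in range(length):
        choices = model.get(tuple(out[-len(start):]))
        if not choices:
            break  # unseen context; a larger corpus would babble on
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the weather is nice and the weather is mild and the sun is out".split()
print(babble(build_ngram_model(corpus, order=2), start=("the",)))
```

Raising the order, or the amount of data, makes the output look more coherent, but the model still has no deep structure behind it.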

There are some cases where this is enough; either you can get enormous amounts of data, or learning low order correlations well is enough to solve your problem. These cases include many games with well defined rules, many physical tasks where exact simulations are feasible, and some kinds of language translation. But there are still many other cases where this is far from enough to achieve human level proficiency. In these cases an important part of what we know can be described as very high order correlations produced by “deep” knowledge structures that aren’t well reduced to low order correlations.

After eighteen years of being a professor, I’ve graded many student essays. And while I usually try to teach a deep structure of concepts, what the median student actually learns seems to mostly be a set of low order correlations. They know what words to use, which words tend to go together, which combinations tend to have positive associations, and so on. But if you ask an exam question where the deep structure answer differs from the answer you’d guess looking at low order correlations, most students give the wrong answer.

Simple correlations also seem sufficient to capture most polite conversation talk, such as the weather is nice, how is your mother’s illness, and damn that other political party. Simple correlations are also most of what I see in inspirational TED talks, and when public intellectuals and talk show guests pontificate on topics they really don’t understand, such as quantum mechanics, consciousness, postmodernism, or the need always for more regulation everywhere. After all, media entertainers don’t need to understand deep structures any better than do their audiences.

Let me call styles of talking (or music, etc.) that rely mostly on low order correlations “babbling”. Babbling isn’t meaningless, but to ignorant audiences it often appears to be based on a deeper understanding than is actually the case. When done well, babbling can be entertaining, comforting, titillating, or exciting. It just isn’t usually a good place to learn deep insight.

As we slowly get better at statistics and machine learning, our machines will slowly get better at babbling. The famous Eliza chatbot went surprisingly far using very low order correlations, and today chatbots best fool us into thinking they are human when they stick to babbling style conversations. So what does a world of better babblers look like?

First, machines will better mimic low quality student essays, so schools will have to try harder to keep such students from using artificial babblers.

Second, the better machines get at babbling, the more humans will try to distinguish themselves from machines via non-babbling conversational styles. So expect less use of simple easy-to-understand-and-predict speech in casual polite conversation, inspirational speeches, and public intellectual talk.

One option is to put a higher premium on talk that actually makes deep sense, in terms of deep concepts that experts understand. That would be nice for those of us who have always emphasized such things. But alas there are other options.

A second option is to put a higher premium on developing very distinctive styles of talking. This would be like how typical popular songs from two centuries ago could be sung and enjoyed by most anyone, compared to how popular music today is matched in great detail to the particular features of particular artists. Imagine most all future speakers having as distinct a personal talking style.

A third option is more indirect, ironic, and insider style talk, such as we tend to see on Twitter today. People using words and phrases and cultural references in ways that only folks very near in cultural space can clearly accept as within recent local fashion. Artificial babblers might not have enough data to track changing fashions in such narrow groups.

Bottom line: the more kinds of conversation styles that simple machines can manage, the more humans will try to avoid talking in those styles, at least when not talking to machines.


The Robot Protocol

Talking with a professor of robotics, I noticed a nice approachable question at the intersection of social science, computer science, and futurism.

Someday robots will mix with humans in public, walking our streets, parks, hospitals, and stores, driving our streets, swimming our waterways, and perhaps flying our skies. Such public robots may vary enormously in their mental and physical capacities, but if they are to mix smoothly with humans in public, then we will probably expect them to maintain a minimal set of common social capacities. Such as responding sensibly to “Who are you?” and “Get out of my way.” And the rest of us would have a new modified set of social norms for dealing with public robots via these capacities.

Together these common robot capacities and matching human social norms would become a “robot protocol.” Once ordinary people and robot makers have adapted to it, this protocol would be a standard persisting across space and time, and relatively hard to change. A standard that diverse robots could also use when interacting with each other in public.

Because it would be a wide and persistent standard, the robot protocol can’t be matched in much detail to the specific local costs of implementing various robot capacities. Instead, it could at best be matched to broad overall trends in such costs. To allow robots to walk among us, we’d try to be forgiving and only expect robots to have capacities that we especially value, and that are relatively cheap to implement in a wide range of contexts.

(Of course this general robot protocol isn’t the only thing that would coordinate robot and human interactions. There’d also be many other more context-dependent protocols.)

One simple option would be to expect each public robot to be “tethered” via fast robust communication to a person on call who can rapidly respond to all queries that the robot can’t handle itself. But it isn’t clear how sufficient this approach will be for many possible queries.

Robots would probably be expected to find and comply with any publicly posted rules for interacting in particular spaces, such as the rules we often post for humans on signs. Perhaps we will simplify such rules for robots. In addition, here are some things that people sometimes say to each other in public where we might perhaps want robots to have analogous capacities:

Who are you? What are you doing here? Why are you following me? Please don’t record me. I’m serving you with this legal warrant. Stop, this is the police! You are not allowed to be here; leave. Non-authorized personnel must evacuate this area immediately. Get out of my way. You are hurting me. Why are you calling attention to me? Can you help me? Can you take our picture? Where is the nearest bathroom? Where is a nearby recharging station? (I may add more here.)
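As a thought experiment, the core of such a protocol might look like a small table of standard queries plus a tether fallback. Everything below, from the query set to the escalate_to_tether helper, is a hypothetical sketch, not a proposed standard:

```python
# Hypothetical sketch: a fixed set of standard queries every public robot
# must handle, with anything else escalated to a human "tether" on call.
STANDARD_QUERIES = {
    "who are you?": lambda bot: f"I am {bot['id']}, operated by {bot['operator']}.",
    # append returns None, so `or` yields the reply string after the side effect
    "get out of my way": lambda bot: bot["actions"].append("yield_path") or "Yielding.",
    "please don't record me": lambda bot: bot["actions"].append("stop_recording") or "Recording stopped.",
}

def escalate_to_tether(bot, query):
    # Fallback for queries outside the protocol: hand off to the on-call human.
    return f"Connecting you to {bot['operator']} (human on call)."

def handle_query(bot, query):
    handler = STANDARD_QUERIES.get(query.strip().lower())
    return handler(bot) if handler else escalate_to_tether(bot, query)
```

So "Who are you?" would be answered directly, while "Can you take our picture?" would fall through to the tether, matching the tethered-person option mentioned above.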

It seems feasible to start now to think about the design of such a robot protocol. Of course in the end a robot protocol might be just a social convention without the force of law, and it may result more from decentralized evolution than centralized design. Even so, we may now know enough about human social preferences and the broad outlines of the costs of robot capacities to start to usefully think about this problem.


The Great Cycle Rule

History contains a lot of data, but when it comes to the largest scale patterns, our data is very limited. Even so, I think we’d be crazy not to notice whatever patterns we can find at those largest scales, and ponder them. Yes we can’t be very sure of them, but we surely should not ignore them.

I’ve said that history can be summarized as a sequence of roughly exponential growth modes. The three most recent modes were the growth of human foragers, then of farmers, then of industry. Roughly, foragers doubled every quarter million years, farmers every thousand years, and industry every fifteen years. (Before humans, animal brains doubled roughly every 35 million years.)

I’ve previously noted that this sequence shows some striking patterns. Each transition between modes took much less than a previous doubling time. Modes have gone through a similar number of doublings before the next mode appeared, and the factors by which growth rates increased have also been similar.  In addition, the group size that typified each mode was roughly the square of that of the previous mode, from thirty for foragers to a thousand for farmers to a million for industry.

In this post I report a new pattern, about cycles. Some cycles, such as days, months, and years, are common to most animals. Other cycles, such as heartbeats lasting about a second and lifetimes taking threescore and ten, are common to humans. But there are other cycles that are distinctive of each growth mode, and are most often mentioned when discussing the history of that mode.

For example, the 100K year cycle of ice ages seems the most discussed cycle regarding forager history. And the two to three century cycle of empires, such as documented by Turchin, seems most discussed regarding the history of farmers. And during our industry era, it seems we most discuss the roughly five year business cycle.

The new pattern I recently noticed is that each of these cycles lasts roughly a quarter to a third of its mode’s doubling time. So a mode typically grows 20-30% during one period of its main cycle. I have no idea why, but it still seems a pattern worth noting, and pondering.

If a new mode were to follow these patterns, it would appear in the next century, after a transition of ten years or less, and have a doubling time of about a month, a main cycle of about a week, and a typical group size of a trillion. Yes, these are only very rough guesses. But they still seem worth pondering.
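The extrapolation above can be sketched numerically. This is only a back-of-envelope reconstruction using the approximate doubling times stated earlier in the post, so treat the outputs as order-of-magnitude guesses, not precise forecasts:

```python
# Rough extrapolation of the growth-mode patterns described above.
# Inputs are the post's approximate doubling times; outputs are order-of-magnitude.
from statistics import geometric_mean

# Doubling times in years: animal brains, foragers, farmers, industry.
doubling_years = [35e6, 250e3, 1e3, 15]

# Factor by which the growth rate sped up at each mode transition.
speedups = [a / b for a, b in zip(doubling_years, doubling_years[1:])]
typical_speedup = geometric_mean(speedups)      # roughly 130x

next_doubling_years = 15 / typical_speedup      # ~0.11 yr, i.e. about a month or so
next_cycle_years = next_doubling_years / 3.5    # main cycle ~ 1/4 to 1/3 of a doubling
next_group_size = (1e6) ** 2                    # square of industry's ~million

print(f"next doubling time ~ {next_doubling_years * 365:.0f} days")
print(f"next main cycle ~ {next_cycle_years * 365:.0f} days")
print(f"next group size ~ {next_group_size:.0e}")
```

Running this gives a doubling time of about six weeks, a main cycle of under two weeks, and a group size of a trillion, consistent with the rough guesses above.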


I’m In Europe This Week

Catch me at one of six talks I’ll give in Europe this week on Age of Em:


Big Software Firm Bleg

I haven’t yet posted much on AI as Software. But now I’ll say more, as I want to ask a question.

Someday ems may replace humans in most jobs, and my first book talks about how that might change many things. But whether or not ems are the first kind of software to replace humans wholesale in jobs, non-em software may plausibly do this eventually. If ems come first, such software would then replace ems; if not, it would replace humans directly.

Many people suggest, implicitly or explicitly, that non-em software that takes over most jobs will differ in big ways from the software that we’ve seen over the last seventy years. But they are rarely clear on what exact differences they foresee. So the plan of my project is to just assume our past software experience is a good guide to future software. That is, to predict the future, one may 1) assume current distributions of software features will continue, or 2) project past feature trends into future changes, or 3) combine past software feature correlations with other ways we expect the future to differ.

This effort may encourage others to better clarify how they think future software will differ, and help us to estimate the consequences of such assumptions. It may also help us to more directly understand a software-dominated future, if there are many ways that future software won’t greatly change.

Today, each industry makes a kind of stuff (product or service) we want, or a kind of stuff that helps other industries to make stuff. But while such industries are often dominated by a small number of firms, the economy as a whole is not so dominated. This is mainly because there are so many different industries, and firms suffer when they try to participate in too many industries. Will this lack of concentration continue into a software dominated future?

Today each industry gets a lot of help from humans, and each industry helps to train its humans to better help that industry. In addition, a few special industries, such as schooling and parenting, change humans in more general ways, to help better in a wide range of industries. In a software dominated future, humans are replaced by software, and the schooling and parenting industries are replaced by a general software industry. Industry-independent development of software would happen in the general software industry, while specific adaptations for particular industries would happen within those industries.

If so, the new degree of producer concentration depends on two key factors: what fraction of software development is general as opposed to industry-specific, and how concentrated is this general software industry. Regarding this second factor, it is noteworthy that we now see some pretty big players in the software industry, such as Google, Apple, and Microsoft. And so a key question is the source of this concentration. That is, what exactly are the key advantages of big firms in today’s software market?

There are many possibilities, including patent pools and network effects among customers of key products. Another possibility, however, is one where I expect many of my readers to have relevant personal experience: scale economies in software production. Hence this bleg – a blog post asking a question.

If you are an experienced software professional who has worked both at a big software firm and also in other places, my key question for you is: by how much was your productive efficiency as a software developer increased (or decreased) due to working at a big software firm? That is, how much more could you get done there that wasn't attributable to having a bigger budget to do more, or to paying more for better people, tools, or resources? Instead, I'm looking for the net increase (or decrease) in your output due to software tools, resources, security, oversight, rules, or collaborators that are more feasible and hence more common at larger firms. Ideally your answer will be in the form of a percentage, such as "I seem to be 10% more productive working at a big software firm."

Added 3:45p: I meant “productivity” in the economic sense of the inputs required to produce a given output, holding constant the specific kind of output produced. So this kind of productivity should ignore the number of users of the software, and the revenue gained per user. But if big vs small firms tend to make different kinds of software, which have different costs to make, those differences should be taken into account. For example, one should correct for needing more man-hours to add a line of code in a larger system, or in a more secure or reliable system.
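The productivity measure defined above can be made concrete with a small sketch. All names and numbers here are hypothetical illustrations, not part of the original question:

```python
# Hypothetical illustration of the productivity comparison described above.
# The adjustment factor and all example numbers are made up.

def scale_effect(big_firm_output, other_output, input_adjustment=1.0):
    """Percent change in output per unit input at a big firm vs. elsewhere.

    input_adjustment corrects for differences in the kind of output produced,
    e.g. 1.2 if the big-firm system needed 20% more man-hours per comparable
    unit due to being larger, more secure, or more reliable.
    """
    return (big_firm_output * input_adjustment / other_output - 1.0) * 100.0

# Example: the same developer ships 11 comparable units per quarter at the
# big firm vs. 10 elsewhere, with no kind-of-output adjustment needed:
print(f"{scale_effect(11, 10):+.0f}%")  # prints "+10%"
```

The point of the `input_adjustment` factor is exactly the correction mentioned above: big and small firms tend to make different kinds of software, and those differences in cost per unit of output should be netted out before reporting a percentage.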
