Once More, With Feeling

Sean Carroll’s new best-selling book The Big Picture runs the risk of preaching to the choir. To my mind, it gives a clear and effective explanation of the usual top physicists’ world view on religion, mysticism, free will, consciousness, meaning, morality, and so on. (The usual view, but an unusually readable, articulate, and careful explanation.) I don’t disagree, but then I’m very centered in this physicist view.

I read through dozens of reviews, and none of them even tried to argue against his core views! Yet I have many economist colleagues who often give me grief for presuming this usual view. And I’m pretty sure the publication of this book (or of previous similar books) won’t change their minds. Which is a sad commentary on our intellectual conversation; we mostly see different points of view marketed separately, with little conversation between proponents.

Carroll inspires me to try to make one point I think worth making, even if it is also ignored. My target is people who think philosophical zombies make sense. Zombies are supposedly just like real people in having the same physical brains, which arose through the same causal history. The only difference is that while real people really “feel”, zombies do not. But since this state of “feeling” is presumed to have zero causal influence on behavior, zombies act exactly like real people, including being passionate and articulate about claiming they are not zombies. People who think they can conceive of such zombies see a “hard question” regarding which physical systems that claim to feel and otherwise act as if they feel actually do feel. (And which other systems feel as well.)

The one point I want to make is: if zombies are conceivable, then none of us will ever have any more relevant info than we do now about which systems actually feel. Which is pretty much zero info! You will never have any info about whether you ever really felt in the past, or will ever feel in the future. No one part of your brain ever gets any info from any other part of your brain about whether it really feels.

These claims all follow from our very standard and well-established info theory. We get info about things by interacting with them, so that our states become correlated with the states of those things. But by assumption this hypothesized extra “feeling” state never interacts with anything. The actual reason why you feel compelled to assert very confidently that you really do feel has no causal connection with whether you actually do really feel. You would have been just as likely to say it if it were not true. What could possibly be the point of hypothesizing and forming beliefs about states about which one can never get any info?
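
To see the info-theory point concretely, here is a toy sketch (made-up Python variables of mine, not anything from Carroll’s book): a “feeling” variable with no causal interactions shares zero mutual information with behavior, so no observation of behavior can ever update anyone about it.

```python
# Toy illustration: a "feeling" state with no causal interactions carries zero
# information; nothing we observe can ever tell us about it.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

brain = rng.integers(0, 2, n)                   # physical brain state
flip = rng.random(n) < 0.1                      # 10% noise
behavior = np.where(flip, 1 - brain, brain)     # behavior caused (noisily) by brain state
feeling = rng.integers(0, 2, n)                 # hypothesized extra state: no causal links

def mutual_info(x, y):
    """Empirical mutual information (bits) between two binary arrays."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

print(mutual_info(behavior, brain))    # clearly positive: interaction creates correlation
print(mutual_info(behavior, feeling))  # ~0: no interaction, so no info, ever
```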

If you have learned anything about overcoming bias, you should be very suspicious of such beliefs, and eager for points of view where you don’t have to rely on possibly-false and info-free beliefs. Carroll presents such a point of view:

There’s nothing more disheartening than someone telling you that the problem you think is most important and central isn’t really a problem at all. As poetic naturalists, that’s basically what we’ll be doing. .. Philosophical zombies are simply inconceivable, because “consciousness” is a particular way of talking about the behavior of certain physical systems. The phrase “experiencing the redness of red” is part of a higher-level vocabulary we use to talk about the emergent behavior of the underlying physical system, not something separate from the physical system.

There’s not much to it, but that’s as it should be. I agree with Carroll; there literally isn’t anything to talk about here.


Against Prestige

My life has been, in part, a series of crusades. First I just wanted to understand as much as possible. Then I focused on big problems, wondering how to fix them. Digging deeper I was persuaded by economists: our key problems are institutional. Yes we can have lamentable preferences and cultures. But it is hard to find places to stand and levers to push to move these much, or even to understand the effects of changes. Institutions, in contrast, have specific details we can change, and economics can say which changes would help.

I learned that the world shows little interest in the institutional changes economists recommend, apparently because they just don’t believe us. So I focused on an uber institutional problem: what institutions can we use to decide together what to believe? A general solution to this problem might get us to believe economists, which could get us to adopt all the other economics solutions. Or to believe whomever happens to be right, when economists are wrong. I sought one ring to rule them all.

Of course it wasn’t obvious that a general solution exists, but amazingly I did find a pretty general one: prediction markets. And it was also pretty simple. But, alas, mostly illegal. So I pursued it. Trying to explain it, looking for everyone who had said something similar. Thinking and hearing of problems, and developing fixes. Testing it in the lab, and in the field. Spreading the word. I’ve been doing this for 28 years now. (Began at age 29.)
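
To give a sense of how simple the mechanism can be, here is a minimal toy sketch of one standard prediction market design, a logarithmic market scoring rule market maker; the parameters and trades below are mine, for illustration only. Traders pay the change in a cost function to move the market, and the resulting prices can be read as consensus probabilities.

```python
import math

class LMSRMarket:
    """Toy logarithmic market scoring rule market maker over n outcomes."""
    def __init__(self, n_outcomes, b=100.0):
        self.b = b                      # liquidity parameter: bigger b = harder to move prices
        self.q = [0.0] * n_outcomes     # outstanding shares of each outcome

    def cost(self, q):
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def prices(self):
        """Current prices, interpretable as consensus probabilities (they sum to 1)."""
        z = sum(math.exp(qi / self.b) for qi in self.q)
        return [math.exp(qi / self.b) / z for qi in self.q]

    def buy(self, outcome, shares):
        """Buy shares of an outcome; returns the amount the trader must pay."""
        new_q = list(self.q)
        new_q[outcome] += shares
        fee = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return fee

m = LMSRMarket(2)
print(m.prices())    # [0.5, 0.5] before any trades
print(m.buy(0, 50))  # a trader who thinks outcome 0 is likely buys shares
print(m.prices())    # price of outcome 0 rises toward that trader's belief
```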

And I will keep at it. But I gotta admit it seems even harder to interest people in this one uber solution than in more specific solutions. Which leads me to think that most who favor specific solutions probably do so for reasons other than the ones economists give; they are happy to point to economist reasons when those reasons support them, and ignore economists otherwise. So in addition to pursuing this uber fix, I’ve been studying human behavior, trying to understand why we seem so uninterested.

Many economist solutions share a common feature: a focus on outcomes. This feature is shared by experiments, incentive contracts, track records, and prediction markets, and people show a surprising disinterest in all of them. And now I finally think I see a common cause: an ancient human habit of excess deference to the prestigious. As I recently explained, we want to affiliate with the prestigious, and feel that an overly skeptical attitude toward them taints this affiliation. So we tend to let the prestigious in each area X decide how to run area X, which they tend to arrange more to help them signal than to be useful. This happens in school, law, medicine, finance, research, and more.

So now I enter a new crusade: I am against prestige. I don’t yet know how, but I will seek ways to help people doubt and distrust the prestigious, so they can be more open to focusing on outcomes. Not to doubt that the prestigious are more impressive, but to doubt that letting them run the show produces good outcomes. I will be happy if other competent folks join me, though I’m not especially optimistic. Yet.


Caplan Audits Age of Em

When I showed Bryan Caplan an early draft of my book, his main concern was that I didn’t focus enough on humans, as he doesn’t think robots can be conscious. In his first critical post, he focused mainly on language and emphasis issues. But he summarized “the reasoning simply isn’t very rigorous”, and he gave 3 substantive objections:

The idea that the global economy will start doubling on a monthly basis is .. a claim with a near-zero prior probability. ..

Why wouldn’t ems’ creators use the threat of “physical hunger, exhaustion, pain, sickness, grime, hard labor, or sudden unexpected death” to motivate the ems? .. “torturing” ems, .. why not? ..

Why wouldn’t ems largely be copies of the most “robot-like” humans – humble workaholics with minimal personal life, content to selflessly and uncomplainingly serve their employers?

He asked me direct questions on my moral evaluation of ems, so I asked him to estimate my overall book accuracy relative to the standard of academic consensus theories, given my assumptions. Caplan said:

The entire analysis hinges on which people get emulated, and there is absolutely no simple standard academic theory of that. If, as I’ve argued, we would copy the most robot-like people and treat them as slaves, at least 90% of Robin’s details are wrong.

Since I didn’t think that how docile ems are matters much for most of my book, I challenged him to check five random pages. Today, he reports back:

Limiting myself to his chapters on Economics, Organization, and Sociology, [half of the book’s six sections] .. After performing this exercise, I’m more inclined to say Robin’s only 80% wrong. .. My main complaint is that his premises about em motivation are implausible and crucial.

Caplan picked 23 quotes from those pages. (I don’t know how he picked them; I count ~35 claims.) In one of these (#22) he disputes the proper use of the word “participate”, and in one (#12) he says he can’t judge.

In two more, he seems to just misread the quotes. In #21, I say taxes can’t discourage work by retired humans, and he says but ems work. In #8 I say if most ems are in the few biggest cities, they must also be in the few biggest nations (by population). He says there isn’t time for nations to merge.

If I set aside all these, that leaves 19 evaluations, out of which I count 7 (#1,4,9,13,17,19,20) where he says agree or okay, making me only 63% wrong in his eyes. Now let’s go through the 12 disagreements, which fall into five clumps.

In #6, Caplan disagrees with my claim that “well-designed computers can be secure from theft, assault, and disease.” On page 62, I had explained:

Ems may use technologies such as provably secure operating system kernels (Klein et al. 2014), and capability-based secure computing systems, which limit the powers of subsystems (Miller et al. 2003).

In #5, I had cited sources showing that in the past most innovation has come from many small innovations, instead of a few big ones. So I said we should expect that for ems too. Caplan says that should reverse because ems are more homogenous than humans. I have no idea what he is thinking here.

In #3,7, he disagrees with my applying very standard urban econ to ems:

It’s not clear what even counts as urban concentration in the relevant sense. .. Telecommuting hasn’t done much .. why think ems will lead to “much larger” em cities? .. Doesn’t being a virtual being vitiate most of the social reasons to live near others? ..

But em virtual reality makes “telecommuting” a nearly perfect substitute for in-person meetings, at least at close distances. And one page before, I had explained that “fast ems .. can suffer noticeable communication delays with city scale separations.” In addition, many ems (perhaps 20%) do physical tasks, and all are housed in hardware needing physical support.

In #2,23, Caplan disagrees with my estimating that the human fraction of income controlled slowly falls, because he says all ems must always remain absolute slaves; “humans hold 100% of wealth regardless .. ems own nothing.”

Finally, half of his disagreements (#10,11,14,15,16,18) stem from his seeing ems as quite literally “robot-like”. If not for this, he’d score me as only 31% wrong. According to Caplan, ems are not disturbed by “life events”, only by disappointing their masters. They only group, identify, and organize as commanded, not as they prefer or choose. They have no personality “in a human sense.” They never disagree with each other, and never need to make excuses for anything.

Remember, Caplan and I agree that the key driving factor here is that a competitive em world seeks the most productive (per subjective minute) combinations of humans to scan, mental tweaks and training methods to apply, and work habits and organization to use. So our best data should be the most productive people in the world today, or that we’ve seen in history. Yet the most productive people I know are not remotely “robot-like”, at least in the sense he describes above. Can Caplan name any specific workers, or groups, he knows that fit the bill?

In writing the book I searched for literatures on work productivity, and used many dozens of articles on specific productivity correlates. But I never came across anything remotely claiming “robot-like” workers (or tortured slaves) to be the most productive in modern jobs. Remember that the scoring standard I set was not personal intuition but the consensus of the academic literature. I’ve cited many sources, but Caplan has yet to cite any.

From Caplan, I humbly request some supporting citations. But I think he and I will make only limited progress in this discussion until some other professional economists weigh in. What incantations will summon the better spirits of the Econ blogosphere?


Why Does Software Rot?

Almost a year ago computer scientist Daniel Lemire wrote a post critical of a hypothesis I’ve favored, one I’ve used in Age of Em. On the “better late than never” principle, I’ll finally respond now. The hypothesis:

Systems that adapt to contexts tend to get more fragile and harder to readapt to new contexts.

In a 2012 post I said we see this tendency in human brains, in animal brains, in software, in product design, in species, and in individual cells. There is a related academic literature on design feature entrenchment (e.g., here, here, here, here).

Lemire’s 2015 response:

I am arguing back that the open source framework running the Internet, and serving as a foundation for companies like Google and Apple, is a counterexample. Apache, the most important web server software today, is an old piece of technology whose name is a play on words (“a patched server”) indicating that it has been massively patched. The Linux kernel itself runs much of the Internet, and has served as the basis for the Android kernel. It has been heavily updated… Linus Torvalds wrote the original Linux kernel as a tool to run Unix on 386 PCs… Modern-day Linux is thousands of times more flexible.

So we have evolved from writing everything from scratch (in the seventies) to massively reusing and updating pre-existing software. And yet, the software industry is the most flexible, fast-growing industry on the planet. .. If every start-up had to build its own database engine, its own web server… it would still cost millions of dollars to do anything. And that is exactly what would happen if old software grew inflexible: to apply Apache or MySQL to the need of your start-up, you would need to rewrite them first… a costly endeavour. ..

Oracle was not built from the ground up to run on thousands of servers in a cloud environment. So some companies are replacing Oracle with more recent alternatives. But they are not doing so because Oracle has gotten worse, or that Oracle engineers cannot keep up. When I program in Java, I use an API that dates back to 1998 if not earlier. It has been repeatedly updated and it has become more flexible as a result…

Newer programming languages are often interesting, but they are typically less flexible at first than older languages. Everything else being equal, older languages perform better and are faster. They improve over time. .. Just like writers of non-fiction still manage to write large volumes without ending with an incoherent mass, software programmers have learned to cope with very large and very complex endeavours. ..

Programmers, especially young programmers, often prefer to start from scratch. .. In part because it is much more fun to write code than to read code, while both are equally hard. That taste for fresh code is not an indication that starting from scratch is a good habit. Quite the opposite! ..

“Technical debt” .. is a scenario whereby the programmers have quickly adapted to new circumstances, but without solid testing, documentation and design. The software is known to be flawed and difficult, but it is not updated because it “works”. Brains do experience this same effect.

I have long relied on a distinction between architecture and content (see here, here, here, here, here). Content is the part of a system that it is easy to add to or change without changing the rest of the system; architecture is the other part. (Yes, there is a spectrum.) The more content that is fitted to an architecture, and the bigger is that architecture, the harder it becomes to change the architecture.
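
A toy code sketch of the distinction (my own illustration, not from the post): the function signature below plays the role of architecture, while the call sites are content fitted to it; adding more content is cheap, but changing the signature means touching every piece of content that has accumulated around it.

```python
# Toy illustration of architecture vs. content.

# "Architecture": a signature that everything else gets written against.
def send_message(recipient, text):
    print(f"to {recipient}: {text}")

# "Content": call sites fitted to that architecture. Adding one more is cheap.
def greet(user):
    send_message(user, "hello")

def remind(user, task):
    send_message(user, f"don't forget {task}")

def nag(user, task):
    remind(user, task)
    remind(user, task)

greet("alice")
nag("bob", "the report")

# Changing the architecture (say, requiring a priority argument on every send)
# now means finding and rewriting every call site above, and that cost keeps
# growing as more content is fitted to the old signature.
```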

Lemire’s examples seem to be of systems that last long and grow large because they don’t change their core architecture. When an architecture is well enough matched to a stable problem, systems built on it can last long, and grow large, because it is too much trouble to start a competing system from scratch. But when different approaches or environments need different architectures, then after a system grows large enough, one is mostly forced to start over from scratch to use a different enough approach, or to function in a different enough environment.

This is probably why “Some companies are replacing Oracle with more recent alternatives.” Oracle’s architecture isn’t well enough matched. I just can’t buy Lemire’s suggestion that the only reason people ever start new software systems from scratch today is the arrogance and self-indulgence of youth. It happens far too often to explain that way.


Beware Prestige-Based Discretion

Before the modern world, most jobs had a big physical component. And so physical ability (strength, speed, stamina, coordination, etc.) was one of the main things people tried to show off. Yes, people did try to show off physical abilities on the job. But when people got serious about showing off, they created special off-the-job contests, such as races and games.

These special contests made it much easier for observers to see small ability differences. For example, you might watch messengers all day on the job running from place to place, and though you’d get a vague idea of which ones were faster, you couldn’t see fine differences very well. But a race controls for other variation by having contestants all start at the same time on a line, and all run straight to a finish line. So even if one runner beats another by only a fraction of a second, observers can still see the difference. Other kinds of special contests also reduce noise, making it easier to see smaller ability differences.
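
A tiny simulation (made-up numbers of mine, just to illustrate the noise point): the lower the noise in each observed performance, the more reliably a small true ability gap shows up in who wins.

```python
# Toy illustration with made-up numbers: how often a small true ability gap
# shows up in the observed outcome, under low vs. high observation noise.
import random

def gap_visible_rate(ability_gap, noise_sd, trials=100_000):
    """Fraction of contests where the truly better contestant scores higher."""
    wins = 0
    for _ in range(trials):
        better = ability_gap + random.gauss(0, noise_sd)  # better contestant's observed score
        worse = random.gauss(0, noise_sd)                 # worse contestant's observed score
        wins += better > worse
    return wins / trials

print(gap_visible_rate(0.1, noise_sd=0.02))  # low-noise race: the gap is almost always visible
print(gap_visible_rate(0.1, noise_sd=2.0))   # noisy on-the-job watching: nearly a coin flip
```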

When people can choose between competition forums with more and less noise, signaling incentives will induce them to choose forums with less noise. After all, competitors who choose forums with more noise will be seen as trying to hide their lower abilities among the noise.

So if messengers who wanted to show off their running abilities had a lot of discretion about how messenger jobs were arranged, they’d try to make their jobs look a lot like races. Which would help them show off, but would be less effective at getting messages delivered. Which is why people who hire messengers need to pay attention to how fast messages get delivered, and not just to hiring the fastest runners. Just hiring the fastest runners and letting them decide how messages get delivered is a recipe for waste.

In the rest of society, however, we often both try to hire people who seem to show off the highest related abilities, and we let those most prestigious people have a lot of discretion in how the job is structured. For example, we let the most prestigious doctors tell us how medicine should be run, the most prestigious lawyers tell us how law should be run, the most prestigious finance professionals tell us how the financial system should work, and the most prestigious academics tell us how to run schools and research.

This can go very wrong! Imagine that we wanted research progress, and that we let the most prestigious researchers pick research topics and methods. To show off their abilities, they may pick topics and methods that most reduce the noise in estimating abilities. For example, they may pick mathematical methods, and topics that are well suited to such methods. And many of them may crowd around the same few topics, like runners at a race. These choices would succeed in helping the most able researchers to show that they are in fact the most able. But the actual research that results might not be very useful at producing research progress.

Of course if we don’t really care about research progress, or students learning, or medical effectiveness, etc., if what we mainly care about is just affiliating with the most impressive folks, well then all this isn’t much of a problem. But if we do care about these things, then unthinkingly presuming that the most prestigious people are the best to tell us how to do things, that can go very very wrong.


Henry Farrell on Age of Em

There is a difference between predicting the weather, and predicting climate. If you know many details on current air pressures, wind speeds, etc, you can predict the weather nearby a few days forward, but after weeks to months at most you basically only know an overall distribution. However, if there is some fundamental change in the environment, such as via carbon emissions, you might predict how that distribution will change as a result far into the future; that is predicting climate.
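
A toy chaotic system makes the weather/climate distinction concrete (my own illustration, not from the book or the review): tiny errors in the initial state ruin detailed forecasts within a few dozen steps, yet changing an underlying parameter shifts the long-run distribution in a predictable way.

```python
# Toy "weather vs. climate" illustration using the logistic map x -> r*x*(1-x).
import statistics

def trajectory(r, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# "Weather": two nearly identical initial states soon diverge, so detailed
# step-by-step forecasts fail after a short horizon.
a = trajectory(3.9, 0.5000, 50)
b = trajectory(3.9, 0.5001, 50)
print(abs(a[-1] - b[-1]))   # no longer small

# "Climate": changing the underlying parameter r shifts the long-run
# distribution (here, its average) in a way we can still predict.
print(statistics.mean(trajectory(3.6, 0.5, 10_000)[1_000:]))
print(statistics.mean(trajectory(3.9, 0.5, 10_000)[1_000:]))
```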

Henry Farrell, at Crooked Timber, seems to disagree with Age of Em because he thinks we can only predict social weather, not social climate:

Tyler Cowen says .. Age of Em .. won’t happen. I agree. I enjoyed the book. .. First – the book makes a strong claim for the value of social science in extrapolating likely futures. I am a lot more skeptical that social science can help you make predictions. .. Hanson’s arguments seem to me to rely on a specific combination of (a) an application of evolutionary theory to social development with (b) the notion that evolutionary solutions will rapidly converge on globally efficient outcomes. This is a common set of assumptions among economists with evolutionary predilections, but it seems to me to be implausible. In actually existing markets, we see some limited convergence in the short term on e.g. forms of organization, but this is plausibly driven at least as much by homophily and politics as by the actual identification of efficient solutions. Evolutionary forces may indeed lead to the discovery of new equilibria, but haltingly, and in unexpected ways. .. This suggests an approach to social science which doesn’t aim at specific predictions a la Hanson, so much as at identifying the underlying forces which interact (often in unpredictable ways) to shape and constrain the range of possible futures. ..

In the end, much science fiction is doing the same kind of thing as Hanson ends up doing – trying in a reasonably systematic way to think through the social, economic and political consequences of certain trends, should they develop in particular ways. The aims of extrapolationistas and science fiction writers may be different – prediction versus constrained fiction writing – but their end result – enriching our sense of the range of possible futures that might be out there – are pretty close to each other. .. it is the reason I got value from his book. ..

So Hanson’s extrapolated future seems to me to reflect an economist’s perspective in which markets have priority, and hierarchy is either subordinated to the market or pushed aside altogether. The work of Hannu Rajaniemi provides a rich, detailed, alternative account of the future in which something like the opposite is true .. [with] vast and distributed hierarchies of exploitation. .. Rajaniemi’s books .. provide a rich counter-extrapolation of what a profoundly different society might look like. .. I don’t know what the future will look like, but I suspect it will be weird in ways that echo Rajaniemi’s way of thinking (which generates complexities) rather than Hanson’s (which breaks them down).

If we can only see forces that shape and constrain the future, but not the distribution of future outcomes, what is the point of looking at samples from the “range of possibilities”? That only seems useful if in fact you can learn things about that range. In which case you are learning about the overall distribution. Isn’t Farrell’s claim about more future “hierarchies of exploitation” relative to “markets” just the sort of overall outcome he claims we can’t know? (Rajaniemi blurbed and likes my book, so I don’t think he sees it as such a polar opposite. And how does hierarchy “generate complexities” while markets “break them down”?) Is Farrell really claiming that there is no overall tendency toward more efficient practices and institutions, making moves away from them just as likely as moves toward them? Are all the insights economic historians think they have gained using efficiency to understand history illusory?

My more charitable interpretation is that Farrell sees me as making forecasts much more confidently than I intend. I’ve constructed a point prediction, but my uncertainty is widely distributed around that point, while Farrell sees me as claiming more concentration. I’ll bet Farrell does in fact see a tendency toward efficiency, and he thinks looking at cases does teach us about distributions. And he probably even thinks supply and demand is often a reasonable first cut approximation. So I’m guessing that, with the right caveat about confidence, he actually thinks my point prediction makes a useful contribution to our understanding of the future.

One clarification. Farrell writes:

One of the unresolved tensions .. Are [ems] free agents, or are they slaves? I don’t think that Hanson’s answer is entirely consistent (or at least I wasn’t able to follow the thread of the consistent argument if it was). Sometimes he seems to suggest that they will have successful means of figuring out if they have been enslaved, and refusing to cooperate, hence leading to a likely convergence on free-ish market relations. Other times, he seems to suggest that it doesn’t make much difference to his broad predictive argument whether they are or are not slaves.

Much of the book doesn’t depend on if ems are slaves, but some parts do, such as the part on how ems might try to detect if they’ve been unwittingly enslaved.


Unauthorized Topics

Tyler posted:

Do I think Robin Hanson’s “Age of Em” actually will happen? A reader has been asking me this question, and my answer is…no! Don’t get me wrong, I still think it is a stimulating and wonderful book. .. But it is best not read as a predictive text, much as Robin might disagree with that assessment. Why not? I have three main reasons, all of which are a sort of punting, nonetheless on topics outside one’s areas of expertise deference is very often the correct response. Here goes: 1. I know a few people who have expertise in neuroscience, and they have never mentioned to me that things might turn out this way.

I titled my response Tyler Says Never Ems, but on twitter he objected:

“no reason to think it will happen” is best summary of my view, not “never will happen.”
…that was one polite way of saying I do not think the scientific consensus is with you on this issue…

I responded:

How does that translate into a probability?
You have to clarify the exact claim you have in mind before we can discuss what the scientific consensus says about it.

But all he would answer is:

“Low”?

Now at GMU econ we often have academics who visit for lunch and take the common academic stance of reluctance to state opinions which they can’t back up with academic evidence. Tyler is usually impatient with that, and pushes such visitors to make best estimates. Yet here it is Tyler who shows reluctance. I hypothesize that he is following this common principle:

One does not express serious opinions on topics not yet authorized by the proper prestigious people.

Once a topic has been authorized, then unless a topic has a moral coloring it is usually okay to express a wide range of opinions on it; it is even expected that clever people will often take contrarian or complex positions, sometimes outside their areas of expertise. But unless the right serious people have authorized a topic, that topic remains “silly”, and can only be discussed in a silly mode.

Now sometimes a topic remains unauthorized because serious people think everything about it has a low probability. But there are many other causes for topics to be seen as silly. For example, sex was long seen as a topic serious people didn’t discuss, even though we were quite sure sex exists. And even though most everyone is pretty sure aliens must exist out there somewhere, aliens remain a relatively silly subject.

In the case of ems, I interpret Tyler above as noting that the people who seem to him the proper authorities have not yet authorized serious discussion of ems. That is what he means by pointing to experts, saying “no reason” and “scientific consensus,” and yet being unwilling to state a probability, or even clarify which claim he rejects, even though I argued a 1% chance is enough. It explains his initial emphasis on treating my book metaphorically. This is less about probabilities, and more about topic authorization.

Compare the topic of ems to the topic of super-intelligence, wherein a single hand-coded AI quickly improves itself so fast that it can take over the world. As this topic seems recently endorsed by Elon Musk, Bill Gates, and Stephen Hawking, it is now seen more as an authorized topic. Even though, if you are inclined to be skeptical, we have far more reasons to doubt we will eventually know how to hand-code software as broadly smart as humans, or vastly better than the entire rest of the world put together at improving itself. Our reason for thinking ems eventually feasible is far more solid.

Yet I predict Tyler would more easily accept an invitation to write or speak on super-intelligence, compared to ems. And I conclude many readers see my book primarily as a bid to put ems on the list of serious topics, and they doubt enough proper prestigious people will endorse that bid. And yes, while if we could talk probabilities I think I have a pretty good case, even my list of prestigious book blurbers probably isn’t enough. Until someone of the rank of Musk, Gates, or Hawking endorses it, my topic remains silly.


Tyler Says Never Ems

There are smart intellectuals out there who think economics is all hogwash, and who resent economists continuing on while their concerns have not been adequately addressed. Similarly, people in philosophy of religion and philosophy of mind resent cosmologists and brain scientists continuing on as if one could just model cosmology without a god, or reduce the mind to physical interactions of brain cells. But in my mind such debates have become so stuck that there is little point in waiting until they are resolved; some of us should just get on with assuming particular positions, especially positions that seem so very reasonable, even obvious, and seeing where they lead.

Similarly, I have heard people debate the feasibility of ems for many decades, and such debates have similarly become stuck, making little progress. Instead of getting mired in that debate, I thought it better to explore the consequences of what seems to me the very reasonable position that ems will eventually be possible. Alas, that mud pit has strong suction. For example, Tyler Cowen:

Do I think Robin Hanson’s “Age of Em” actually will happen? … my answer is…no! .. Don’t get me wrong, I still think it is a stimulating and wonderful book.  And if you don’t believe me, here is The Wall Street Journal:

Mr. Hanson’s book is comprehensive and not put-downable.

But it is best not read as a predictive text, much as Robin might disagree with that assessment.  Why not?  I have three main reasons, all of which are a sort of punting, nonetheless on topics outside one’s areas of expertise deference is very often the correct response.  Here goes:

1. I know a few people who have expertise in neuroscience, and they have never mentioned to me that things might turn out this way (brain scans uploaded into computers to create actual beings and furthermore as the dominant form of civilization).  Maybe they’re just holding back, but I don’t think so.  The neuroscience profession as a whole seems to be unconvinced and for the most part not even pondering this scenario. ..

3. Robin seems to think the age of Em could come about reasonably soon. …  Yet I don’t see any sign of such a radical transformation in market prices. .. There are for instance a variety of 100-year bonds, but Em scenarios do not seem to be a factor in their pricing.

But the author of that Wall Street Journal review, Daniel J. Levitin, is a neuroscientist! You’d think that if his colleagues thought the very idea of ems iffy, he might have mentioned caveats in his review. But no, he worries only about timing:

The only weak point I find in the argument is that it seems to me that if we were as close to emulating human brains as we would need to be for Mr. Hanson’s predictions to come true, you’d think that by now we’d already have emulated ant brains, or Venus fly traps or even tree bark.

Because readers kept asking, in the book I give a concrete estimate of “within roughly a century or so.” But the book really doesn’t depend much on that estimate. What it mainly depends on is ems initiating the next huge disruption on the scale of the farming or industrial revolutions. Also, if the future is important enough to have a hundred books exploring scenarios, it can be worth having books on scenarios with only a 1% chance of happening, and taking those books seriously as real possibilities.

Tyler has spent too much time around media pundits if he thinks he should be hearing a buzz about anything big that might happen in the next few centuries! Should he have expected to hear about cell phones in 1960, or smart phones in 1980, from a typical phone expert then, even without asking directly about such things? Both of these were reasonably foreseen many decades in advance, yet you’d find it hard to see signs of them several decades before they took off in casual conversations with phone experts, or in phone firm stock prices. (Betting markets directly on these topics would have seen them. Alas, we still don’t have such things.)

I’m happy to accept neuroscientist expertise, but mainly on how hard it is to scan brain cells and model them on computers. This isn’t going to come up in casual conversation, but if asked neuroscientists will pretty much all agree that it should eventually be possible to create computer models of brain cells that capture their key signal processing behavior, i.e., the part that matters for signals received by the rest of the body. They will say it is a matter of when, not if. (Remember, we’ve already done this for the key signal processing behaviors of eyes and ears.)

Many neuroscientists won’t be familiar with computer modeling of brain cell activity, so they won’t have much of an idea of how much computing power is needed. But for those familiar with computer modeling, the key question is: once we understand brain cells well, what are plausible ranges for 1) the number of bits required to store the current state of each inactive brain cell, and 2) how many computer processing steps (or gate operations) per second are needed to mimic an active cell’s signal processing.

Once you have those numbers, you’ll need to talk to people familiar with computing cost projections to translate these computing requirements into dates when they can be met cheaply. And then you’d need to talk to economists (like me) to understand how that might influence the economy. You shouldn’t remotely expect typical neuroscientists to have good estimates there. And finally, you’ll have to talk to people who think about other potential big future disruptions to see how plausible it is that ems will be the first big upcoming disruption on the scale of the farming or industrial revolutions.
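
To show the structure of that chain of estimates, here is a minimal back-of-the-envelope sketch; every number in it is a placeholder assumption of mine, not an estimate from the book or from neuroscientists. The point is only how the pieces combine: cells times operations per cell gives a compute requirement, and a cost-decline trend turns that into a date.

```python
# Back-of-the-envelope sketch; every number here is a placeholder assumption,
# included only to show how the pieces combine into a date estimate.
import math

cells = 1e11                       # assumed count of brain cells to emulate
ops_per_cell_per_sec = 1e4         # assumed processing steps to mimic one cell's signaling
required_ops_per_sec = cells * ops_per_cell_per_sec

dollars_per_ops_per_sec_today = 1e-9   # assumed hardware cost today, $ per (op/sec)
halving_time_years = 2.0               # assumed cost-halving time for computing
budget_per_em = 1e5                    # assumed hardware budget ($) at which ems get cheap

cost_today = required_ops_per_sec * dollars_per_ops_per_sec_today
years_until_cheap = max(0.0, halving_time_years * math.log2(cost_today / budget_per_em))

print(f"compute needed:            {required_ops_per_sec:.1e} ops/sec")
print(f"hardware cost today:       ${cost_today:,.0f}")
print(f"years until within budget: {years_until_cheap:.0f}")
```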


Future Fears

People tend to act to help themselves. Sometimes that is good, and sometimes it is bad. We economists distinguish situations where such acts on net 1) help others, 2) hurt others less than they help oneself, and 3) hurt others more than they help oneself. We see only type #3 acts as bad, and the others as good.

However, I’m coming to realize that most people actually use a different criterion; they care more about loyalty than efficiency. That is, they ask: are the acts subject to “our” prestige control? How well can “we”, by applying or changing our common notion of prestige, shame people to make them stop, or praise people to make them start?

We fear powerful people who feel free to defy us, who can make big changes to the world while putting only minor weight on our prestige influence. We are afraid of this even when their actions have so far been of type #1, benefiting us. We fear that their inclination to be helpful could change after they accumulate enough power.

This is the standard attitude of foragers, as described by Boehm in Hierarchy in the Forest, where the main fear was individuals strong enough to defy the consensus of their local band. It is also echoed in the classic “illicit dominator” fictional villain. (A “dominator” needs only a source of power that can defy prestige.) In schoolyards, kids have long sought to ridicule nerds who submit to teachers, instead of joining other kids in resisting teacher dominance.

In the classic TV show Survivor, participants tended to vote off the island those opponents strong enough to earn immunity from group votes, no matter what their other virtues. Similarly, in office politics workers who feel productive enough to not need to make arbitrary displays of submission are often seen as “difficult”; putting them in their place becomes a priority.

In larger politics today, the main villains are powers who feel free to defy national or world cultural norms regarding proper behavior. Criminals (and “terrorists”) and foreign powers, especially in war, obviously, but also one’s own government unless it uses democracy or something to show its submission to local prestige. In the past, when religion was stronger, churches demanded so much submission that they were vulnerable to being labelled illicit dominators. Politics has often been about gaining support for one power via seeing it as protecting us from other powers.

Today, our other main candidate for illicit dominators are for-profit firms. Bigness triggers forager suspicions all by itself, ordering employees about adds a vivid image of dominance, and a for-profit status declares the limited influence of prestige. So we are very suspicious of big organization choices, especially for-profits, and especially regarding employees. We want to regulate their prices and quality, and especially how they hire, fire, and promote. We mostly don’t trust competition between firms to induce them to benefit us; yeah that might work sometimes, but more direct control feels more reliable. (Even if it actually isn’t.)

All of this makes it pretty easy to predict our fears regarding the future. Foreign powers create the classic apocalyptic conflict, and criminals going wild is the classic post-apocalyptic fear. A foreign power winning over us is the classic alien war allegory. Governments being non-democratic, and acquiring new powers, describes most of the new young adult dystopias. Sometimes there’s a new church with too much power, defying reader prestige rankings.

But if you imagine religions, governments, and criminals not getting too far out of control, and a basically capitalist world, then your main fears are probably going to be about for-profit firms, especially regarding how they treat workers. You’ll fear firms enslaving workers, or drugging them into submission, or just tricking them with ideology. In this way firms might make workers into hyper submissive “inhuman robots”, with no creativity, initiative, or leisure, possibly even no socializing, sex, music, or laughter, and maybe just maybe no consciousness at all.

And if you are one of the rare people who don’t even fear firms, because you see competition as disciplining them, well you can just fear technology itself being out of control. No one has been driving the technology train; tech mostly just appears and gets used when some find that in their interest, regardless of the opinions of larger communities of prestige. One can fear that this sort of competition and tech driven change will be the force that makes human workers into “inhuman robots.” Making you eager for a world government (or a super-intelligence) to take control of tech change.

This framework seems to successfully predict the main future fears raised early in the industrial revolution. And also the main concerns about the scenario of my book. Of course the fact that we may be primed to have such concerns, regardless of their actual relevance, doesn’t make them wrong. But it does mean we should look at them carefully.


Guardian on Age of Em

Age of Em is the “book of the day” today at the Guardian newspaper, the 5th most widely read one in the world. Reviewer Steven Poole hates the em world:

The Age of Em is a fanatically serious attempt .. to use economic and social science to forecast in fine detail how this world (if it is even possible) will actually work. The future it portrays is very strange and, in the end, quite horrific for everyone involved. .. This hellish cyberworld is quite cool to think about in a dystopian Matrixy way, although the book is much drier than fiction.

I’m fine with people not liking the em world, if they understand it. But disliking the world also seems to translate into disliking my analysis. My point by point responses:

Hanson says it reads more like an encyclopedia. But if it’s an encyclopedia, what are its sources?

References take 31 pages, others have complained of too many cites, and you complain of dry text. Yet you really wanted more cites & references?

“Today,” he complains, “we take far more effort to study the past than the future, even though we can’t change the past.” Yes, you might respond: that is because we literally cannot “study” the future – because either it doesn’t exist or (in the block-universe model of time) it does exist but is completely inaccessible to us.

We infer theories from data on the present and past. The whole reason for theory is to help us infer things where we don’t have data. Like the future. That is what theorists do. So we can study the future by applying our best theories, as I tried to do in the book.

Given that, the book’s confidence in its own brilliantly weird extrapolations is both impressive and quite peculiar. Hanson describes his approach as that of “using basic social theory, in addition to common sense and trend projection, to forecast future societies”. The casual use of “common sense” there should, as always, ring alarm bells. And a lot of the book’s sense is arguably quite uncommon.

Here you insinuate that much is wrong, but you don’t actually point out anything specific as wrong.

The governing tone is strikingly misanthropic, despairing of current humans’ “maladaptation” to the environment.

How is it remotely “hating” of people to see recent behavior as more evolutionarily maladaptive?

And there is an unargued assumption throughout that social patterns and institutions are more likely to revert to pre-industrial norms in the future.

I argue explicitly in some detail for some attitudes reverting to those more typical of poor farmers, when ems get poor. But the only institutions that might revert would be those driven mainly by attitudes, such as perhaps democracy.

Hanson .. erects a large edifice of sociological speculation on how the liberal use of em copies and backups will change attitudes to sex, law, death and pretty much everything else. But .. if someone announces they will upload my consciousness into a robot and then destroy my existing body, I will take this as a threat of murder. .. So ems – the first of whom are, by definition, going to have minds identical to those of humans – may very well exhibit the same kind of reaction, in which case a lot of Hanson’s more thrillingly bizarre social developments will not happen.

Yes, you feel strongly, but everyone need not share your feelings. Yes, the first brain scans will be destructive, but out of a world population of billions it only takes a few biological humans willing to be scanned this way to fill the em world. And if there were only a few of them, they’d each earn trillions.

But then, the rather underwhelming upshot of this project is that fast-living and super-clever ems will probably crack the problem of proper AI – actual intelligent machines – within a year or so of ordinary human time.

I didn’t say “probably” here; I gave that as one identifiable possibility.

Given that this future is so gloomy for just about everyone, one does end up wondering why Hanson wants to wake up in it – he reveals in the book that he has arranged to be cryogenically frozen on his death. I suppose it is at least possible that, one day, he could open his eyes and have the last laugh, as he surveys the appalling future he foresaw so long ago.

Because I describe a world you don’t like I must be a people hater pleased to see everyone suffer? Really?! For the record, I don’t now see the em world as appalling, and if I changed my mind on that upon seeing it up close, I’d be quite disappointed.
