Tag Archives: Future

Lognormal Jobs

I often meet people who think that because computer tech is improving exponentially, its social impact must also be exponential. So as soon as we see any substantial social impact, watch out, because a tsunami is about to hit. But it is quite plausible to have exponential tech gains translate into only linear social impact. All we need is a lognormal distribution, as in this diagram:


Imagine that each kind of job that humans do requires a particular level of computing power in order for computers to replace humans on that job. And imagine that these job power levels are distributed lognormally.

In this case an exponential growth in computing power will translate into a linear rate at which computers displace humans on jobs. Of course jobs may clump along this log-computing-power axis, giving rise to bursts and lulls in the rate at which computers displace jobs. But over the long run we could see a relatively steady rate of job displacement even with exponential tech gains. Which I’d say is roughly what we do see.
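This translation from exponential tech gains to linear social impact can be sketched in a few lines. In the sketch below, all parameter values are made-up illustrations, not estimates: if log automation thresholds are normal, then the fraction of jobs displaced is the lognormal CDF evaluated at current computing power, and since exponential tech growth means log power rises linearly in time, displacement rises roughly linearly over the middle of the distribution.

```python
import math

# Illustrative assumption: log job-automation thresholds are normal
# with these (made-up) parameters, in log-computing-power units.
MU, SIGMA = 0.0, 5.0

def fraction_displaced(log_power):
    """Lognormal CDF: fraction of jobs whose threshold lies below the
    current computing power, via the normal CDF in log space."""
    return 0.5 * (1.0 + math.erf((log_power - MU) / (SIGMA * math.sqrt(2))))

# Exponential growth in computing power = linear growth in log power.
# Over the middle of the distribution, each equal time step displaces
# a roughly equal fraction of jobs: linear impact from exponential tech.
for year in range(0, 30, 5):
    log_power = -10 + 0.7 * year  # illustrative starting point and rate
    print(year, round(fraction_displaced(log_power), 3))
```

Near the median, the per-period displacement increments are nearly equal, which is the steady displacement rate described above; only far out in the tails does the rate taper off.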

Added 3am: Many things are distributed lognormally.


The Labor-From-Factories Explosion

As I’ve discussed before, including in my book, the history of humanity so far can be roughly summarized as a sequence of three exponential growth modes: foragers with culture started a few million years ago, farming started about ten thousand years ago, and industry started a few hundred years ago. Doubling times got progressively shorter: a quarter million years, then a millennium, and now fifteen years. Each time the transition lasted less than a previous doubling time, and roughly similar numbers of humans have lived during each era.

Before humans, animal brains grew exponentially, but even more slowly, doubling about every thirty million years, starting about a half billion years ago. And before that, genomes seem to have doubled exponentially about every half billion years, starting about ten billion years ago.

What if the number of doublings in the current mode, and in the mode that follows it, are comparable to the number of doublings in the last few modes? What if the sharpness of the next transition is comparable to the sharpness of the last few transitions, and what if the factor by which the doubling time changes next time is comparable to the last few factors? Given these assumptions, the next transition will happen sometime in roughly the next century. Within a period of five years, the economy will be doubling every month or faster. And that new mode will only last a year or so before something else changes.
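This extrapolation can be made concrete with a back-of-the-envelope calculation using the doubling times given above (a quarter million years, a millennium, fifteen years). Taking the geometric mean of past speed-up factors as the next factor is only an assumed stand-in for "comparable":

```python
# Doubling times (years) of the three historical growth modes,
# as given in the text above.
doubling_times = [250_000, 1_000, 15]  # foraging, farming, industry

# Speed-up factor at each past transition: ~250x, then ~67x.
factors = [a / b for a, b in zip(doubling_times, doubling_times[1:])]

# Assumption: the next factor is comparable; here, the geometric mean.
next_factor = (factors[0] * factors[1]) ** 0.5  # ~129x
next_doubling_months = 12 * doubling_times[-1] / next_factor
print(f"next mode might double every ~{next_doubling_months:.1f} months")
```

With these numbers the next mode doubles every month or two, matching the "doubling every month or faster" claim above.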

To summarize, usually in history we see relatively steady exponential growth. But five times so far, steady growth has been disturbed by a rapid transition to a much faster rate of growth. It isn’t crazy to think that this might happen again.

Plausibly, new faster exponential modes appear when a feedback loop that was previously limited and blocked becomes unlocked and strong. And so one way to think about what might cause the next faster mode after ours is to look for plausible feedback loops. However, if there are thousands of possible factors that matter for growth and progress, then there are literally millions of possible feedback loops.

For example, denser cities should innovate more, and more innovation can find better ways to make buildings taller, and thus increase city density. More and better tutorial videos make it easier to learn varied skills, and some of those skills help to make more and better tutorial videos. We can go all day making up stories like these.

But as we have only ever seen maybe five of these transitions in all of history, powerful feedback loops whose unlocking causes a huge growth rate jump must be extremely rare. The vast majority of feedback loops do not create such a huge jump when unlocked. So just because you can imagine a currently locked feedback loop does not make unlocking it likely to cause the next great change.

Many people lately have fixated on one particular possible feedback loop: an “intelligence explosion.” The more intelligent a creature is, the more it is able to change creatures like itself to become more intelligent. But if you mean something more specific than “mental goodness” by “intelligence”, then this remains only one of thousands of possibilities. So you need strong additional arguments to see this feedback loop as more likely than all the others. And the mere fact that you can imagine this feedback being positive is not remotely enough.

It turns out that we already know of an upcoming transition of a magnitude similar to the previous transitions, scheduled to arrive roughly when prior trends led us to expect a new transition. This explosion is due to labor-from-factories.

Today we can grow physical capital very fast in factories, usually doubling capital on a scale ranging from a few weeks to a few months, but we grow human workers much more slowly. Since capital isn’t useful without more workers, we are forced to grow today mainly via innovation. But if in the future we find a way to make substitutes for almost all human workers in factories, the economy can grow much faster. This is called an AK model, and standard growth theory says it is plausible that this could let the economy double every month or so.
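The AK intuition can be sketched in a few lines. In an AK model, output is Y = A·K and capital accumulates as dK/dt = sY − δK, so the economy grows exponentially at the constant rate g = sA − δ. The parameter values below are purely illustrative, chosen only to show how a jump in effective productivity A (say, from factory-made worker substitutes) yields month-scale doubling times:

```python
import math

def ak_growth_rate(A, s, delta):
    """Growth rate of an AK economy: Y = A*K, dK/dt = s*Y - delta*K,
    so capital (and output) grows at the constant rate g = s*A - delta."""
    return s * A - delta

# Illustrative parameters only. With human workers scarce, effective A
# is low, giving roughly the current ~15-year economic doubling time:
g_now = ak_growth_rate(A=0.38, s=0.2, delta=0.03)

# If factories can also make worker substitutes, effective A jumps,
# and the same model yields doubling times of about a month:
g_future = ak_growth_rate(A=45.0, s=0.2, delta=0.03)

for g in (g_now, g_future):
    print(f"doubling time: {12 * math.log(2) / g:.1f} months")
```

The point is not the particular numbers but that in this model the growth rate scales directly with how productively fast-growing capital can be used, once worker scarcity no longer binds.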

So if it is plausible that artificial intelligence as capable as humans will appear in the next century or so, then we already know what will cause the next great jump to a faster growth mode. Unless of course some other rare powerful feedback loop is unlocked before then. But if an intelligence explosion isn’t possible until you have machines at least as smart as humans, then that scenario won’t happen until after labor-from-factories. And even then it is far from obvious that feedback can cause one of the few rare big growth rate jumps.


Missing Engagement

On the surface, there seems to have been a big debate over the last few years on how fast automation will displace jobs over the next decade or so. Some have claimed very rapid displacement, much faster than we’ve seen in recent decades (or centuries). Others have been skeptical (like me here, here, here, and here).

On October 13, David Mindell, Professor at MIT of both Aeronautics and Astronautics and also of the History of Engineering and Manufacturing, weighed in on this debate, publishing Our Robots, Ourselves: Robotics and the Myths of Autonomy:

If robotics in extreme environments are any guide, Mindell says, self-driving cars should not be fully self-driving. That idea, he notes, is belied by decades of examples involving spacecraft, underwater exploration, air travel, and more. In each of those spheres, fully automated vehicles have frequently been promised, yet the most state-of-the-art products still have a driver or pilot somewhere in the network. This is one reason Mindell thinks cars are not on the road to complete automation. ..

“There’s an idea that progress in robotics leads to full autonomy. That may be a valuable idea to guide research … but when automated and autonomous systems get into the real world, that’s not the direction they head. We need to rethink the notion of progress, not as progress toward full autonomy, but as progress toward trusted, transparent, reliable, safe autonomy that is fully interactive: The car does what I want it to do, and only when I want it to do it.” (more)

In his book, Mindell expertly supports his position with a detailed review of the history of automation in planes, spacecraft and submarines. You might think that Mindell’s prestige, expertise, and detailed book on past automation rates and patterns would earn him a place in this debate on future rates of automation progress. Many of those who blurbed the book clearly think so:

“Mindell’s ingenious and profoundly original book will enlighten those who prophesy that robots will soon make us redundant.”—David Autor

“My thanks to the author for bringing scholarship and sanity to a debate which has run off into a magic la-la land in the popular press.”—Rodney Brooks

But looking over dozens of reviews of Mindell’s book in the 75 days since it was published, I find no thoughtful response from the other side! None. No one who expects rapid automation progress has bothered to even outline why they find Mindell’s arguments unpersuasive.

Perhaps this shows that people on the other side know Mindell’s arguments to be solid, making any response unpersuasive, and so they’d rather ignore him. Maybe they just don’t think the past is any guide to the future, at least in automation, making Mindell’s discussion of the past irrelevant to the debate. I’ve known people who think this way.

But perhaps a more plausible interpretation is that on subjects like this in our intellectual world, usually there just is no “debate”; there are just different sides who separately market their points of view. Just as in ordinary marketing, where firms usually pitch their products without mentioning competing products, intellectuals marketing points of view also usually ignore competing points of view. Instead of pointing out contrary arguments and rebutting them, intellectuals usually prefer to ignore contrary arguments.

This seems a sad state of affairs with respect to intellectual progress. But of course such progress is a public good, where individual contributions must trade a personal cost against a collective benefit, encouraging each of us to free-ride on the efforts of others. We might create intellectual institutions that better encourage more engagement with and response to contrary arguments, but unless these are global institutions others may prefer to free-ride and not contribute to local institutions.

You might think that academic norms of discourse are such global institutions encouraging engagement. And academics do give much lip service to that idea. But in fact it is mostly empty talk; academics don’t actually encourage much engagement and response beyond the narrow scope of prestigious folks in the same academic discipline.


The Future of Language

More from Henrich’s The Secret Of Our Success:

Linguists and linguistic anthropologists .. have often assumed that all languages are more or less equal, along all the dimensions that we might care about – equally learnable, efficient, and expressive. .. Recently .. cracks in these intellectual barricades have begun to multiply. .. Like [other kinds of cultural] toolkits, the size and interconnectedness of populations favors culturally evolving and sustaining larger vocabularies, more phonemes, shorter words, and certain kinds of more complex grammatical tools, like subordinating conjunctions. (p. 233, 259)

The most ancient languages we know of are visibly impoverished compared to modern languages today. It just takes longer to say similar complex things in those languages. Assuming that the size and interconnectedness of populations speaking the main languages continues to increase into the future (as they do in my em scenario), we can make some obvious predictions about future languages.

Future languages should make more distinctions such as between colors, and have larger vocabularies, more phonemes, and shorter words. They should also have more grammatical tools such as adjectives, tenses, prepositions, pronouns, and subordinating conjunctions. Technology to assist us in more clearly hearing the words that others speak should also push to increase the number of phonemes, and thus shorten future words.

For obvious reasons, science fiction almost always fails to show these features of future language.

If you search for “future of language” you’ll find many articles noting that the world is losing many unpopular languages, and speculating on which of today’s languages will be the most popular later. There is also this creative attempt to guess specific changes. But oddly I can’t find any articles that discuss the basic trends I mention above.


Tax Coastal Cities?

(Nobel-winner) Thomas Schelling just gave a talk here at GMU Econ on “Two Major Infrastructure Worldwide Projects to Prepare for Global Warming.” He said most work on global warming focuses on how to prevent it, and that there’s been a bit of a taboo on looking at how to mitigate harm if it happens.

He defied that taboo, and talked about two harms from global warming: 1) crop drought due to snowpacks melting earlier in the annual cycle, and 2) sea levels rising if the Greenland or Antarctic ice sheets suddenly slip into the sea. For both problems Schelling wants central governments to start planning possible large engineering projects.

On overly-early farm-water, he wants new canals and reservoirs dug to hold water until farmers want it and then deliver that water to them. For rising sea levels he wants dikes etc. to keep coastal cities dry. Such city protection systems could be at the scale of the harbor of a single city, or at the scale of blocking the Strait of Gibraltar to protect the entire Mediterranean Sea.

On protecting coastal cities, John Nye pointed out that if governments are willing to do anything now they should consider taxing coastal cities to collect revenue to pay for future mitigation. This has the further big benefit of discouraging risky coastal development. And if governments aren’t willing to do this obvious easy thing now, what hope is there of them doing much useful later?

Most of the coastal city structures that would be hurt via rising sea levels probably haven’t been built yet. So trying to get governments to start planning to protect coastal cities runs the risk of encouraging too much coastal development, which then becomes insufficiently protected or protected at excess expense.

The fact that central governments are not coordinating much to reduce global warming suggests that they will also fail to coordinate at large scales to mitigate harm from warming. So a simpler safer solution might be to have central governments try to commit to not protect coastal cities in advance. Don’t even start central government initiatives to coordinate and plan for coastal protection, and stop current central government coastal protection programs, such as subsidized hurricane insurance.

If coastal cities want to tax themselves to pay for their own local mitigation, fine, but to the extent we expect that more central governments won’t be able to resist helping later, have them tax low-lying coastal development in advance to pay for that. Let everyone know it’s time to start focusing new development away from low coasts.

The problem of building reservoirs for farmers seems more easily dealt with via private property in water. If private parties can pay to dig reservoirs to sell water to private farmers at market prices, it isn’t clear why much central government coordination is required.

Added: Seems Glenn Reynolds proposed to tax coastal development a month ago. HT Robert Koslover in the first comment below.


How Plastic Are Values?

I thought I understood cultural evolution. But in his new book, The Secret Of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter, Joseph Henrich schooled me. I felt like I learned more from his book than from the last dozen books I’ve read. For example, on the cultural plasticity of pleasure and pain:

Chili peppers were the primary spice of New World cuisines prior to the arrival of Europeans and are now routinely consumed by about a quarter of all adults globally. Chili peppers have evolved chemical defenses, based on capsaicin, that make them aversive to mammals and rodents but desirable to birds. In mammals, capsaicin directly activates a pain channel (TrpV1), which creates a burning sensation in response to various specific stimuli, including acid, high temperatures, and allyl isothiocyanate (which is found in mustard and wasabi). These chemical weapons aid chili pepper plants .. because birds provide a better dispersal system for the plants’ seeds. .. People come to enjoy the experience of eating chili peppers mostly by reinterpreting the pain signals caused by capsaicin as pleasure or excitement. .. Children acquire this preference gradually, without being pressured or compelled. They want to learn to like chili peppers, to be like those they admire. .. Culture can overpower our innate mammalian aversions when necessary and without us knowing it. ..

Runners like me enjoy running, but normal people think running is painful and something to be avoided. Similarly weight lifters love that muscle soreness they get after a good workout. .. Experimental work shows that believing a pain-inducing treatment “helps” one’s muscles activates our opioid and/or our cannabinoid systems, which suppress the pain and increase our pain tolerance. ..

Those who saw the tough model [who reported lower pain ratings] showed (1) .. bodies stopped reacting to the threat, (2) lower and more stable heart rates, and (3) lower stress ratings. Cultural learning from the tough model changed their physiological reactions to electric shocks.

Henrich’s basic story is that from a very early age we look to see who around us other people are looking at, and we try to copy everything about those high prestige folks, including their values and preferences. In his words:

Humans are adaptive cultural learners who acquire ideas, beliefs, values, social norms, motivations, and worldviews from others in their communities. To focus our cultural learning, we use cues of prestige, success, sex, dialect, and ethnicity, among others, and especially attend to particular domains, such as those involving food, sex, danger, and norm violations. .. Humans are status seekers and are strongly influenced by prestige. But what’s highly flexible is which behaviors or actions lead to high prestige. .. The social norms we acquire often come with internalized motivations and ways of viewing the world (guiding our attention and memory), as well as with standards for judging and punishing others. People’s preferences and motivations are not fixed.

The examples above show cultural influence can greatly change the intensity of pain and pleasure, and even flip pain into pleasure, and vice versa. Though the book doesn’t mention it, we see similar effects regarding sex – some people come to see pain as pleasure, and others see pleasure as pain.

All of this suggests that human preferences are surprisingly plastic. Not completely plastic mind you, but still, we have a big capacity to change what we see as pleasure or pain, as desirable or undesirable. Yes we usually can’t just individually will ourselves to love what we hated a few hours ago. But the net effect of all our experience over a lifetime is huge.

It seems that this should make us worry less about whether future folks will be happy. Even if it seems that future folks will have to do or experience things that we today would find unpleasant, future culture could change people so that they find these new things pleasant instead. Yes, if change happens very fast it might take culture time to adapt, and there could be a lot of unhappy people during the transition. And yes there are probably limits beyond which culture can’t make us like things. But within a wide range of actions and experiences, future folks can learn to like whatever it is that their world requires.


Science Fiction Is Fantasy

Why do people like fantasy novels? One obvious explanation is that “magic” relaxes the usual constraints on the stories one can tell. Story-tellers can either use this freedom to explore a wider range of possible worlds, and so feed reader hungers for variety and strangeness, or they can focus repeatedly on particular story settings that seem ideal places for telling engaging stories, settings that are just not feasible without magic.

It is widely acknowledged that science fiction is by far the closest literary genre to fantasy. One plausible explanation for this is that future technology serves the same function in science fiction that magic serves in fantasy: it can be an “anything goes” sauce to escape the usual story constraints. So future tech can either let story tellers explore a wider space of strangeness, or return repeatedly to settings that feel particularly attractive, and are infeasible without future tech.

Of course it might be that some readers actually care about the real future, and want to hear stories set in that real future. But the overwhelming levels of implausible unrealism I find in almost all science fiction (and fantasy) suggest that this is a negligible fraction of readers, a fraction writers rarely specialize in targeting. Oh writers will try to add a gloss of realism to the extent that it doesn’t cost them much in terms of other key story criteria. But when there are conflicts, other criteria win.

My forthcoming book, The Age of Em, tries to describe a realistic future setting in great detail. I expect some of those who use science fiction in order to consume strange variety will enjoy the strangeness of my scenario, at least if they can get over the fact that it doesn’t come packaged with plot and characters. But they are unlikely to want to return to that setting repeatedly, as it just can’t compete with places designed to be especially compelling for stories. My setting is designed to be realistic, and I’ll just have to see how many readers I can attract to that unusual feature.


Investors Not Barking

Detective: “Is there any other point to which you would wish to draw my attention?”

Holmes: “To the curious incident of the dog in the night-time.”

Detective: “The dog did nothing in the night-time.”

Holmes: “That was the curious incident.”

We’ve seen several centuries of continuing economic growth enabled by improving tech (broadly conceived). Some of that tech can be seen as “automation” where machines displace humans on valued tasks.

The economy has consistently found new tasks for humans, to make up for displaced tasks. But while the rate of overall economic growth has been relatively steady, we have seen fluctuations in the degree of automation displacement in any given industry and region. This has often led to local anxiety about whether we are seeing the start of a big trend deviation – are machines about to suddenly take over most human jobs fast?

Of course so far such fears have not yet been realized. But around the year 2000, near the peak of the dotcom tech boom, we arguably did see substantial evidence of investors suspecting a big trend-deviating disruption. During a big burst of computer-assisted task displacement, the tech sector should see a big increase in revenue. So anticipating a substantial chance of such a burst justifies bigger stock values for related firms. And this graph of the sector breakdown of the S&P500 over the last few decades shows that investors then put their money where their mouths were regarding such a possible big burst:


In the last few years, we’ve heard another burst of anxiety about an upcoming big burst of automation displacing humans on tasks. It is one of our anxieties du jour. But if you look at the right side of the graph above you’ll note that we are not now seeing a boom in the relative value of tech sector stocks.

We see the same signal if we look at majors chosen by college graduates. A big burst of automation not only justifies bigger tech stock values, it also justifies more students majoring in tech. And during the dotcom boom we did see a big increase in students choosing to major in computer science. But we have not seen such an increase during the last decade.

So the actions of both stock investors and college students suggest that they do not believe we are at substantial risk of a big burst of automation soon. These dogs are not barking. Even if robots taking jobs is what lots of talking heads are talking about. Because talking heads aren’t putting their money, or their time, where their mouths are.


Assimilated Futures

I’ve long said that it is backwards to worry that technology will change faster than society can adapt, because the ability of society to adapt is one of the main constraints on how fast we adopt new technologies. This insightful 2012 post by Venkatesh Rao elaborates on a related theme:

Both science fiction and futurism … fail to capture the way we don’t seem to notice when the future actually arrives. … The future always seems like something that is going to happen rather than something that is happening. …

Futurists, artists and edge-culturists … like to pretend that they are the lonely, brave guardians of the species who deal with the “real” future and pre-digest it for the rest of us. But … the cultural edge is just as frozen in time as the mainstream, … people who seek more stimulation than the mainstream, and draw on imagined futures to feed their cravings rather than inform actual future-manufacturing. …

When you are sitting on a typical modern jetliner, you are traveling at 500 mph in an aluminum tube that is actually capable of some pretty scary acrobatics. … Yet a typical air traveler never experiences anything that one of our ancestors could not experience on a fast chariot or a boat. Air travel is manufactured normalcy. …

This suggests that only those futures arrive for which there is human capacity to cope. This conclusion is not true, because a future can arrive before humans figure out whether they have the ability to cope. For instance, the widespread problem of obesity suggests that food-abundance arrived before we figured out that most of us cannot cope. And this is one piece of the future that cannot be relegated to specialists. …

Successful products are precisely those that do not attempt to move user experiences significantly, even if the underlying technology has shifted radically. In fact the whole point of user experience design is to manufacture the necessary normalcy for a product to succeed and get integrated. … What we get is a Darwinian weeding out of those manifestations of the future that break the continuity of technological experience. …

What about edge-culturists who think they are more alive to the real oncoming future? … The edge today looks strangely similar to the edge in any previous century. It is defined by reactionary musical and sartorial tastes and being a little more outrageous than everybody else in challenging the prevailing culture of manners. … If it reveals anything about technology or the future, it is mostly by accident. . …

At a more human level, I find that I am unable to relate to people who are deeply into any sort of cyberculture or other future-obsessed edge zone. There is a certain extreme banality to my thoughts when I think about the future. Futurists as a subculture seem to organize their lives as future-experience theaters. These theaters are perhaps entertaining and interesting in their own right, as a sort of performance art, but are not of much interest or value to people who are interested in the future in the form it might arrive in, for all.

It is easy to make the distinction explicit. Most futurists are interested in the future beyond the [manufactured normalcy field]. I am primarily interested in the future once it enters the Field, and the process by which it gets integrated into it. This is also where the future turns into money, so perhaps my motivations are less intellectual than they are narrowly mercenary. …

This also explains why so few futurists make any money. They are attracted to exactly those parts of the future that are worth very little. They find visions of changed human behavior stimulating. Technological change serves as a basis for constructing aspirational visions of changed humanity. Unfortunately, technological change actually arrives in ways that leave human behavior minimally altered. .. The mainstream never ends up looking like the edge of today. Not even close. The mainstream seeks placidity while the edge seeks stimulation. (more)

Yes, I’m a guilty-as-charged futurist focused on changes far enough distant that there’s little money to be made understanding them now. But I share Rao’s emotional distance from the future-obsessed cultural edge. I want to understand the future not as a morality tale to validate my complaints against today’s dominant culture; I instead want to foresee the assimilated future. That is, I want to see how future people will actually see their own world, after they’ve found ways to see it banally as a minimal change from the past.

Cultural futurists have complained that the future I describe in my upcoming book The Age of Em is too conservative in presuming the continuation of supply and demand, inequality, big organizations, status seeking, and so on. Don’t I know that tech will change everything, and soon? No, actually I don’t know that.

Added: To be clear, eventually fundamentals may well change. But the rate of such changes is low enough that in a medium term future most fundamental features probably haven’t changed yet.


Max & Miller’s Mate

Geoffrey Miller’s book The Mating Mind was very influential on me, and so I spent several posts on his book Spent. He has a new book out, coauthored with Tucker Max, called Mate: become the man women want. It is a how-to book, on how men can attract women.

The book’s voice is less academic and more like a drill sergeant — stern older men giving harsh but needed instructions to younger men. They don’t mind using some crude language, and they don’t argue much for their claims, expecting readers to accept what they say on authority. Fortunately, most of what they say seems to be pretty well-grounded in the literature.

The world view they present has mating quite thoroughly infused with signaling. Pretty much everything you do with actual or potential mates is used as a reliable signal of your hidden features. Makes me wonder in what other self-help books it would be okay to present as strong a signaling view. Perhaps there are career advice books that infuse signaling as thoroughly into their view of the work world. But I expect people wouldn’t tolerate advice books on school, religion, arts, and charity that are this signaling heavy. Even if the advice was solid.

Though heavy on signaling, Max & Miller don’t consider self-deception. They talk simply about men just looking inside themselves to see what they want, and tell men to take what women seem to want at face value. But perhaps talking about self-deception to their target audience (young men who feel they are failing at mating) would just confuse more than help.

At several points Max & Miller warn their readers that women never evolved general ways to see and appreciate things like wealth and intelligence; women instead evolved to appreciate more specific signals like nice clothes and wit. So don’t go trying to show off your IQ score or bank balance.

They don’t advise women to fix this oversight, but instead advise men to fix how they show off. I suspect the idea is that humans are just more general and flexible on how to achieve their goals than on what exactly are their goals. And I suspect this is right. While one can imagine a creature that just wants “whatever helps me have many descendants”, humans are just not those creatures.

Two suggestive implications follow from this fact. First, if descendants of humans are ever blocked in their growth or expansion into the universe due to their failing to be sufficiently flexible or general, that failing will more likely come from their preferences, rather than their engineering or science. Second, as human incomes fall toward subsistence, our primary preferences for survival trump others, inducing effectively more general and flexible preferences. So subsistence income descendants have a better chance of avoiding generality failures.
