Tag Archives: Future

Economic Singularity Review

The Economic Singularity: Artificial intelligence and the death of capitalism .. This new book from best-selling AI writer Calum Chace argues that within a few decades, most humans will not be able to work for money.

A strong claim! This book mentions me by name 15 times, especially regarding my review of Martin Ford’s Rise of the Robots, wherein I complain that Ford’s main evidence for saying “this time is different” is all the impressive demos he’s seen lately, even though this was also the main reason given in each previous automation boom for saying “this time is different.” This seems to be Chace’s main evidence as well:

Faster computers, the availability of large data sets, and the persistence of pioneering researchers have finally rendered [deep learning] effective this decade, leading to “all the impressive computing demos” referred to by Robin Hanson in chapter 3.3, along with some early applications. But the major applications are still waiting in the wings, poised to take the stage. ..

It’s time to answer the question: is it really different this time? Will machine intelligence automate most human jobs within the next few decades, and leave a large minority of people – perhaps a majority – unable to gain paid employment? It seems to me that you have to accept that this proposition is at least possible if you admit the following three premises:

1. It is possible to automate the cognitive and manual tasks that we carry out to do our jobs.
2. Machine intelligence is approaching or overtaking our ability to ingest, process and pass on data presented in visual form and in natural language.
3. Machine intelligence is improving at an exponential rate. This rate may or may not slow a little in the coming years, but it will continue to be very fast.

No doubt it is still possible to reject one or more of these premises, but for me, the evidence assembled in this chapter makes that hard.

Well of course it is possible for this time to be different. But, um, why can’t these three statements have been true for centuries? It will eventually be possible to automate tasks, and we have been slowly but exponentially “approaching” that future point for centuries. And so we may still have centuries to go. As I recently explained, exponential tech growth is consistent with a relatively constant rate at which jobs are displaced by automation.

Chace makes a specific claim that seems to me quite wrong.

Geoff Hinton – the man whose team won the landmark 2012 ImageNet competition – went further. In May 2015 he said that he expects machines to demonstrate common sense within a decade. .. Facebook has declared its ambition to make Hinton’s prediction come true. To this end, it established a basic research unit in 2013 called Facebook Artificial Intelligence Research (FAIR) with 50 employees, separate from the 100 people in its Applied Machine Learning team. So within a decade, machines are likely to be better than humans at recognising faces and other images, better at understanding and responding to human speech, and may even be possessed of common sense. And they will be getting faster and cheaper all the time. It is hard to believe that this will not have a profound impact on the job market.

I’ll give 50-1 odds against full human level common sense AI within a decade! Chace, I offer my $5,000 against your $100. Also happy to bet on “profound” job market impact, as I mentioned in my review of Ford. Chace, to his credit, sees value in such bets:

The economist Robin Hanson thinks that machines will eventually render most humans unemployed, but that it will not happen for many decades, probably centuries. Despite this scepticism, he proposes an interesting way to watch out for the eventuality: prediction markets. People make their best estimates when they have some skin in the forecasting game. Offering people the opportunity to bet real money on when they see their own jobs or other people’s jobs being automated may be an effective way to improve our forecasting.

Finally, Chace repeats Ford’s error in claiming economic collapse if median wages fall:

But as more and more people become unemployed, the consequent fall in demand will overtake the price reductions enabled by greater efficiency. Economic contraction is pretty much inevitable, and it will get so serious that something will have to be done. .. A modern developed society is not sustainable if a majority of its citizens are on the bread line.

Really, an economy can do fine if average demand is high and growing, even if median demand falls. It might be ethically lamentable, and the political system may have problems, but markets can do just fine.

World Basic Income

Joseph said .. Let Pharaoh .. appoint officers over the land, and take up the fifth part of the land of Egypt in the seven plenteous years. .. And that food shall be for store to the land against the seven years of famine, which shall be in the land of Egypt; that the land perish not through the famine. And the thing was good in the eyes of Pharaoh. (Genesis 41)

[Medieval Europe] public authorities were doubly interested in the problem of food supplies; first, for humanitarian reasons and for good administration; second, for reasons of political stability because hunger was the most frequent cause of popular revolts and insurrections. In 1549 the Venetian officer Bernardo Navagero wrote to the Venetian senate: “I do not esteem that there is anything more important to the government of cities than this, namely the stocking of grains, because fortresses cannot be held if there are not victuals and because most revolts and seditions originate from hunger.” (p42, Cipolla, Before the Industrial Revolution)

63% of Americans don’t have enough saved to cover even a $500 financial setback. (more)

Even in traditional societies with small governments, protecting citizens from starvation was considered a proper role of the state, both to improve welfare and to prevent revolt. Today it could be more efficient if people used modern insurance institutions to protect themselves. But I can see many failing to do that, and so can see governments trying to insure their citizens against big disasters.

Of course rich nations today face little risk of famine. But as I discuss in my book, eventually when human level artificial intelligence (HLAI) can do almost all tasks more cheaply than humans, biological humans will lose pretty much all their jobs, and be forced to retire. While collectively humans will start out owning almost all the robot economy, and thus get rich fast, many individuals may own so little as to be at risk of starving, if not for individual or collective charity.

Yes, this sort of transition is a long way off; “this time isn’t different” yet. There may be centuries still to go. And if we first achieve HLAI via the relatively steady accumulation of better software, as we have been doing for seventy years, we may get plenty of warning about such a transition. However, if we instead first achieve HLAI via ems, as elaborated in my book, we may get much less warning; only five years might elapse between seeing visible effects and all jobs lost. Given how slowly our political systems typically change state redistribution and insurance arrangements, it might be wiser to just set up a system far in advance that could deal with such problems if and when they appear. (A system also flexible enough to last over this long time scale.)

The ideal solution is global insurance. Buy insurance for citizens that pays off only when most biological humans lose their jobs, and have this insurance pay enough so these people don’t starve. Pay premiums well in advance, and use a stable insurance supplier with sufficient reinsurance. Don’t trust local assets to be sufficient to support local self-insurance; the economic gains from an HLAI economy may be very concentrated in a few dense cities of unknown locations.

Alas, political systems are even worse at preparing for problems that seem unlikely anytime soon. Which raises the question: should those who want to push for state HLAI insurance ally with folks focused on other issues? And that brings us to “universal basic income” (UBI), a topic in the news lately, and about which many have asked me in relation to my book.

Yes, there are many difficult issues with UBI, such as how strongly the public would favor it relative to traditional poverty programs, whether it would replace or add onto those other programs, and if replacing how much that could cut administrative costs and reduce poverty targeting. But in this post, I want to focus on how UBI might help to insure against job loss from relatively sudden unexpected HLAI.

Imagine a small “demonstration level” UBI, just big enough for one side to say “okay we started a UBI, now it is your turn to lower other poverty programs, before we raise UBI more.” Even such a small UBI might be enough to deal with HLAI, if its basic income level were tied to the average income level. After all, an HLAI economy could grow very fast, allowing very fast growth in the incomes that biological humans gain from owning most of the capital in this new economy. Soon only a small fraction of that income could cover a low but starvation-averting UBI.

For example, a UBI set to x% of average income can be funded via a tax of roughly x% on all income over this UBI level. Since average US income per person is now $50K, a 10% version gives a UBI of $5K. While this might not let one live in an expensive city, a year ago I visited a 90-adult rural Virginia commune where this was actually their average income. Once freed from regulations, we might see more innovations like this in how to spend a UBI.
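
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python, using the round numbers above; the flat-tax framing is a simplification of the funding scheme, not a full proposal.

```python
# Back-of-the-envelope UBI arithmetic (illustrative simplification of the numbers above).
avg_income = 50_000        # average US income per person, as cited above
x = 0.10                   # UBI set at 10% of average income

ubi = x * avg_income       # $5,000 per person per year
# With a simple flat tax at rate t on all income, per-person budget balance requires
#   t * avg_income = ubi   =>   t = x
flat_tax_rate = ubi / avg_income

print(f"UBI: ${ubi:,.0f} per person per year")
print(f"Balancing flat tax rate: {flat_tax_rate:.0%} of all income")
```

Exempting income below the UBI level, as in the text, shrinks the tax base a bit and so pushes the required rate slightly above x; hence “roughly” x%.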

However, I do see one big problem. Most UBI proposals are funded out of local general tax revenue, while the income of an HLAI economy might be quite unevenly distributed around the globe. The smaller the political unit considering a UBI, the worse this problem gets. Better insurance would come from a UBI that is funded out of a diversified global investment portfolio. But that isn’t usually how governments fund things. What to do?

A solution that occurs to me is to push for a World Basic Income (WBI). That is, try to create and grow a coalition of nations that implement a common basic income level, supported by a shared set of assets and contributions. I’m not sure how to set up the details, but citizens in any of these nations should get the same untaxed basic income, even if they face differing taxes on incomes above this level. And this alliance of nations would commit somehow to sharing some pool of assets and revenue to pay for this common basic income, so that everyone could expect to continue to receive their WBI even after an uneven disruptive HLAI revolution.

Yes, richer member nations of this alliance could achieve less local poverty reduction, as the shared WBI level couldn’t be above what the poor member nations could afford. But a common basic income should make it easier to let citizens move within this set of nations. You’d have less reason to worry about poor folks moving to your nation to take advantage of your poverty programs. And the more that poverty reduction were implemented via the WBI, the bigger this advantage would be.

Yes, this seems a tall order, probably too tall. Probably nations won’t prepare, and will then respond to an HLAI transition slowly, and only with whatever resources they have at their disposal, which in some places will be too little. Which is why I recommend that individuals and smaller groups try to arrange their own assets, insurance, and sharing. Yes, it won’t be needed for a while, but if you wait until the signs of something big soon are clear, it might then be too late.

Star Trek As Fantasy

Frustrated that science fiction rarely makes economic sense, I just wrote a whole book trying to show how much consistent social detail one can offer, given key defining assumptions on a future scenario. Imagine my surprise then to learn that another book, Trekonomics, published exactly one day before mine, promises to make detailed economic sense out of the popular Star Trek shows. It seems endorsed by top economists Paul Krugman and Brad DeLong, and has lots of MSM praise. From the jacket:

Manu Saadia takes a deep dive into the show’s most radical and provocative aspect: its detailed and consistent economic wisdom. .. looks at the hard economics that underpin the series’ ideal society.

Now Saadia does admit the space stuff is “hogwash”:

There will not be faster-than-light interstellar travel or matter-anti-matter reactors. Star Trek will not come to pass as seen on TV. .. There is no economic rationale for interstellar exploration, manned or unmanned. .. Settling a minuscule outpost on a faraway world, sounds like complete idiocy. .. Interstellar exploration .. cannot happen until society is so wealthy that not a single person has to waste his or her time on base economic pursuits. .. For a long while, there is no future but on Earth, in the cities of Earth. (pp. 215-221)

He says Trek is instead a sermon promoting social democracy.

Lognormal Jobs

I often meet people who think that because computer tech is improving exponentially, its social impact must also be exponential. So as soon as we see any substantial social impact, watch out, because a tsunami is about to hit. But it is quite plausible to have exponential tech gains translate into only linear social impact. All we need is a lognormal distribution, as in this diagram:

[Figure: LogNormalJobs – jobs distributed lognormally over the computing power required to automate them]

Imagine that each kind of job that humans do requires a particular level of computing power in order for computers to replace humans on that job. And imagine that these job power levels are distributed lognormally.

In this case an exponential growth in computing power will translate into a linear rate at which computers displace humans on jobs. Of course jobs may clump along this log-computing-power axis, giving rise to bursts and lulls in the rate at which computers displace jobs. But over the long run we could see a relatively steady rate of job displacement even with exponential tech gains. Which I’d say is roughly what we do see.
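
As a sanity check, here is a minimal Python sketch of this point; the mean, spread, and growth rate below are made-up parameters for illustration, not estimates.

```python
# Illustration (hypothetical parameters): if log10 of the computing power needed to
# automate a job is normally distributed (i.e. the raw power level is lognormal),
# then exponential growth in available power automates a roughly steady share of
# jobs per decade through the bulk of the distribution, tapering off in the tails.
from statistics import NormalDist

job_difficulty = NormalDist(mu=25, sigma=10)   # log10(power needed per job), assumed
growth_per_decade = 5.0                        # orders of magnitude gained per decade, assumed

prev = 0.0
for decade in range(1, 9):
    power_log10 = decade * growth_per_decade   # log10 of available computing power
    done = job_difficulty.cdf(power_log10)     # share of jobs automatable so far
    print(f"decade {decade}: {done:5.1%} automatable (+{done - prev:.1%} this decade)")
    prev = done
```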

Added 3am: Many things are distributed lognormally.

The Labor-From-Factories Explosion

As I’ve discussed before, including in my book, the history of humanity so far can be roughly summarized as a sequence of three exponential growth modes: foragers with culture started a few million years ago, farming started about ten thousand years ago, and industry started a few hundred years ago. Doubling times got progressively shorter: a quarter million years, then a millennium, and now fifteen years. Each time, the transition lasted less than a previous doubling time, and roughly similar numbers of humans have lived during each era.

Before humans, animal brains grew exponentially, but even more slowly, doubling about every thirty million years, starting about a half billion years ago. And before that, genomes seem to have doubled exponentially about every half billion years, starting about ten billion years ago.

What if the number of doublings in the current mode, and in the mode that follows it, is comparable to the number of doublings in the last few modes? What if the sharpness of the next transition is comparable to the sharpness of the last few transitions, and the factor by which the doubling time changes next time is comparable to the last few factors? Given these assumptions, the next transition will happen sometime in roughly the next century. Within a period of five years, the economy will be doubling every month or faster. And that new mode will only last a year or so before something else changes.
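
A rough version of that extrapolation, using the round doubling times quoted above (a sketch, not a forecast):

```python
# Extrapolating from the doubling times quoted above (rough round numbers).
doubling_years = {"forager": 250_000, "farming": 1_000, "industry": 15}

dts = list(doubling_years.values())
factors = [dts[i] / dts[i + 1] for i in range(len(dts) - 1)]
print("speed-up factor at each past transition:", [round(f) for f in factors])  # ~250, ~67

# If the next speed-up factor is comparable to the last few, the next mode's
# doubling time lands somewhere around weeks to a couple of months.
for f in factors:
    months = 12 * dts[-1] / f
    print(f"next doubling time if the factor is ~{round(f)}: {months:.1f} months")
```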

To summarize, usually in history we see relatively steady exponential growth. But five times so far, steady growth has been disturbed by a rapid transition to a much faster rate of growth. It isn’t crazy to think that this might happen again.

Plausibly, new faster exponential modes appear when a feedback loop that was previously limited and blocked becomes unlocked and strong. And so one way to think about what might cause the next faster mode after ours is to look for plausible feedback loops. However, if there are thousands of possible factors that matter for growth and progress, then there are literally millions of possible feedback loops.

For example, denser cities should innovate more, and more innovation can find better ways to make buildings taller, and thus increase city density. More and better tutorial videos make it easier to learn varied skills, and some of those skills help to make more and better tutorial videos. We can go on all day making up stories like these.

But as we have only ever seen maybe five of these transitions in all of history, powerful feedback loops whose unlocking causes a huge growth rate jump must be extremely rare. The vast majority of feedback loops do not create such a huge jump when unlocked. So just because you can imagine a currently locked feedback loop does not make unlocking it likely to cause the next great change.

Many people lately have fixated on one particular possible feedback loop: an “intelligence explosion.” The more intelligent a creature is, the more it is able to change creatures like itself to become more intelligent. But if you mean something more specific than “mental goodness” by “intelligence”, then this remains only one of thousands of possibilities. So you need strong additional arguments to see this feedback loop as more likely than all the others. And the mere fact that you can imagine this feedback being positive is not remotely enough.

It turns out that we already know of an upcoming transition of a magnitude similar to the previous transitions, scheduled to arrive roughly when prior trends led us to expect a new transition. This explosion is due to labor-from-factories.

Today we can grow physical capital very fast in factories, usually doubling capital on a scale ranging from a few weeks to a few months, but we grow human workers much more slowly. Since capital isn’t useful without more workers, we are forced to grow today mainly via innovation. But if in the future we find a way to make substitutes for almost all human workers in factories, the economy can grow much faster. This is called an AK model, and standard growth theory says it is plausible that this could let the economy double every month or so.
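
For readers who want the formal version, here is the textbook AK sketch, in my own notation; the illustrative rate at the end is an assumption chosen to match the “month or so” figure, not a claim from growth theory.

```latex
% Textbook AK model: output Y is linear in capital K (with productivity A),
% savings rate s, depreciation rate \delta.
\[
  Y = A K, \qquad
  \dot{K} = s Y - \delta K = (sA - \delta)\,K
  \;\Longrightarrow\;
  K(t) = K(0)\, e^{(sA-\delta)t}.
\]
```

Output then doubles every ln 2 / (sA - delta) years; if capital accumulation were fast enough that sA - delta came to roughly 8 per year, the doubling time would be about a month.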

So if it is plausible that artificial intelligence as capable as humans will appear in the next century or so, then we already know what will cause the next great jump to a faster growth mode. Unless of course some other rare powerful feedback loop is unlocked before then. But if an intelligence explosion isn’t possible until you have machines at least as smart as humans, then that scenario won’t happen until after labor-from-factories. And even then it is far from obvious that feedback can cause one of the few rare big growth rate jumps.

Missing Engagement

On the surface, there seems to have been a big debate over the last few years on how fast automation will displace jobs over the next decade or so. Some have claimed very rapid displacement, much faster than we’ve seen in recent decades (or centuries). Others have been skeptical (like me here, here, here, and here).

On October 13, David Mindell, Professor at MIT of both Aeronautics and Astronautics, and also History of Engineering and Manufacturing, weighed in on this debate, publishing Our Robots, Ourselves: Robotics and the Myths of Autonomy:

If robotics in extreme environments are any guide, Mindell says, self-driving cars should not be fully self-driving. That idea, he notes, is belied by decades of examples involving spacecraft, underwater exploration, air travel, and more. In each of those spheres, fully automated vehicles have frequently been promised, yet the most state-of-the-art products still have a driver or pilot somewhere in the network. This is one reason Mindell thinks cars are not on the road to complete automation. ..

“There’s an idea that progress in robotics leads to full autonomy. That may be a valuable idea to guide research … but when automated and autonomous systems get into the real world, that’s not the direction they head. We need to rethink the notion of progress, not as progress toward full autonomy, but as progress toward trusted, transparent, reliable, safe autonomy that is fully interactive: The car does what I want it to do, and only when I want it to do it.” (more)

In his book, Mindell expertly supports his position with a detailed review of the history of automation in planes, spacecraft and submarines. You might think that Mindell’s prestige, expertise, and detailed book on past automation rates and patterns would earn him a place in this debate on future rates of automation progress. Many of those who blurbed the book clearly think so:

“Mindell’s ingenious and profoundly original book will enlighten those who prophesy that robots will soon make us redundant.”—David Autor

“My thanks to the author for bringing scholarship and sanity to a debate which has run off into a magic la-la land in the popular press.”—Rodney Brooks

But looking over dozens of reviews of Mindell’s book in the 75 days since it was published, I find no thoughtful response from the other side! None. No one who expects rapid automation progress has bothered to even outline why they find Mindell’s arguments unpersuasive.

Perhaps this shows that people on the other side know Mindell’s arguments to be solid, making any response unpersuasive, and so they’d rather ignore him. Maybe they just don’t think the past is any guide to the future, at least in automation, making Mindell’s discussion of the past irrelevant to the debate. I’ve known people who think this way.

But perhaps a more plausible interpretation is that on subjects like this in our intellectual world, usually there just is no “debate”; there are just different sides who separately market their points of view. Just as in ordinary marketing, where firms usually pitch their products without mentioning competing products, intellectuals marketing points of view also usually ignore competing points of view. Instead of pointing out contrary arguments and rebutting them, intellectuals usually prefer to ignore contrary arguments.

This seems a sad state of affairs with respect to intellectual progress. But of course such progress is a public good, where individual contributions must trade a personal cost against a collective benefit, encouraging each of us to free-ride on the efforts of others. We might create intellectual institutions that better encourage more engagement with and response to contrary arguments, but unless these are global institutions others may prefer to free-ride and not contribute to local institutions.

You might think that academic norms of discourse are such global institutions encouraging engagement. And academics do give much lip service to that idea. But in fact it is mostly empty talk; academics don’t actually encourage much engagement and response beyond the narrow scope of prestigious folks in the same academic discipline.

The Future of Language

More from Henrich’s The Secret Of Our Success:

Linguists and linguistic anthropologists .. have often assumed that all languages are more or less equal, along all the dimensions that we might care about – equally learnable, efficient, and expressive. .. Recently .. cracks in these intellectual barricades have begun to multiply. .. Like [other kinds of cultural] toolkits, the size and interconnectedness of populations favors culturally evolving and sustaining larger vocabularies, more phonemes, shorter words, and certain kinds of more complex grammatical tools, like subordinating conjunctions. (p. 233, 259)

The most ancient languages we know of are visibly impoverished compared to modern languages today. It just takes longer to say similar complex things in those languages. Assuming that the size and interconnectedness of populations speaking the main languages continues to increase into the future (as they do in my em scenario), we can make some obvious predictions about future languages.

Future languages should make more distinctions such as between colors, and have larger vocabularies, more phonemes, and shorter words. They should also have more grammatical tools such as adjectives, tenses, prepositions, pronouns, and subordinating conjunctions. Technology to assist us in more clearly hearing the words that others speak should also push to increase the number of phonemes, and thus shorten future words.

For obvious reasons, science fiction almost always fails to show these features of future language.

If you search for “future of language” you’ll find many articles noting that the world is losing many unpopular languages, and speculating on which of today’s languages will be the most popular later. And there is this creative attempt to guess specific changes. But oddly I can’t find any articles that discuss the basic trends I mention above.

Tax Coastal Cities?

(Nobel-winner) Thomas Schelling just gave a talk here at GMU Econ on “Two Major Infrastructure Worldwide Projects to Prepare for Global Warming.” He said most work on global warming focuses on how to prevent it, and that there’s been a bit of a taboo on looking at how to mitigate harm if it happens.

He defied that taboo, and talked about two harms from global warming: 1) crop drought due to snowpacks melting earlier in the annual cycle, and 2) sea levels rising if the Greenland or Antarctic ice sheets suddenly slip into the sea. For both problems Schelling wants central governments to start planning possible large engineering projects.

On overly-early farm-water, he wants new canals and reservoirs dug to hold water until farmers want it and then deliver that water to them. For rising sea levels he wants dikes etc. to keep coastal cities dry. Such city protection systems could be at the scale of the harbor of a single city, or at the scale of blocking the Strait of Gibraltar to protect the entire Mediterranean Sea.

On protecting coastal cities, John Nye pointed out that if governments are willing to do anything now they should consider taxing coastal cities to collect revenue to pay for future mitigation. This has the further big benefit of discouraging risky coastal development. And if governments aren’t willing to do this obvious easy thing now, what hope is there of them doing much useful later?

Most of the coastal city structures that would be hurt via rising sea levels probably haven’t been built yet. So trying to get governments to start planning to protect coastal cities runs the risk of encouraging too much coastal development, which then becomes insufficiently protected or protected at excess expense.

The fact that central governments are not coordinating much to reduce global warming suggests that they will also fail to coordinate at large scales to mitigate harm from warming. So a simpler safer solution might be to have central governments try to commit to not protect coastal cities in advance. Don’t even start central government initiatives to coordinate and plan for coastal protection, and stop current central government coastal protection programs, such as subsidized hurricane insurance.

If coastal cities want to tax themselves to pay for their own local mitigation, fine, but to the extent we expect that more central governments won’t be able to resist helping later, have them tax low-lying coastal development in advance to pay for that. Let everyone know it’s time to start focusing new development away from low coasts.

The problem of building reservoirs for farmers seems more easily dealt with via private property in water. If private parties can pay to dig reservoirs to sell water to private farmers at market prices, it isn’t clear why much central government coordination is required.

Added: Seems Glenn Reynolds proposed to tax coastal development a month ago. HT Robert Koslover in the first comment below.

How Plastic Are Values?

I thought I understood cultural evolution. But in his new book, The Secret Of Our Success: How Culture Is Driving Human Evolution, Domesticating Our Species, and Making Us Smarter, Joseph Henrich schooled me. I felt like I learned more from his book than from the last dozen books I’ve read. For example, on the cultural plasticity of pleasure and pain:

Chili peppers were the primary spice of New World cuisines prior to the arrival of Europeans and are now routinely consumed by about a quarter of all adults globally. Chili peppers have evolved chemical defenses, based on capsaicin, that make them aversive to mammals and rodents but desirable to birds. In mammals, capsaicin directly activates a pain channel (TrpV1), which creates a burning sensation in response to various specific stimuli, including acid, high temperatures, and allyl isothiocyanate (which is found in mustard and wasabi). These chemical weapons aid chili pepper plants .. because birds provide a better dispersal system for the plants’ seeds. .. People come to enjoy the experience of eating chili peppers mostly by reinterpreting the pain signals caused by capsaicin as pleasure or excitement. .. Children acquire this preference gradually, without being pressured or compelled. They want to learn to like chili peppers, to be like those they admire. .. Culture can overpower our innate mammalian aversions when necessary and without us knowing it. ..

Runners like me enjoy running, but normal people think running is painful and something to be avoided. Similarly weight lifters love that muscle soreness they get after a good workout. .. Experimental work shows that believing a pain-inducing treatment “helps” one’s muscles activates our opioid and/or our cannabinoid systems, which suppress the pain and increase our pain tolerance. ..

Those who saw the tough model [who reported lower pain ratings] showed (1) .. bodies stopped reacting to the threat, (2) lower and more stable heart rates, and (3) lower stress ratings. Cultural learning from the tough model changed their physiological reactions to electric shocks.

Henrich’s basic story is that from a very early age we look to see who around us other people are looking at, and we then try to copy everything about those high prestige folks, including their values and preferences. In his words:

Humans are adaptive cultural learners who acquire ideas, beliefs, values, social norms, motivations, and worldview from others in their communities. To focus our cultural learning, we use cues of prestige, success, sex, dialect, and ethnicity, among others, and especially attend to particular domains, such as those involving food, sex, danger, and norm violations. .. Humans are status seekers and are strongly influenced by prestige. But what’s highly flexible is which behaviors or actions lead to high prestige. .. The social norms we acquire often come with internalized motivations and ways of viewing the world (guiding our attention and memory), as well as with standards for judging and punishing others. People’s preferences and motivations are not fixed.

The examples above show cultural influence can greatly change the intensity of pain and pleasure, and even flip pain into pleasure, and vice versa. Though the book doesn’t mention it, we see similar effects regarding sex – some people come to see pain as pleasure, and others see pleasure as pain.

All of this suggests that human preferences are surprisingly plastic. Not completely plastic mind you, but still, we have a big capacity to change what we see as pleasure or pain, as desirable or undesirable. Yes we usually can’t just individually will ourselves to love what we hated a few hours ago. But the net effect of all our experience over a lifetime is huge.

It seems that this should make us worry less about whether future folks will be happy. Even if it seems that future folks will have to do or experience things that we today would find unpleasant, future culture could change people so that they find these new things pleasant instead. Yes, if change happens very fast it might take culture time to adapt, and there could be a lot of unhappy people during the transition. And yes, there are probably limits beyond which culture can’t make us like things. But within a wide range of actions and experiences, future folks can learn to like whatever it is that their world requires.

Science Fiction Is Fantasy

Why do people like fantasy novels? One obvious explanation is that “magic” relaxes the usual constraints on the stories one can tell. Story-tellers can either use this freedom to explore a wider range of possible worlds, and so feed reader hungers for variety and strangeness, or they can focus repeatedly on particular story settings that seem ideal places for telling engaging stories, settings that are just not feasible without magic.

It is widely acknowledged that science fiction is by far the closest literary genre to fantasy. One plausible explanation for this is that future technology serves the same function in science fiction that magic serves in fantasy: it can be an “anything goes” sauce to escape the usual story constraints. So future tech can either let story tellers explore a wider space of strangeness, or return repeatedly to settings that feel particularly attractive, and are infeasible without future tech.

Of course it might be that some readers actually care about the real future, and want to hear stories set in that real future. But the overwhelming levels of implausible unrealism I find in almost all science fiction (and fantasy) suggest that this is a negligible fraction of readers, a fraction writers rarely specialize in targeting. Oh, writers will try to add a gloss of realism to the extent that it doesn’t cost them much in terms of other key story criteria. But when there are conflicts, other criteria win.

My forthcoming book, The Age of Em, tries to describe a realistic future setting in great detail. I expect some of those who use science fiction in order to consume strange variety will enjoy the strangeness of my scenario, at least if they can get over the fact that it doesn’t come packaged with plot and characters. But they are unlikely to want to return to that setting repeatedly, as it just can’t compete with places designed to be especially compelling for stories. My setting is designed to be realistic, and I’ll just have to see how many readers I can attract to that unusual feature.
