Better Babblers

You can think of knowing how to write as knowing how to correlate words. Given no words, what first word should you write? Then given one word, what second word best correlates with that? Then given two words, what third word best fits with those two? And so on. Thus your knowledge of how to write can be broken into what you know at these different correlation orders: one word, two words, three words, and so on. Each time you pick a new word, you can combine knowledge at these different orders by weighing all their different recommendations for your next word.

This correlation order approach can also be applied at different scales. For example, given some classification of your first sentence, what kind of second sentence should follow? Given a classification of your first chapter, what kind of second chapter should follow? Many other kinds of knowledge can be similarly broken down into correlation orders, at different scales. We can do this for music, paintings, interior decoration, computer programs, math theorems, and so on.

Given a huge database, such as of writings, it is easy to get good at very low orders; you can just use the correlation frequencies found in your dataset. After that, simple statistical models applied to this database can give you good estimates for correlations to use at somewhat higher orders. And if you have enough data (roughly ten million examples per category I’m told) then recently popular machine learning techniques can improve your estimates at a next set of higher orders.
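To make the low-order idea concrete, here is a minimal n-gram sketch in Python: it counts how often each next word follows a short context in a corpus, then generates text by sampling from those frequencies. The toy corpus and the order-2 (bigram) choice are my own illustrative assumptions, not anything from the post.

```python
from collections import Counter, defaultdict
import random

def train_ngram(words, order=2):
    """Count how often each next word follows each (order-1)-word context."""
    counts = defaultdict(Counter)
    for i in range(len(words) - order + 1):
        context = tuple(words[i:i + order - 1])
        counts[context][words[i + order - 1]] += 1
    return counts

def babble(counts, start, n=10, seed=0):
    """Generate text by repeatedly sampling a next word in proportion
    to how often it followed the current context in the corpus."""
    rng = random.Random(seed)
    out = list(start)
    context_len = len(start)
    for _ in range(n):
        context = tuple(out[-context_len:])
        next_words = counts.get(context)
        if not next_words:
            break  # context never seen; a real system would back off to lower orders
        words, weights = zip(*next_words.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ate the rat".split()
model = train_ngram(corpus, order=2)
print(babble(model, ("the",), n=5))
```

Getting good at higher orders means conditioning on longer contexts, which is exactly where the data requirements explode: each extra word of context multiplies the number of contexts whose frequencies you would need to estimate.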

There are some cases where this is enough; either you can get enormous amounts of data, or learning low order correlations well is enough to solve your problem. These cases include many games with well defined rules, many physical tasks where exact simulations are feasible, and some kinds of language translation. But there are still many other cases where this is far from enough to achieve human level proficiency. In these cases an important part of what we know can be described as very high order correlations produced by “deep” knowledge structures that aren’t well reduced to low order correlations.

After eighteen years of being a professor, I’ve graded many student essays. And while I usually try to teach a deep structure of concepts, what the median student actually learns seems to mostly be a set of low order correlations. They know what words to use, which words tend to go together, which combinations tend to have positive associations, and so on. But if you ask an exam question where the deep structure answer differs from the answer you’d guess by looking at low order correlations, most students give the wrong answer.

Simple correlations also seem sufficient to capture most polite conversation, such as “the weather is nice,” “how is your mother’s illness,” and “damn that other political party.” Simple correlations are also most of what I see in inspirational TED talks, and when public intellectuals and talk show guests pontificate on topics they really don’t understand, such as quantum mechanics, consciousness, postmodernism, or the need for ever more regulation everywhere. After all, media entertainers don’t need to understand deep structures any better than do their audiences.

Let me call styles of talking (or music, etc.) that rely mostly on low order correlations “babbling”. Babbling isn’t meaningless, but to ignorant audiences it often appears to be based on a deeper understanding than is actually the case. When done well, babbling can be entertaining, comforting, titillating, or exciting. It just isn’t usually a good place to learn deep insight.

As we slowly get better at statistics and machine learning, our machines will slowly get better at babbling. The famous Eliza chatbot went surprisingly far using very low order correlations, and today chatbots best fool us into thinking they are human when they stick to babbling style conversations. So what does a world of better babblers look like?

First, machines will better mimic low quality student essays, so schools will have to try harder to keep such students from using artificial babblers.

Second, the better machines get at babbling, the more humans will try to distinguish themselves from machines via non-babbling conversational styles. So expect less use of simple easy-to-understand-and-predict speech in casual polite conversation, inspirational speeches, and public intellectual talk.

One option is to put a higher premium on talk that actually makes deep sense, in terms of deep concepts that experts understand. That would be nice for those of us who have always emphasized such things. But alas there are other options.

A second option is to put a higher premium on developing very distinctive styles of talking. This would be like how typical popular songs from two centuries ago could be sung and enjoyed by most anyone, compared to how popular music today is matched in great detail to the particular features of particular artists. Imagine most all future speakers having as distinct a personal talking style.

A third option is more indirect, ironic, and insider style talk, such as we tend to see on Twitter today. People using words and phrases and cultural references in ways that only folks very near in cultural space can clearly accept as within recent local fashion. Artificial babblers might not have enough data to track changing fashions in such narrow groups.

Bottom line: the more kinds of conversation styles that simple machines can manage, the more humans will try to avoid talking in those styles, at least when not talking to machines.


The Robot Protocol

Talking with a professor of robotics, I noticed a nice approachable question at the intersection of social science, computer science, and futurism.

Someday robots will mix with humans in public, walking our streets, parks, hospitals, and stores, driving our streets, swimming our waterways, and perhaps flying our skies. Such public robots may vary enormously in their mental and physical capacities, but if they are to mix smoothly with humans in public, then we will probably expect them to maintain a minimal set of common social capacities, such as responding sensibly to “Who are you?” and “Get out of my way.” And the rest of us would have a new modified set of social norms for dealing with public robots via these capacities.

Together these common robot capacities and matching human social norms would become a “robot protocol.” Once ordinary people and robots makers have adapted to it, this protocol would be a standard persisting across space and time, and relatively hard to change. A standard that diverse robots could also use when interacting with each other in public.

Because it would be a wide and persistent standard, the robot protocol can’t be matched in much detail to the specific local costs of implementing various robot capacities. Instead, it could at best be matched to broad overall trends in such costs. To allow robots to walk among us, we’d try to be forgiving and only expect robots to have capacities that we especially value, and that are relatively cheap to implement in a wide range of contexts.

(Of course this general robot protocol isn’t the only thing that would coordinate robot and human interactions. There’d also be many other more context-dependent protocols.)

One simple option would be to expect each public robot to be “tethered” via fast robust communication to a person on call who can rapidly respond to all queries that the robot can’t handle itself. But it isn’t clear how sufficient this approach will be for many possible queries.

Robots would probably be expected to find and comply with any publicly posted rules for interacting in particular spaces, such as the rules we often post for humans on signs. Perhaps we will simplify such rules for robots. In addition, here are some things that people sometimes say to each other in public where we might perhaps want robots to have analogous capacities:

Who are you? What are you doing here? Why are you following me? Please don’t record me. I’m serving you with this legal warrant. Stop, this is the police! You are not allowed to be here; leave. Non-authorized personnel must evacuate this area immediately. Get out of my way. You are hurting me. Why are you calling attention to me? Can you help me? Can you take our picture? Where is the nearest bathroom? Where is a nearby recharging station? (I may add more here.)
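One way to picture the minimal protocol is as a dispatch table from standard public queries to required handlers, with a human “tether” as the fallback for anything unhandled. This Python sketch is purely illustrative; the class, the query vocabulary, and the contact address are all invented for the example, not part of any real standard.

```python
class PublicRobot:
    """Toy model of the minimal social capacities a robot protocol might require."""

    def __init__(self, robot_id, operator_contact):
        self.robot_id = robot_id
        self.operator_contact = operator_contact
        self.moving = True

    def handle(self, query):
        # The protocol would standardize the query vocabulary,
        # not any particular implementation of the handlers.
        handlers = {
            "who are you?": self.identify,
            "get out of my way.": self.yield_way,
            "can you help me?": self.escalate_to_tether,
        }
        handler = handlers.get(query.strip().lower())
        if handler is None:
            # Unrecognized queries fall back to the human on call.
            return self.escalate_to_tether()
        return handler()

    def identify(self):
        return f"I am robot {self.robot_id}, operated by {self.operator_contact}."

    def yield_way(self):
        self.moving = False
        return "Yielding right of way."

    def escalate_to_tether(self):
        return f"Connecting you to my operator at {self.operator_contact}."

robot = PublicRobot("R-17", "ops@example.com")
print(robot.handle("Who are you?"))
```

The design choice worth noticing is that only the query list and the fallback rule need to be standardized; each robot maker can implement the handlers however its hardware allows, which is what lets a wide and persistent standard tolerate robots of very different capacities.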

It seems feasible to start now to think about the design of such a robot protocol. Of course in the end a robot protocol might be just a social convention without the force of law, and it may result more from decentralized evolution than centralized design. Even so, we may now know enough about human social preferences and the broad outlines of the costs of robot capacities to start to usefully think about this problem.


The Great Cycle Rule

History contains a lot of data, but when it comes to the largest scale patterns, our data is very limited. Even so, I think we’d be crazy not to notice whatever patterns we can find at those largest scales, and ponder them. Yes we can’t be very sure of them, but we surely should not ignore them.

I’ve said that history can be summarized as a sequence of roughly exponential growth modes. The three most recent modes were the growth of human foragers, then of farmers, then of industry. Roughly, foragers doubled every quarter million years, farmers every thousand years, and industry every fifteen years. (Before humans, animal brains doubled roughly every 35 million years.)

I’ve previously noted that this sequence shows some striking patterns. Each transition between modes took much less than a previous doubling time. Modes have gone through a similar number of doublings before the next mode appeared, and the factors by which growth rates increased have also been similar.  In addition, the group size that typified each mode was roughly the square of that of the previous mode, from thirty for foragers to a thousand for farmers to a million for industry.

In this post I report a new pattern, about cycles. Some cycles, such as days, months, and years, are common to most animals. Other cycles, such as heartbeats lasting about a second and lifetimes taking threescore and ten, are common to humans. But there are other cycles that are distinctive of each growth mode, and are most often mentioned when discussing the history of that mode.

For example, the 100K year cycle of ice ages seems the most discussed cycle regarding forager history. And the two to three century cycle of empires, such as documented by Turchin, seems most discussed regarding the history of farmers. And during our industry era, it seems we most discuss the roughly five year business cycle.

The new pattern I recently noticed is that each of these cycles lasts roughly a quarter to a third of its mode’s doubling time. So a mode typically grows 20-30% during one period of its main cycle. I have no idea why, but it still seems a pattern worth noting, and pondering.

If a new mode were to follow these patterns, it would appear in the next century, after a transition of ten years or less, and have a doubling time of about a month, a main cycle of about a week, and a typical group size of a trillion. Yes, these are only very rough guesses. But they still seem worth pondering.
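The rough guesses in the last paragraph follow from simple arithmetic on the patterns listed above; here is a sketch, using only the approximate figures quoted in this post (the geometric mean of the two speedup factors is my own modeling choice for “a similar factor”).

```python
# Approximate doubling times, in years, for the last three growth modes.
doubling_years = {"foragers": 250_000, "farmers": 1_000, "industry": 15}

# Growth-rate speedup factors between successive modes: 250x and ~67x.
speedups = [250_000 / 1_000, 1_000 / 15]
typical_speedup = (speedups[0] * speedups[1]) ** 0.5  # geometric mean, ~129x

# If a next mode sped up by a similar factor, its doubling time is about a month.
next_doubling_years = doubling_years["industry"] / typical_speedup
print(f"next doubling time ~ {next_doubling_years * 12:.1f} months")

# A main cycle lasts a quarter to a third of a doubling time, so a mode
# grows by 2^(1/4) to 2^(1/3) per cycle, i.e. roughly 19-26%.
print(f"growth per cycle: {2**0.25 - 1:.0%} to {2**(1/3) - 1:.0%}")

# Group sizes roughly square each mode: 30 -> ~1e3 -> ~1e6 -> ~1e12.
print(f"next group size ~ {1_000_000 ** 2:.0e}")
```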


I’m In Europe This Week

Catch me at one of six talks I’ll give in Europe this week on Age of Em:


Big Software Firm Bleg

I haven’t yet posted much on AI as Software. But now I’ll say more, as I want to ask a question.

Someday ems may replace humans in most jobs, and my first book talks about how that might change many things. But whether or not ems are the first kind of software to replace humans wholesale in jobs, eventually non-em software may plausibly do this. Such software would replace ems if ems came first, but if not then such software would directly replace humans.

Many people suggest, implicitly or explicitly, that non-em software that takes over most jobs will differ in big ways from the software that we’ve seen over the last seventy years. But they are rarely clear on what exact differences they foresee. So the plan of my project is to just assume our past software experience is a good guide to future software. That is, to predict the future, one may 1) assume current distributions of software features will continue, or 2) project past feature trends into future changes, or 3) combine past software feature correlations with other ways we expect the future to differ.

This effort may encourage others to better clarify how they think future software will differ, and help us to estimate the consequences of such assumptions. It may also help us to more directly understand a software-dominated future, if there are many ways that future software won’t greatly change.

Today, each industry makes a kind of stuff (product or service) we want, or a kind of stuff that helps other industries to make stuff. But while such industries are often dominated by a small number of firms, the economy as a whole is not so dominated. This is mainly because there are so many different industries, and firms suffer when they try to participate in too many industries. Will this lack of concentration continue into a software dominated future?

Today each industry gets a lot of help from humans, and each industry helps to train its humans to better help that industry. In addition, a few special industries, such as schooling and parenting, change humans in more general ways, to help better in a wide range of industries. In a software dominated future, humans are replaced by software, and the schooling and parenting industries are replaced by a general software industry. Industry-independent development of software would happen in the general software industry, while specific adaptations for particular industries would happen within those industries.

If so, the new degree of producer concentration depends on two key factors: what fraction of software development is general as opposed to industry-specific, and how concentrated is this general software industry. Regarding this second factor, it is noteworthy that we now see some pretty big players in the software industry, such as Google, Apple, and Microsoft. And so a key question is the source of this concentration. That is, what exactly are the key advantages of big firms in today’s software market?

There are many possibilities, including patent pools and network effects among customers of key products. Another possibility, however, is one where I expect many of my readers to have relevant personal experience: scale economies in software production. Hence this bleg – a blog post asking a question.

If you are an experienced software professional who has worked both at a big software firm and also in other places, my key question for you is: by how much was your productive efficiency as a software developer increased (or decreased) due to working at a big software firm? That is, how much more could you get done there that wasn’t attributable to having a bigger budget to do more, or to paying more for better people, tools, or resources. Instead, I’m looking for the net increase (or decrease) in your output due to software tools, resources, security, oversight, rules, or collaborators that are more feasible and hence more common at larger firms. Ideally your answer will be in the form of a percentage, such as “I seem to be 10% more productive working at a big software firm.”

Added 3:45p: I meant “productivity” in the economic sense of the inputs required to produce a given output, holding constant the specific kind of output produced. So this kind of productivity should ignore the number of users of the software, and the revenue gained per user. But if big vs small firms tend to make different kinds of software, which have different costs to make, those differences should be taken into account. For example, one should correct for needing more man-hours to add a line of code in a larger system, or in a more secure or reliable system.


When Does Evidence Win?

Consider a random area of intellectual inquiry, and a random intellectual who enters this area. When this person first arrives, a few different points of view seem worthy of consideration in this area. This person then becomes expert enough to favor one of these views. Then over the following years and decades the intellectual world comes to more strongly favor one of these views, relative to the others. My key question is: in what situations do such earlier arrivals, on average, tend to approve of this newly favored position?

Now there will be many cases where favoring a point helps people to be seen as an intellectual of a certain standing. For example, jumping on an intellectual fashion could help one to better publish, and then get tenure. So if we look at tenured professors, we might well see that they tended to favor new fashions. To exclude this effect, I want to apply whatever standard is used to pick intellectuals before they choose their view on this area.

There will also be an effect whereby intellectuals move their work to focus on new areas even if they don’t actually think they are favored by the weight of evidence. (By “evidence” here I also mean to include relevant intellectual arguments.) So I don’t want to rely on the areas where people work to judge which areas they favor. I instead need something more like a survey that directly asks intellectuals which views they honestly think are favored by the weight of evidence. And I need this survey to be private enough for respondents to not fear retribution or disapproval for expressed views. (And I also want them to be intellectually honest in this situation.)

Once we are focused on people who were already intellectuals of some standing when they choose their views in an area, and on their answers to a private enough survey, I want to further distinguish between areas where relevant strong and clear evidence did or did not arrive. Strong evidence favors one of the views substantially, and clear evidence can be judged and understood by intellectuals at the margins of the field, such as those in neighboring fields or with less intellectual standing. These can include students, reporters, grant givers, and referees.

In my personal observation, when strong and clear evidence arrives, the weight of opinion does tend to move toward the views favored by this evidence. And early arrivals to the field also tend to approve. Yes many such intellectuals will continue to favor their initial views because the rise of other views tends to cut the perceived value of their contributions. But averaging over people with different views, on net opinion moves to favor the view that evidence favors.

However, the effectiveness of our intellectual world depends greatly on what happens in the other case, where relevant evidence is not clear and strong. Instead, evidence is weak, so that one must weigh many small pieces of evidence, and evidence is complex, requiring much local expertise to judge and understand. If even in this case early arrivals to a field tend to approve of new favored opinions, that (weakly) suggests that opinion is in fact moved by the information embodied in this evidence, even when it is weak and complex. But if not, that fact (weakly) suggests that opinion moves are mostly due to many other random factors, such as new political coalitions within related fields.

While I’ve outlined how one might do such a survey, I have not actually done it. Even so, over the years I have formed opinions on areas where my opinions did not much influence my standing as an intellectual, and where strong and clear evidence has not yet arrived. Unfortunately, in those areas I have not seen much of a correlation between the views I see as favored on net by weak and complex evidence, and the views that have since become more popular. Sometimes fashion favors my views, and sometimes not.

In fact, most who choose newly fashionable views seem unaware of the contrary arguments against those views and for other views. Advocates for new views usually don’t mention them, and few potential converts ask for them. Instead what matters most is how plausible the evidence offered by a view’s advocates seems to those who know little about the area. I see far more advertising than debate.

This suggests that most intellectual progress should be attributed to the arrival of strong and clear evidence. Other changes in intellectual opinion are plausibly due to a random walk in the space of other random factors. As a result, I have prioritized my search for strong and clear evidence on interesting questions. And I’m much less interested than I once was in weighing the many weak and complex pieces of evidence in other areas. Even if I can trust myself to judge such evidence honestly, I have little faith in my ability to persuade the world to agree.

Yes if you weigh such weak and complex evidence, you might come to a conclusion, argue for it, and find a world that increasingly agrees with you. And you might then let yourself believe that you are in a part of the intellectual world with real and useful intellectual progress, progress to which you have contributed. Which would feel nice. But you should consider the possibility that this progress is illusory. Maybe for real progress, you need to instead chip away at hard problems, via strong and clear evidence.


Cowen On Complacency

A week ago I summarized and critiqued five books wherein Peter Turchin tries to document and explain two key historical cycles: a several century cycle of empires rising and falling, and a fifty year alternating-generations cycle of instability during empire low points. In his latest book, Turchin tentatively tries to apply his theories to predict the U.S. near future.

In his new book The Complacent Class, Tyler Cowen also takes a bigger-than-usual historical perspective, invokes cycles, and predicts the U.S. near future. But instead of applying a theory abstracted from thousands of years of data, Cowen mainly just details many particular trends in the U.S. over the last half century. David Brooks summarizes:

Cowen shows that in sphere after sphere, Americans have become less adventurous and more static.

The book page summarizes:

Our willingness to move, take risks, and adapt to change have produced a dynamic economy. .. [But] Americans today .. are working harder than ever to avoid change. We’re moving residences less, marrying people more like ourselves and choosing our music and our mates based on algorithms. .. This cannot go on forever. We are postponing change,.. but ultimately this will make change, when it comes, harder. .. eventually lead to a major fiscal and budgetary crisis.

In each particular area, Cowen documents specific trends, and he often offers specific local theories that could have led one to expect such trends. For example, he says fewer geographic moves are predicted from fewer job moves, and fewer job moves are predicted by workers being older. But when it comes to the question of why all these particular trends with their particular causes happen to create a consistent overall trend toward complacency, Cowen seems to me coy. Let me discuss three passages where I find that he at least touches on general accounts.


The Elephant in the Brain

One of the most frustrating things about writing physical books is the long time delays. It has been 17 months since I mentioned my upcoming book here, and now, 8.5 months after we submitted the full book for review, & over 4 months after 7 out of 7 referees said “great book, as it is”, I can finally announce that The Elephant in the Brain: Hidden Motives in Everyday Life, coauthored with Kevin Simler, will officially be published January 1, 2018. Sigh. See summary & detailed outline at the book’s website.

A related sad fact is that the usual book publicity equilibrium adds to intellectual inequality. Since most readers want to read books about which they’ve heard much publicity lately from multiple sources, publishers try to concentrate publicity into a narrow time period around the official publication date. Which makes sense.

But to create that burst of publicity, one must circulate the book well in advance privately among “thought leaders”, who might blurb or review it, invite the authors to talk on it, or recommend it to others who might do these things. So people who plausibly fit these descriptions get to read such books long before others. This lets early readers seem to be wise judges of future popular talk directions. Not because they actually have better judgement, but because they get inside info.

Alas, I’m stuck in this same equilibrium. I have a full copy of my final book, except for minor copy-editing changes, and I can share it privately with possible publicity helpers. And when the cost of sending an email is small relative to possible gains, a small chance may be enough. I’ll also give in to some requests based on friendship or prior help given me (as on my last book), especially when combined with promises to buy the book when it comes out.

But just as grading is the worst part of teaching, I hate being put in the role of bouncer, deciding who is cool enough to be let into my book club, or who has enough favors to trade. At least when teaching I’m expert in whatever topic I’m grading. But here I’m much less expert on deciding who can help book publicity. I’d really prefer the intellectual world to be more of an open competition without favoritism for those with inside connections. But here I am, forced to play favorites.

These are a few of the prices one pays today to publish books. But still, books remain an unparalleled way to call attention to ideas that need more space to explain than an article can offer. And for a relatively unknown author, established publishers still offer more attention than you could generate on your own. But maybe, just maybe, I can do something different with my third book, whatever that may be on.


Cycles of War & Empire

I’ve just read five of Peter Turchin’s books: Historical Dynamics (2003), War & Peace & War (2006), Secular Cycles (2009), Ultra Society (2015), and Ages of Discord (2016). Four of them in the last week. I did this because I love careful big picture thinking, and Turchin is one of the few who does this now on the big question of historical cycles of conflict and empire. While historians today tend to dislike this sort of analysis, Turchin defies them, in part because he’s officially a biologist. I bow to honor his just defiance and careful efforts.

Turchin’s main story is a modest variation on related farmer-era historical cycle stories, such as by Jack Goldstone in 1991, & Ibn Khaldun in 1377 (!):

Different groups have different degrees of cooperation .. cohesiveness and solidarity. .. Groups with high [cohesion] arise on .. frontier .. area where an imperial boundary coincides with a fault line between two [ethnic] communities .. places where between group competition is very intense. .. Only groups possessing high levels of [cohesion] can construct large empires. ..

Stability and internal peace bring prosperity, and prosperity causes population increase .. leads to overpopulation, .. causes lower wages, higher land rents, and falling per capita incomes. At first, low wages and high rents bring unparalleled wealth to the upper class, but as their numbers and appetites grow, they also begin to suffer from falling incomes. Declining standards of life breed discontent and strife. The elites turn to the state for employment and additional income and drive up its expenditures at the same time that the tax revenue declines. .. When the state’s finances collapse, it loses the control of the army and police. Freed from all restraints, strife among the elites escalates into civil war, while the discontent among the poor explodes into popular rebellions.

The collapse of order brings .. famine, war, pestilence, and death. .. Population declines and wages increase, while rents decline. .. Fortunes of the upper classes hit bottom. .. Civil wars thin the ranks of the elites. .. Intra-elite competition subsides, allowing the restoration of order. Stability and internal peace bring prosperity, and another cycle begins. (pp.5-8 W&P&W)

Turchin (& coauthor Nefedov) collect much data to show that this is a robust farmer-era pattern, even if there are many deviations. For example, in Europe, 33 of 43 frontier situations gave rise to big empires, yet only 4 of 57 non-frontier situations did (p.84 HD). “Secular cycles” vary in duration from one to four centuries; Western Europe saw 8 cycles in 22 centuries, while China saw 8 cycles in 21 centuries (p.306,311 SC). During the low instability part of each cycle, instability shows a rough “alternating generations” 50 year cycle of conflict.

I’ll grant that Turchin seems to have documented a reasonably broad pattern, containing most of his claimed elements. Yes, empires tend to start from frontier groups with high cohesion, and core cohesion changes slowly. First there’s war success and a growing area and population, and bigger cities. Eventually can come crowding and falling wages. Inequality also grows, with more richer elites, and this is quite robust, continuing even after wages fall.

While the amount of external war doesn’t change over the cycle, success in war falls. Many signs of social cohesion decline, and eventually there’s more elite infighting, with crime, duels, misspending state revenue, mistreatment of subordinates, and eventually civil war. Big wars can cut population, and also elite numbers and wealth. Eventually war abates and cohesion rises, though not to as high as when the empire started. A new cycle may begin; empires go through 1-3 cycles before being displaced by another empire.

Just as science fiction is often (usually?) an allegory about issues today, I suspect that historians who blame a particular fault for the fall of the Roman Empire tend to pick faults that they also want to warn against in their own era. Similarly, my main complaint about Turchin is that he attributes falling cohesion mainly to increased inequality – an “overproduction” of elites who face “increased competition”. Yes, inequality is much talked about among elites today, but the (less-forager-like) ancients were less focused on it.

As Scheidel said in The Great Leveler, inequality doesn’t seem to cause civil wars, and civil wars tend to increase inequality during and after the war (p.203). External wars reduce inequality for losers and increase it for winners, without changing it much overall. It is only big mass mobilization wars of the 1900s that seem to clearly cause big falls in inequality.

In biology, over multiple generations organisms slowly accumulate genetic mutations, which reduce their fitness. But this degradation is countered by the fact that nature and mates select for better organisms, which have fewer mutations. Similarly, it seems to me that the most straightforward account of the secular cycle is to say since empire founders are selected out of a strong competition for very high cohesion, we should expect cohesion to “regress to the mean” as an empire evolves.

That is, in order to predict most of the observed elite misdeeds later in the secular cycle, all we need to assume is a random walk in cohesion that tends to fall back to typical levels. Yes, we might want to include other effects in our model. For example, civil war may allow a bit more selection for subgroups with more cohesion, and humans may have a psychological inclination to cohere more during and after a big war. But mostly we should just expect cohesion to decline from its initial extreme value, and that’s all a simple model needs.
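The regression-to-the-mean account really does need nothing beyond a mean-reverting random walk; here is a toy simulation. All parameter values (starting cohesion, typical level, reversion rate, noise) are invented for illustration, and the units are arbitrary.

```python
import random

def simulate_cohesion(start=1.0, mean=0.3, revert=0.02, noise=0.02,
                      steps=300, seed=1):
    """Mean-reverting random walk: an empire founded at extreme cohesion
    (start) drifts back toward the population-typical level (mean)."""
    rng = random.Random(seed)
    c = start
    path = [c]
    for _ in range(steps):
        # Each step pulls cohesion a fraction of the way toward the mean,
        # plus a small random shock.
        c += revert * (mean - c) + rng.gauss(0, noise)
        path.append(c)
    return path

path = simulate_cohesion()
print(f"cohesion: start {path[0]:.2f}, end {path[-1]:.2f}")
```

Because founders are selected from the extreme upper tail of cohesion, every such path tends downward on average, which is the whole of the prediction; no inequality-to-cohesion mechanism is required to get the qualitative decline.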

Yes, Turchin claims that we know more about what causes cohesion declines. But while he goes to great effort to show that the data fit his story on which events happen in what order during cycles, I didn’t see him offering evidence to support his claim that inequality causes less cohesion. He just repeatedly gives examples where inequality happened, and then instability happened, as if that proves that the one caused the other.

We already have good reasons to expect new empires to start with a small area, population, and inequality. And this by itself is enough to predict growing population, which eventually crowds to cut wages, and increasing inequality, which should happen consistently in a very wide range of situations. I don’t see a need for, or data support for, the additional hypothesis that inequality cuts cohesion. We may of course discover more things that influence cohesion, and if so we can add them to our basic secular cycle model. But we don’t need such additions to predict most of the cycle features that Turchin describes.

In his latest book, Turchin points out many U.S. signs today of rising inequality and declining social cohesion, and at the end asks “Will we be capable of taking collective action to avoid the worst of the impending demographic-structural crisis? I hope so.” But I worry that his focus on inequality leads people to think they need to fight harder to cut inequality. In contrast, what we mostly need is just to fight less. The main way that inequality threatens to destroy us is that we are tempted to fight over it. Instead, let us try more to see ourselves as an “us” contrasted with a “them”, an us that needs to stick together, in part via chilling and compromising, especially regarding divisive topics like inequality.


On Homo Deus

Historian Yuval Harari’s best-selling book Sapiens mostly talked about history. His new book, Homo Deus, won’t be released in the US until February 21, but I managed to find a copy at the Istanbul airport – it came out in Europe last fall. This post is about the book, and it is long and full of quotes; you are warned.
