Tag Archives: Book

MacAskill on Value Lock-In

Will MacAskill has a new book out today, What We Owe The Future, most of which I agree with, even if that doesn’t exactly break new ground. Yes, the future might be very big, and that matters a lot, so we should be willing to do a lot to prevent extinction, collapse, or stagnation. I hope his book induces more careful future analysis, such as I tried in Age of Em. (FYI, MacAskill suggested that book’s title to me.) I also endorse his call for more policy and institutional experimentation. But, as is common in book reviews, I now focus on where I disagree.

Aside from the future being important, MacAskill's main concern in his book is "value lock-in", by which he means a future point in time when the values that control actions stop changing. But he actually mixes up two very different processes by which this result might arise. First, an immortal power with stable values might "take over the world" and prevent deviations from its dictates. Second, in a stable universe, decentralized competition between evolving entities might pick out some most "fit" values to be most common.

MacAskill’s most dramatic predictions are about this first “take over” process. He claims that the next century or so is the most important time in all of human history:

We hold the entire future in our hands. … By choosing wisely, we can be pivotal in putting humanity on the right course. … The values that humanity adopts in the next few centuries might shape the entire trajectory of the future. … Whether the future is governed by values that are authoritarian or egalitarian, benevolent or sadistic, exploratory or rigid, might well be determined by what happens this century.

His reason: we will soon create AGI, or ems, who, being immortal, have forever stable values. Some org will likely use AGI to “take over the world”, and freeze in their values forever:

Advanced artificial intelligence could enable those in power to lock in their values indefinitely. … Since [AGI] software can be copied with high fidelity, an AGI can survive changes in the hardware instantiating it. AGI agents are potentially immortal. These two features of AGI – potentially rapid technological progress and in-principle immortality – combine to make value lock-in a real possibility. …

Using AGI, there are a number of ways that people could extend their values much farther into the future than ever before. First, people may be able to create AGI agents with goals closely aligned with their own which would act on their behalf. … [Second,] the goals of an AGI could be hard-coded: someone could carefully specify what future they want to see and ensure that the AGI aims to achieve it. … Third, people could potentially “upload”. …

International organizations or private actors may be able to leverage AGI to attain a level of power not seen since the days of the East India Company, which in effect ruled large areas of India. …

A single set of values could emerge. … The ruling ideology could in principle persist as long as civilization does. AGI systems could replicate themselves as many times as they wanted, just as easily as we can replicate software today. They would be immortal, freed from the biological process of aging, able to create back-ups of themselves and copy themselves onto new machines. … And there would no longer be competing value systems that could dislodge the status quo. …

Bostrom’s book Superintelligence. The scenario most closely associated with that book is one in which a single AI agent … quickly developing abilities far greater than the abilities of all of humanity combined. … It would therefore be incentivized to take over the world. … Recent work has looked at a broader range of scenarios. The move from subhuman intelligence to superintelligence need not be ultrafast or discontinuous to pose a risk. And it need not be a single AI that takes over; it could be many. …

Values could become even more persistent in the future if a single value system were to become globally dominant. If so, then the absence of conflict and competition would remove one reason for change in values over time. Conquest is the most dramatic pathway … and it may well be the most likely.

Now mere immortality seems far from sufficient to create either value stability or a takeover. On takeover, not only is a decentralized world of competing immortals easy to imagine, but in fact until recently individual bacteria, who very much compete, were thought to be immortal.

On values, immortality also seems far from sufficient to induce stable values. Human organizations like firms, clubs, cities, and nations seem to be roughly immortal, and yet their values often greatly change. Individual humans change their values over their lifetimes. Computer software is immortal, and yet its values often change, and it consistently rots. Yes, as I mentioned in my last post, some imagine that AGIs have a special value modularity that can ensure value stability. But we have many good reasons to doubt that scenario.

Thus MacAskill must be positing that a power who somehow manages to maintain stable values takes over and imposes its will everywhere forever. Yet the only scenario he points to that seems remotely up to this task is Bostrom’s foom scenario. MacAskill claims that other scenarios are also relevant, but doesn’t even try to show how they could produce this result. For reasons I’ve given many times before, I’m skeptical of foom-like scenarios.

Furthermore, let me note that even if one power came to dominate Earth’s civilization for a very long time, it would still have to face competition from other grabby aliens in roughly a billion years. If so, forever just isn’t at issue here.

While MacAskill doesn’t endorse any regulations to deal with this stable-AGI-takes-over scenario, he does endorse regulations to deal with the other path to value stability: evolution. He wants civilization to create enough of a central power that it could stop change for a while, and also limit competition between values.

The theory of cultural evolution explains why many moral changes are contingent. … the predominant culture tends to entrench itself. … results in a world increasingly dominated by cultures with traits that encourage and enable entrenchment and thus persistence. …

If we don’t design our institutions to govern this transition well – preserving a plurality of values and the possibility of desirable moral progress. …

A second way for a culture to become more powerful is immigration [into it]. … A third way in which a cultural trait can gain influence is if it gives one group greater ability to survive or thrive in a novel environment. … A final way in which one culture can outcompete another is via population growth. … If the world converged on a single value system, there would be much less pressure on those values to change over time.

We should try to ensure that we have made as much moral progress as possible before any point of lock-in. … As an ideal, we could aim for what we could call the long reflection: a stable state of the world in which we are safe from calamity and can reflect on and debate the nature of the good life, working out what the most flourishing society would be. … It would therefore be worth spending many centuries to ensure that we’ve really figured things out before taking irreversible actions like locking in values or spreading across the stars. …

We would need to keep our options open as much as possible … a reason to prevent smaller-scale lock-ins … would favor political experimentation – increasing cultural and political diversity, if possible. …

That one society has greater fertility than another or exhibits faster economic growth does not imply that society is morally superior. In contrast, the most important mechanisms for improving our moral views are reason, reflection, and empathy, and the persuasion of others based on those mechanisms. … Certain forms of free speech would therefore be crucial to enable better ideas to spread. …

International norms or laws preventing any single country from becoming too populous, just as anti-trust regulation prevents any single company from dominating a market. … The lock-in paradox. We need to lock in some institutions and ideas in order to prevent a more thorough-going lock-in of values. … If we wish to avoid the lock-in of bad moral views, an entirely laissez-faire approach would not be possible; over time, the forces of cultural evolution would dictate how the future goes, and the ideologies that lead to the greatest military power and that try to eliminate their competition would suppress all others.

I’ve recently described my doubts that expert deliberation has been a large force in value change so far. So I’m skeptical that it will be a large force in the future. And the central powers (or global mobs) sufficient to promote a long reflection, or to limit nations competing, seem to risk creating value stability via the central dominance path discussed above. MacAskill doesn’t even consider this kind of risk from his favored regulations.

While competition may produce a value convergence in the long run, my guess is that convergence will happen a lot faster if we empower central orgs or mobs to regulate competition. I think that a great many folks prefer that latter scenario because they believe we know what are the best values, and fear that those values would not win an evolutionary competition. So they want to lock in current values via regs to limit competition and value change.

To his credit, MacAskill is less confident that currently popular values are in fact the best values. And his favored solution of more deliberation probably wouldn’t hurt. I just don’t think he realizes just how dangerous are central powers able to regulate to promote deliberation and limit competition. And he seems way too confident about the chance of anything like foom soon.

Injustice For All

In their new book Injustice for All: How Financial Incentives Corrupted and Can Fix the US Criminal Justice System, Chris Surprenant and Jason Brennan suggest many ways to change the US crime system.

They spend the most space arguing against jail; they want to cut long jail terms, and to offer most criminals a choice of jail or non-jail punishments such as caning. (I also dislike jail.)

This and most of their other suggestions can be seen as fitting a theme of favoring defendants more, relative to government. For example, they want a lot fewer acts to be punished at all, more bad acts to be punished as torts instead of as crimes, loser pays lawyer/court costs, crime law to be clear and simple, a requirement to show the accused could easily have known the act was criminal, no cash bail, no private prisons, no asset forfeiture, fewer no-knock raids, the same lawyers and resources given to public defense as to prosecution, juries to choose between punishment plans offered by prosecution & defense, notifying juries of their jury nullification ability, and more grand juries before and during trials who can cancel trials.

While this theme is quite popular today, I’m wary of this focus on changing policy to favor defendants over government. Yes the pendulum may now favor government too much, but someday it will swing the other way, and I’d like to do more than just help push this one pendulum back and forth.

Many other suggestions in the book fall under a theme of spreading out incentives, to make incentives weaker for any one party. These authors attribute many current problems to overly strong incentives, such as those that induce small towns to set up speed traps. They want government-managed victim restitution funds, no elected judges or prosecutors, local governments to pay more for jail costs, state governments to pay more non-jail costs, and no revenue given to police agencies based on particular cases. And they suggest that the state pay to investigate torts:

For most tort claims, the state would need to bear the responsibility and financial cost of collecting and processing evidence, as well as finding and interviewing witnesses. This information would then be available to both the would-be plaintiff and defendant.

Instead of having the state manage tort investigations, I’d rather we did more to ensure tort damages can be paid, perhaps by adding bounties. Then we could rely more on private incentives to investigate well, instead of trusting the state to do that. More generally, I want to introduce stronger elements of paying for results into criminal law, instead of just weakening incentives all around to avoid bad incentive problems.

Below the fold are many quotes from the book: Continue reading "Injustice For All" »

Firms Are Not Aliens

The title of Tyler Cowen’s new book, Big Business: A Love Letter to an American Anti-Hero, is pretty clear on its topic and stance:

Business, quite simply, has become underrated, and thus I am writing a contrarian book that ought not to be contrarian at all. All of the criticisms one might mount against the corporate form—some of which are valid—pale in contrast to two straightforward and indeed essential virtues. First, business makes most of the stuff we enjoy and consume. Second, business is what gives most of us jobs.

You might think Cowen would defend the claim that big business is a better source for stuff and jobs, compared to other sources like government agencies, non-profits, worker cooperatives, small business, communes, sharing, or self-employment & home production. But in fact Cowen shows little interest in comparing big business to such alternatives. Instead, his book seems to be all about “mood affiliation”, to use a Cowen term. Two reviewers agree:

His theme is correct. Big Business is not the ogre it is made out to be. It is no more deserving of being scapegoated than are other familiar targets. (more)

The book does have a slightly catalogue-ish feel to it, as though Professor Cowen has been (as I suspect) keeping a list of college students’ most common complaints about Big Business (and about capitalism generally) and addressing them in series. (more)

While most of Cowen’s responses to complaints seem spot on to me, this whole situation seems a sad commentary on our complainy culture. Complaints are mainly useful when they push us to evaluate particular proposed fixes. But our political culture today seems largely a game of trying to max the people × time × loudness volume of complaints heard about rivals, regardless of the relative validity or importance of such complaints. Supporting this game, Cowen doesn’t talk much about possible fixes; his focus seems on the overall complaint score.

My main disagreement with Cowen, and it’s a big one, is that, in his last chapter, his “love” letter reads more like this “praise” of men from the movie Dangerous Liaisons:

Men enjoy the happiness they feel. We [women] can only enjoy the happiness we give. [Men] are not capable of devoting themselves exclusively to one person. So [for a woman] to hope to be made happy by love is a certain cause of grief. (more)

Cowen tells us to love big businesses, but not to expect them to love us back, as they are incurably selfish aliens:

Why is business so often so unpopular? I think the answers are pretty deeply rooted in human nature: we cannot help judging business by many of the same standards we apply to people. …We turn corporations into people in our minds, and also in our hearts. … we imbue them with human qualities. … It can mislead us, and it is a kind of shorthand that has pitfalls and hazards. … [are] external, autonomous, selfish corporate agents — agents who take our wishes into account only insofar as it suits them. … should be judged not as friends but as abstract, shark-like legal entities devoted to commercial profit. … It is emotionally very hard for people to internalize emotionally the true and correct picture of those businesses as partaking in an impersonal order based on mostly selfish, profit-seeking behavior. …

We judge companies as we might judge a person, sometimes even a family member: in terms of connection and standards of integrity. This is a mistake, because corporations are legal constructs and abstract entities, and they do not have purposes, goals, or feelings of their own. … Precisely because we tend to judge corporations by the standards we use to judge people, it is hard for us to accept the partially venal or sometimes amoral pecuniary or greedy motives operating behind the scenes, and so we moralize about companies instead of trying to understand them. …

When it comes to politics and public policy, we need to distance ourselves from such emotional and anthropomorphized attitudes. We need to stop being loyal to corporations for the sake of loyalty and friendship, and we also need to stop being disappointed in corporations all the time, as if we should be judging them by the standards we apply to individual human beings and particularly our friends. Instead, we should view companies more dispassionately, as part of an abstract legal and economic order with certain virtues and also plenty of imperfections. …

It doesn’t quite work to think of businesses as our friends. Friendship is based in part on an intrinsic loyalty that transcends the benefit received in any particular time and place. Many friendships also rely on an ongoing exchange of reciprocal benefits, yet without direct consideration each and every time of exactly how much reciprocity is needed. In addition to the self-interested joys of friendly togetherness, friendship is about commonality of vision, a wish to see your own values reflected in another, a sense of potential shared sacrifice, and a (partial) willingness to put the interest of the other person ahead of your own, without always doing a calculation about what you will get back.

A corporation just doesn’t fit this mold in the same way. A business may wish to appear to be an embodiment of friendly reciprocity, but it is more like an amoral embodiment of principles that usually but not always work out for the common good. The senior management of the corporation has a legally binding responsibility to maximize shareholder profits, at least subject to the constraints of the law and perhaps other constraints embodied in the company’s charter or bylaws. The exact nature of this fiduciary responsibility will vary, but it never says the company ought to be the consumer’s friend, at least not above and beyond when such friendship may prove instrumentally valuable to the ends of the company, including profit.

In this setting, companies will almost always disappoint us if we judge them by the standards of friendship, as the companies themselves are trying to trick us into doing. Companies can never quite meet the standards of friendship. They’re not even close acquaintances. At best they are a bit like wolves in sheep’s clothing, but these wolves bring your food rather than eat you.

Oddly, Cowen spends much of his book arguing differently, saying why firms have incentives to, and in fact do, act more trustworthy and reliably than do most humans and other organizations. And that firms are not in fact simple profit maximizers:

The common portrait of corporations as consisting entirely of selfish or greedy individuals is not the best understanding of big business. … Goals other than simple profit maximization often end up boosting both business profits and social benefits. For example, the people who work at SpaceX, … often really do believe in the dream of colonizing other planets and the stars. … Friedman failed to understand that the cultural, intellectual, ideological, and even emotional foundations of business go far beyond an attachment to profit. People care about what they do, and they seek meaning through their jobs. Profit maximization is best thought of as a convenient fiction that does a fairly good job boosting profits precisely because it rejects a sole emphasis on profits as a goal. … most successful businesses have a kind of messianic view of their role in society … A business that instills in its workers and managers a sincere belief in such goals has a better chance of building a durable competitive advantage than a business that does not. …

I’m more likely to think of a corporation as a carrier of reputation and a kind of metaphorical personhood, and less likely to think of a corporation as a means of minimizing transactions costs, as many mainstream economists have suggested.

Yet in the end Cowen wants to warn us that all this good stuff is an illusion covering an incurably selfish core. His whole picture seems greatly at odds with the view I elaborate in my book The Elephant in the Brain: Hidden Motives in Everyday Life: that we humans are similarly selfish at core and yet induced to act and look good by our social context and incentives. As friends, we may not consciously consider “each and every time of exactly how much reciprocity is needed”, but unconsciously we very much consider such things. I say that big business is no more essentially selfish than are ordinary humans, and that Cowen has offered no evidence to the contrary.

We humans have had many thousands of years of experience relating to people who need us but are much more powerful than us, and to large social organizations that sit in similar human-like social roles. These are the kinds of human-like relations that we can reasonably expect to have with big business today, and that we do actually have. In those expected roles, big business is not disappointing us nor fooling us; they are in fact more reliable and trustworthy than most of our other relation partners. They can and do meet high standards of integrity. In their roles, they very much do have understandable and relatable “purposes, goals, or feelings of their own”. After so many millennia, this isn’t some strange new situation to which we are poorly adapted. We aren’t at all fooled into thinking of big business as equal lovers or drinking buddies.

Why then do we complain so much about big business, via words, taxes, regulations, and a low political influence? Because just by being big, having money, and seeking money, big business violates ancient forager norms against inegalitarian distribution, overt selfishness, and especially against overtly selfish efforts to achieve unequal dominance. (Money is seen as a power to dominate.)

We know that the continued existence of those forager norms primes audiences to accept most any complaint we might make against big business. And as in a marriage, we are happy to take advantage of opportunities to complain, even when we have no intention of breaking off the relationship.

Added 5p 22Apr: On his blog, Cowen cryptically replies:

For purposes of context, I see Robin as leading a sustained mood affiliation crusade against hypocrisy, rather than performing comparative analysis of hypocrisy vs. the relevant alternatives.

That may be true, but I don’t see how it is responsive to my critique. Hypocrisy is when someone’s real motives differ from those they present. But I’m struggling to understand Cowen’s comment via somehow mapping the hypocrisy concept onto my post above. The most obvious example of hypocrisy in the above is when ordinary folks pretend to mind big firms’ behavior, but really don’t mind nearly as much as they pretend. I didn’t complain about that hypocrisy in the above, and I can’t see how that application of the hypocrisy concept is relevant to the disagreement that I identify, where he says firms at core just can’t and shouldn’t be trusted, and I say they can be trusted as well as other individuals and organizations. So I guess I’m just not understanding him.

Tales of the Turing Church

My futurist friend Giulio Prisco has a new book: Tales of the Turing Church. In some ways, he is a reasonable skeptic:

I think all these things – molecular nanotechnology, radical life extension, the reanimation of cryonics patients, mind uploading, superintelligent AI and all that – will materialize one day, but not anytime soon. Probably (almost certainly if you ask me) after my time, and yours. … Biological immortality is unlikely to materialize anytime soon. … Mind uploading … is a better option for indefinite lifespans … I don’t buy the idea of a “post-scarcity” utopia. … I think technological resurrection will eventually be achieved, but … in … more like many thousands of years or more.

However, the core of Prisco’s book makes some very strong claims:

Future science and technology will permit playing with the building blocks of spacetime, matter, energy and life in ways that we could only call magic and supernatural today. Someday in the future, you and your loved ones will be resurrected by very advanced science and technology. Inconceivably advanced intelligences are out there among the stars. Even more God-like beings operate in the fabric of reality underneath spacetime, or beyond spacetime, and control the universe. Future science will allow us to find them, and become like them. Our descendants in the far future will join the community of God-like beings among the stars and beyond, and use transcendent technology to resurrect the dead and remake the universe. …

God exists, controls reality, will resurrect the dead and remake the universe. … Now you don’t have to fear death, and you can endure the temporary separation from your loved departed ones. … Future science and technology will validate and realize all the promises of religion. … God elevates love and compassion to the status of fundamental forces, key drivers for the evolution of the universe. … God is also watching you here and now, cares for you, and perhaps helps you now and then. … God has a perfectly good communication channel with us: our own inner voice.

Now I should note that he doesn’t endorse most specific religious dogma, just what religions have in common:

Many religions have really petty, extremely parochial aspects related to what and when one should eat or drink or what sex is allowed and with whom. I don’t care for this stuff at all. It isn’t even geography – it’s local zoning norms, often questionable, sometimes ugly. … [But] the common cores, the cosmological and mystical aspects of different religions, are similar or at least compatible. 

Even so, Prisco is making very strong claims. And in 339 pages, he has plenty of space to argue for them. But Prisco instead mostly uses his space to show just how many people across history have made similar claims, including folks associated with religion, futurism, and physics. Beyond this social proof, he seems content to say that physics can’t prove him wrong: Continue reading "Tales of the Turing Church" »

Can Foundational Physics Be Saved?

Thirty-four years ago I left physics with a Masters degree, to start a nine year stint doing AI/CS at Lockheed and NASA, followed by 25 years in economics. I loved physics theory, and given how far physics had advanced over the previous two 34 year periods, I expected to be giving up many chances for glory. But though I didn’t entirely leave (I’ve since published two physics journal articles), I’ve felt like I dodged a bullet overall; physics theory has progressed far less in the last 34 years, mainly because data dried up:

One experiment after the other is returning null results: No new particles, no new dimensions, no new symmetries. Sure, there are some anomalies in the data here and there, and maybe one of them will turn out to be real news. But experimentalists are just poking in the dark. They have no clue where new physics may be to find. And their colleagues in theory development are of no help.

In her new book Lost in Math, theoretical physicist Sabine Hossenfelder describes just how bad things have become. Previously, physics foundations theorists were disciplined by a strong norm of respecting the theories that best fit the data. But with less data, theorists have turned to mainly judging proposed theories via various standards of “beauty” which advocates claim to have inferred from past patterns of success with data. Except that these standards (and their inferences) are mostly informal, change over time, differ greatly between individuals and schools of thought, and tend to label as “ugly” our actual best theories so far.

Yes, when data is truly scarce, theory must suggest where to look, and so we must choose somehow among as-yet-untested theories. The worry is that we may be choosing badly:

During experiments, the LHC creates about a billion proton-proton collisions per second. … The events are filtered in real time and discarded unless an algorithm marks them as interesting. From a billion events, this “trigger mechanism” keeps only one hundred to two hundred selected ones. … That CERN has spent the last ten years deleting data that hold the key to new fundamental physics is what I would call the nightmare scenario.

One bad sign is that physicists have consistently, confidently, and falsely told each other and the public that big basic progress was coming soon: Continue reading "Can Foundational Physics Be Saved?" »

On the Future by Rees

In his broad-reaching new book, On the Future, aging famous cosmologist Martin Rees says aging famous scientists too often overreach:

Scientists don’t improve with age—they ‘burn out’. … There seem to be three destinies for us. First, and most common, is a diminishing focus on research. …

A second pathway, followed by some of the greatest scientists, is an unwise and overconfident diversification into other fields. Those who follow this route are still, in their own eyes, ‘doing science’—they want to understand the world and the cosmos, but they no longer get satisfaction from researching in the traditional piecemeal way: they over-reach themselves, sometimes to the embarrassment of their admirers. This syndrome has been aggravated by the tendency for the eminent and elderly to be shielded from criticism. …

But there is a third way—the most admirable. This is to continue to do what one is competent at, accepting that … one can probably at best aspire to be on a plateau rather than scaling new heights.

Rees says this in a book outside his initial areas of expertise, a book that has gained many high profile fawning uncritical reviews, a book wherein he whizzes past dozens of topics just long enough to state his opinion, but not long enough to offer detailed arguments or analysis in support. He seems oblivious to this parallel, though perhaps he’d argue that the future is not “science” and so doesn’t reward specialized study. As the author of a book that tries to show that careful detailed analysis of the future is quite possible and worthwhile, I of course disagree.

As I’m far from prestigious enough to get away with a book like his, let me instead try to get away with a long, probably ignored, blog post wherein I take issue with many of Rees’ claims. While I of course also agree with much else, I’ll focus on disagreements. I’ll first discuss his factual claims, then his policy/value claims. Quotes are indented; my responses are not. Continue reading "On the Future by Rees" »

Age of Em Update

My first book, The Age of Em: Work, Love, and Life When Robots Rule the Earth, is moving along toward its June 1 publication date (in UK, a few weeks later in US). A full book jacket is now available.

Blurbs are also now available, from: Sean Carroll, Marc Andreessen, David Brin, Andrew McAfee, Erik Brynjolfsson, Matt Ridley, Hal Varian, Tyler Cowen, Vernor Vinge, Steve Fuller, Bryan Caplan, Gregory Benford, Kevin Kelly, Ben Goertzel, Tim Harford, Geoffrey Miller, Tim O’Reilly, Scott Aaronson, Ramez Naam, Hannu Rajaniemi, William MacAskill, Eliezer Yudkowsky, Zach Weinersmith, Robert Freitas, Neil Jacobstein, Ralph Merkle, and Michael Chwe.

Kindle and Audible versions are in the works, as is a Chinese translation.

I have a page that lists all my talks on the book, many of which I’ll also post about here at this blog.

Abstracts for each of the thirty chapters should be available to see within a few weeks.

Age of Em in Amsterdam

At 6pm on Tuesday, 24 November 2015, I’ll speak at Amsterdam University College on:

The Age of Em: Work, Love and Life when Robots Rule the Earth

Robots may one day rule the world, but what is a robot-ruled earth like? Many think the first truly smart robots will be brain emulations or ems. Scan a human brain, then run a model with the same connections on a fast computer and you have a robot brain, but recognisably human. Ems make us question common assumptions of moral progress because they reject many of the values we hold dear. Applying decades of expertise in physics, computer science and economics, Robin Hanson uses standard theories to paint a detailed picture of a world dominated by ems. (more)

The day before I’ll speak on the same subject at an invitation-only session of CIO Day. Added: I’ll also be on a panel on Enterprise Prediction Markets during the more open session on Tuesday.
