Search Results for: singleton

Singletons Rule OK

Reply to: Total Tech Wars

How does one end up with a persistent disagreement between two rationalist-wannabes who are both aware of Aumann’s Agreement Theorem and its implications?

Such a case is likely to turn around two axes: object-level incredulity ("no matter what AAT says, proposition X can’t really be true") and meta-level distrust ("they’re trying to be rational despite their emotional commitment, but are they really capable of that?").

So far, Robin and I have focused on the object level in trying to hash out our disagreement.  Technically, I can’t speak for Robin; but at least in my own case, I’ve acted thus because I anticipate that a meta-level argument about trustworthiness wouldn’t lead anywhere interesting.  Behind the scenes, I’m doing what I can to make sure my brain is actually capable of updating, and presumably Robin is doing the same.

(The linchpin of my own current effort in this area is to tell myself that I ought to be learning something while having this conversation, and that I shouldn’t miss any scrap of original thought in it – the Incremental Update technique. Because I can genuinely believe that a conversation like this should produce new thoughts, I can turn that feeling into genuine attentiveness.)

Yesterday, Robin inveighed hard against what he called "total tech wars", and what I call "winner-take-all" scenarios:

Robin:  "If you believe the other side is totally committed to total victory, that surrender is unacceptable, and that all interactions are zero-sum, you may conclude your side must never cooperate with them, nor tolerate much internal dissent or luxury."

Robin and I both have emotional commitments and we both acknowledge the danger of that.  There’s nothing irrational about feeling, per se; only failure to update is blameworthy.  But Robin seems to be very strongly against winner-take-all technological scenarios, and I don’t understand why.

Among other things, I would like to ask if Robin has a Line of Retreat set up here – if, regardless of how he estimates the probabilities, he can visualize what he would do if a winner-take-all scenario were true.

I Still Don’t Get Foom

Back in 2008 my ex-co-blogger Eliezer Yudkowsky and I discussed his “AI foom” concept, a discussion that we recently spun off into a book. I’ve heard for a while that Nick Bostrom was working on a book elaborating related ideas, and this week his Superintelligence was finally available to me to read, via Kindle. I’ve read it now, along with a few dozen reviews I’ve found online. Alas, only the two reviews on GoodReads even mention the big problem I have with one of his main premises, the same problem I’ve had with Yudkowsky’s views. Bostrom hardly mentions the issue in his 300 pages (he’s focused on control issues).

All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.
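To see how extreme the claimed unevenness is, here is a back-of-envelope sketch; all the numbers are my own illustrative choices, not from the post. Even granting a project a weekly doubling time against the world economy's roughly fifteen-year one, going from a millionth of world resources to half of them takes only about 19 weeks, which is exactly the kind of short window a local explosion requires.

```python
# Back-of-envelope on the "local explosion" claim (all numbers mine):
# a project starts with a millionth of world resources and doubles
# weekly, while the world economy doubles every 15 years.
WORLD_DOUBLING_WEEKS = 15 * 52
PROJECT_DOUBLING_WEEKS = 1

share = 1e-6   # project's share of world resources
weeks = 0
while share < 0.5:
    weeks += 1
    share *= 2 ** (1 / PROJECT_DOUBLING_WEEKS)   # project grows
    share /= 2 ** (1 / WORLD_DOUBLING_WEEKS)     # world grows too

print(weeks)  # ~19 weeks to hold half of world resources
```

The point of the sketch is that the disagreement is not about whether growth can be fast, but about whether a weekly doubling time for a single project is plausible in the first place.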

Devil’s Offers

Previously in series: Harmful Options

An iota of fictional evidence from The Golden Age by John C. Wright:

    Helion had leaned and said, "Son, once you go in there, the full powers and total command structures of the Rhadamanth Sophotech will be at your command.  You will be invested with godlike powers; but you will still have the passions and distempers of a merely human spirit.  There are two temptations which will threaten you.  First, you will be tempted to remove your human weaknesses by abrupt mental surgery.  The Invariants do this, and to a lesser degree, so do the White Manorials, abandoning humanity to escape from pain.  Second, you will be tempted to indulge your human weakness.  The Cacophiles do this, and to a lesser degree, so do the Black Manorials.  Our society will gladly feed every sin and vice and impulse you might have; and then stand by helplessly and watch as you destroy yourself; because the first law of the Golden Oecumene is that no peaceful activity is forbidden.  Free men may freely harm themselves, provided only that it is only themselves that they harm."
    Phaethon knew what his sire was intimating, but he did not let himself feel irritated.  Not today.  Today was the day of his majority, his emancipation; today, he could forgive even Helion's incessant, nagging fears.
    Phaethon also knew that most Rhadamanthines were not permitted to face the Noetic tests until they were octogenarians; most did not pass on their first attempt, or even their second.  Many folk were not trusted with the full powers of an adult until they reached their Centennial.  Helion, despite criticism from the other Silver-Gray branches, was permitting Phaethon to face the tests five years early…

Two Visions Of Heritage

Eliezer and I seem to disagree on our heritage.

I see our main heritage from the past as all the innovations embodied in the design of biological cells/bodies, of human minds, and of the processes/habits of our hunting, farming, and industrial economies.  These innovations are mostly steadily accumulating modular "content" within our architectures, produced via competitive processes and implicitly containing both beliefs and values.  Architectures also change at times.

Since older heritage levels grow more slowly, we switch when possible to rely on newer heritage levels.  For example, we once replaced hunting processes with farming processes, and within the next century we may switch from bio to industrial mental hardware, becoming ems.  We would then rely far less on bio and hunting/farm heritages, though still lots on mind and industry heritages.  Later we could make AIs by transferring mind content to new mind architectures.  As our heritages continued to accumulate, our beliefs and values should continue to change. 

I see the heritage we will pass to the future as mostly avoiding disasters to preserve and add to these accumulated contents.  We might get lucky and pass on an architectural change or two as well.  As ems we can avoid our bio death heritage, allowing some of us to continue on as ancients living on the margins of far future worlds, personally becoming a heritage to the future.

Evolved Desires

To a first approximation, the future will either be a singleton, a single integrated power choosing the future of everything, or it will be competitive, with conflicting powers each choosing how to perpetuate themselves.  Selection effects apply robustly to competition scenarios; some perpetuation strategies will tend to dominate the future.  To help us choose between a singleton and competition, and between competitive variations, we can analyze selection effects to understand competitive scenarios.  In particular, selection effects can tell us the key feature without which it is very hard to forecast: what creatures want.

This seems to me a promising place for mathy folks to contribute to our understanding of the future.  Current formal modeling techniques are actually up to this task, and theorists have already learned lots about evolved preferences:
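As a gesture at the sort of formal modeling meant here, a minimal replicator-dynamics sketch (my own toy numbers, not from the post) shows how selection effects let one perpetuation strategy come to dominate the future even from a rare start:

```python
# Discrete replicator dynamics over three "perpetuation strategies"
# with assumed fitness numbers: each generation, a strategy's population
# share is scaled by its fitness relative to the population average.

def replicator_step(shares, fitness):
    avg = sum(s * f for s, f in zip(shares, fitness))
    return [s * f / avg for s, f in zip(shares, fitness)]

shares = [0.6, 0.3, 0.1]      # initial shares (the fittest starts rare)
fitness = [1.00, 1.05, 1.20]  # per-generation reproductive fitness

for _ in range(200):
    shares = replicator_step(shares, fitness)

print([round(s, 3) for s in shares])  # -> [0.0, 0.0, 1.0]
```

Under competition, forecasting what creatures want reduces (to first order) to asking which preferences sit at the top of such a fitness ordering.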

Total Tech Wars

Eliezer Thursday:

Suppose … the first state to develop working researchers-on-a-chip, only has a one-day lead time. …  If there’s already full-scale nanotechnology around when this happens … in an hour … the ems may be able to upgrade themselves to a hundred thousand times human speed, … and in another hour, …  get the factor up to a million times human speed, and start working on intelligence enhancement. … One could, of course, voluntarily publish the improved-upload protocols to the world, and give everyone else a chance to join in.  But you’d have to trust that not a single one of your partners were holding back a trick that lets them run uploads at ten times your own maximum speed.
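The quoted speedup figures are worth translating into subjective time (my arithmetic, not Eliezer's):

```python
# Converting the quoted speedups into subjective working time: a mind
# running at N times human speed gets N subjective hours per objective
# hour, so 100,000x turns one hour into roughly 11.4 subjective years.
HOURS_PER_YEAR = 24 * 365  # 8,760

for speedup in (100_000, 1_000_000):
    years_per_hour = speedup / HOURS_PER_YEAR
    print(f"{speedup:,}x -> {years_per_hour:.1f} subjective years per objective hour")
```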

Carl Shulman Saturday and Monday:

I very much doubt that any U.S. or Chinese President who understood the issues would fail to nationalize a for-profit firm under those circumstances. … It’s also how a bunch of social democrats, or libertarians, or utilitarians, might run a project, knowing that a very likely alternative is the crack of a future dawn and burning the cosmic commons, with a lot of inequality in access to the future, and perhaps worse. Any state with a lead on bot development that can ensure the bot population is made up of nationalists or ideologues (who could monitor each other) could disarm the world’s dictatorships, solve collective action problems … [For] biological humans [to] retain their wealth as capital-holders in his scenario, ems must be obedient and controllable enough … But if such control is feasible, then a controlled em population being used to aggressively create a global singleton is also feasible.

Every new technology brings social disruption. While new techs (broadly conceived) tend to increase the total pie, some folks gain more than others, and some even lose overall.  The tech’s inventors may gain intellectual property, it may fit better with some forms of capital than others, and those who first foresee its implications may profit from compatible investments.  So any new tech can be framed as a conflict, between opponents in a race or war.

Every conflict can be framed as a total war. If you believe the other side is totally committed to total victory, that surrender is unacceptable, and that all interactions are zero-sum, you may conclude your side must never cooperate with them, nor tolerate much internal dissent or luxury.  All resources must be devoted to growing more resources and to fighting them in every possible way.

When Life Is Cheap, Death Is Cheap

Carl, thank you for thoughtfully engaging my whole brain emulation scenario.  This is my response.

Hunters couldn’t see how exactly a farming life could work, nor could farmers see how exactly an industry life could work.  In both cases the new life initially seemed immoral and repugnant to those steeped in prior ways.  But even though prior culture/laws typically resisted and discouraged the new way, the few groups which adopted it won so big others were eventually converted or displaced.

Carl considers my scenario of a world of near-subsistence-income ems in a software-like labor market, where millions of cheap copies are made of each expensively trained em, and then later evicted from their bodies when their training becomes obsolete.  Carl doesn’t see how this could work:

The Alices now know that Google will shortly evict them, the genocide of a tightly knit group of millions: will they peacefully comply with that procedure? Or will they use politics, violence and any means necessary to get capital from capital-holders so that they can continue to exist? If they seek allies, the many other ems who expect to be driven out of existence by competitive niche exclusion might be interested in cooperating with them. … In order … that biological humans could retain their wealth as capital-holders in his scenario, ems must be obedient and controllable enough that whole lineages will regularly submit to genocide, even though the overwhelming majority of the population expects the same thing to happen to it soon. But if such control is feasible, then a controlled em population being used to aggressively create a global singleton is also feasible.

I see pathologically-obedient personalities neither as required for my scenario, nor as clearly leading to a totalitarian world regime.

“Evicting” brain emulations

Follow up to: Brain Emulation and Hard Takeoff

Suppose that Robin’s Crack of a Future Dawn scenario occurs: whole brain emulations (’ems’) are developed, diverse producers create ems of many different human brains, which are reproduced extensively until the marginal productivity of em labor approaches marginal cost, i.e. Malthusian near-subsistence wages. Ems that hold capital could use it to increase their wealth by investing, e.g. by creating improved ems and collecting the fruits of their increased productivity, by investing in hardware to rent to ems, or otherwise. However, an em would not be able to earn higher returns on its capital than any other investor, and ems with no capital would not be able to earn more than subsistence (including rental or licensing payments). In Robin’s preferred scenario, free ems would borrow or rent bodies, devoting their wages to rental costs, and would be subject to "eviction" or "repossession" for nonpayment.

In this intensely competitive environment, even small differences in productivity between em templates will result in great differences in market share, as an em template with higher productivity can outbid less productive templates for scarce hardware resources in the rental market, resulting in their "eviction" until the new template fully supplants them in the labor market. Initially, the flow of more productive templates and competitive niche exclusion might be driven by the scanning of additional brains with varying skills, abilities, temperament, and values, but later on em education and changes in productive skill profiles would matter more.
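A crude sketch of this dynamic, where the demand curve and all numbers are my own assumptions rather than anything from the post: copying continues while an em's wage exceeds its hardware rent, so wages fall toward subsistence, and a template even slightly less productive than the marginal one cannot cover rent and is priced out.

```python
# Toy model (demand curve and numbers are assumptions): copies of an em
# template are created as long as an em's wage exceeds its hardware
# rent, so wages fall toward subsistence; a slightly less productive
# template then cannot cover rent and is priced out ("evicted").

def wage(n_ems, demand=1e9):
    """Assumed downward-sloping demand for em labor."""
    return demand / n_ems

rental_cost = 10.0   # per-em hardware rent, the "subsistence" level
n = 1_000
while wage(n) > rental_cost * 1.001:   # copying is profitable -> more copies
    n = int(n * 1.1)

marginal_wage = wage(n)                        # now roughly equal to rent
evicted = marginal_wage * 0.98 < rental_cost   # a template with a 2% deficit
print(n, round(marginal_wage, 2), evicted)
```

The design point is that eviction needs no malice or obedience: in this market a small productivity gap alone decides who can pay rent.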

My Local Hospital

Most people don’t think much of congress, but think better of their own representatives.  When people read newspaper articles about events they know personally, they are surprised to see how wrong such articles can be.  These illustrate the importance of checking your general beliefs against specific cases you come across.  In this spirit, I take special notice of a Washington Post column about my local hospital.  It seems regulation has restricted entry, driving up prices and profits of local hospitals, and so the columnist thinks governments should subsidize those hospitals’ efforts to attract rich patients from around the world. 

One of the most successful businesses in the Washington area [is] … a not-for-profit named Inova Health System. Over the past 40 years, what started out as a loose affiliation of three community hospitals in Fairfax County has transformed itself into the dominant provider of hospital and medical services in one of the richest and fastest growing regions of the country. And Inova’s Fairfax facility has become the best hospital in the Washington region, with nationally ranked programs in treatment of cancer and digestive disorders, endocrinology, gynecology and heart surgery.

Much of the credit for Inova’s success goes to Knox Singleton, … one of the toughest, shrewdest and most ambitious business executives in the region.  As chief executive over the past 23 years, he’s bought up local competitors and cleverly used the legal, regulatory and political machinery to deny national hospital chains entry into the market. Now Singleton has visions of competing with the likes of the Mayo, Scripps or Cleveland clinics in attracting wealthy or interesting patients from all around the world. … Inova’s handicap, however, is that unlike other "destination" medical centers, it doesn’t have a world-class research program or a full-blown medical education program that usually goes with it. …

So far, Singleton has failed to convince any of the major medical schools in the region to set up a full-fledged program at Inova’s Fairfax campus. … In the case of Inova’s most likely partner, Virginia Commonwealth University, politics also comes into play: With the state facing a budget deficit, getting downstate legislators to finance a new medical school in Northern Virginia looks like a non-starter to VCU President Eugene Trani.  This is the sort of short-sighted approach to public investment and economic development we have come to expect from Virginia and its legislators.   

This sure seems more about signaling regional pride than about helping sick people cope with rising medical costs. 

Normative Bayesianism and Disagreement

Normative Bayesianism says that you ought to believe as you would if you were an ideal Bayesian believer, and that so believing is what it is to believe rationally. An ideal Bayesian believer (1) has beliefs by having credences, where a credence is a degree of belief in a proposition; (2) has a Prior = a complete consistent set of credences (capitalized to avoid confusing priors = a person’s credences with Priors = a plurality of complete consistent sets of credences), that is to say, a credence function from the sigma algebra of propositions into the reals that is a probability measure; and (3) changes his beliefs on the basis of acquired evidence by updating his credence function via Bayes’ theorem.
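Clause (3), conditionalization, is easy to make concrete. The sketch below (the hypothesis and all numbers are my own toy choices) updates a credence by Bayes' theorem and checks a basic property of conditionalization: updating on evidence piecewise or all at once yields the same posterior.

```python
# Minimal conditionalization sketch (toy numbers): a credence in the
# proposition "the coin is biased toward heads" is updated on observed
# flips via Bayes' theorem. Updating on evidence step by step or all
# at once yields the same posterior credence.

def posterior(prior, flips, p_h_biased=0.8, p_h_fair=0.5):
    """P(biased | flips), where flips is a string of 'H'/'T'."""
    like_biased = like_fair = 1.0
    for f in flips:
        like_biased *= p_h_biased if f == "H" else 1 - p_h_biased
        like_fair *= p_h_fair if f == "H" else 1 - p_h_fair
    num = prior * like_biased
    return num / (num + (1 - prior) * like_fair)

credence = 0.5  # Prior credence that the coin is biased
all_at_once = posterior(credence, "HHTH" + "HTHH")
step_by_step = posterior(posterior(credence, "HHTH"), "HTHH")
assert abs(all_at_once - step_by_step) < 1e-12
print(round(all_at_once, 4))  # 0.7286
```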

Much of the earlier discussion about the rationality of disagreement and the requirement of modesty was advanced on the basis of the claim that Bayesian believers cannot rationally disagree. But there are different versions of what precisely that claim might be.

Strong Bayesian Agreement: Ideal Bayesian believers who have common knowledge of each other's opinion of a proposition agree on that proposition.

Moderate Bayesian Agreement: Ideal Bayesian believers who have rational Priors and common knowledge of each other's opinion of a proposition agree on that proposition.

Weak Bayesian Agreement: Ideal Bayesian believers who have a common Prior and common knowledge of each other's opinion of a proposition agree on that proposition.
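The dynamic behind such results can be sketched in the style of Geanakoplos and Polemarchakis's "We can't disagree forever": repeatedly announcing credences makes them common knowledge. In the toy run below (the state space, event, and partitions are my own example), two ideal Bayesians with a common uniform Prior alternately announce their credence in an event; each announcement refines the other's information partition until the announced credences agree.

```python
from fractions import Fraction

# Two agents share a uniform Prior over four states and differ only in
# their information partitions. They alternately announce their credence
# in `event`; each announcement tells the listener at which states the
# speaker would have said exactly that, refining the listener's partition.
states = {1, 2, 3, 4}
event = {1, 4}
true_state = 1
partitions = {"A": [{1, 2}, {3, 4}],
              "B": [{1, 2, 3}, {4}]}

def cell_of(partition, state):
    return next(c for c in partition if state in c)

def credence(cell):
    return Fraction(len(cell & event), len(cell))

def announce(speaker, listener):
    said = credence(cell_of(partitions[speaker], true_state))
    # States at which the speaker would have made this same announcement:
    consistent = {s for s in states
                  if credence(cell_of(partitions[speaker], s)) == said}
    # The listener splits each cell on that information.
    partitions[listener] = [piece
                            for c in partitions[listener]
                            for piece in (c & consistent, c - consistent)
                            if piece]
    return said

history = [(announce("A", "B"), announce("B", "A")) for _ in range(4)]
print(history)  # credences start at (1/2, 1/3) and settle at (1/2, 1/2)
```

Note how agreement arrives without either agent handing over raw evidence: the announcements themselves carry the information.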
