Open Thread

This is our monthly place to discuss topics that have not appeared in recent posts.

  • http://kruel.co/ Alexander Kruel

    Robin Hanson,

    Are you going to review Eliezer Yudkowsky’s new paper ‘Intelligence Explosion Microeconomics’?

    It is supposed to be “a more coherent successor to the AI Foom Debate”.

    • burger flipper

      Right after he tackles the Harry Potter thing, I’d wager

      • Stephen Diamond

        Since this is an open thread: Would someone explain to me why nerds have this great fascination with Harry Potter? I saw a few minutes of a Harry Potter film while in my dentist’s waiting room, and it seemed like an ordinary children’s story, with too much mysticism for me to recommend it to any real child.

      • http://entitledtoanopinion.wordpress.com TGGP

        I wasn’t even aware that nerds generally have a fascination with Potter. I thought of it as kid-lit. Eliezer wrote a fan-fic, but I didn’t know if it was because it appealed to nerds or was simply the best-selling book series in a long time.

      • http://www.facebook.com/CronoDAS Douglas Scheinberg

        The films are decent but not really anything special. The books, however, deserve at least some of the acclaim they’re given; I’ve read a lot of fantasy novels marketed to adults and the Harry Potter series is better than most. (FWIW, the first book is probably the worst in the series.) I wouldn’t have expected it to produce the phenomenon that it did, but I would have been surprised if they failed completely.

        There’s also bandwagon effects; like Star Wars and Star Trek, almost everyone is at least familiar with Harry Potter, so you can mention it and people will know what you’re talking about – and if someone doesn’t, there is probably enough social pressure that they’re going to try to find out.

      • J O

        Might be just timing; a lot of people in Gen Y grew up reading Harry Potter, making a Harry Potter rationality fanfic far more accessible for a lot of young people than, say, a Pride and Prejudice rationality fanfic.

        In addition to that, I’ve read all the HP books plus HP:MoR, and I must say that it is actually quite a good candidate for what Yudkowsky has done to it. In fact I find HP:MoR vastly more interesting than the original, which is indeed just a children’s story. Yudkowsky did something to it that I never knew it needed, but boy it sure did.

      • Philip Goetz

        Male-to-female ratio. The two great fiction franchises of the Millennials are Harry Potter and Twilight. Choose one.

    • Stephen Diamond

      Where has Yudkowsky ever demonstrated the barest understanding of microeconomics?

    • Alexander Gabriel

      I am glad you posted this. After a quick perusal, I conclude that the “AI Go Foom” scenario is not as crazy as I had thought.

    • Doug

      For any computational task, whether it’s running a mind or processing payroll, there are only two ways to increase its power: improve the algorithms or add more computing resources.

      For the latter strong A.I. is nothing special. Yes, a strong A.I. could use its intelligence to do things to make money: pick stocks, design inventions, negotiate real estate. Then parlay that money into more and better hardware to get smarter.

      But 50 years of experience with computers tells us that returns to raw computing power are decreasing, not increasing. This is fundamentally no different today than buying a computing rig, renting out processing time and using the proceeds to buy more computers. Needless to say such a strategy will fall far short of “taking over the world.”

      The only option for a foom is a strong A.I. recursively improving its algorithms on a fixed set of hardware. We really only have a very vague sense of the type of algorithms that will run minds, but such a scenario posits that there’s increasing returns to algorithmic improvement.

      There are some fields in computer science where algorithmic improvement offered orders of magnitude speedups against naive implementation, but they’re generally few and far between.

      For a strong A.I. to foom, the mind-emulation algorithms will have to be really, really inefficient, with a lot of low-hanging fruit for improvement. Otherwise it won’t be able to get a commanding edge quickly over the space of its competitor A.I.s.

      It might improve itself quickly, but if, for example, it can only get a 100x improvement, it will still contain only a small percentage of the total intelligence, as long as it initially controls 1% or less of computer hardware. This seems reasonable based on the current distribution of computer hardware across top-level systems.
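
      That last bit of arithmetic can be sketched in a few lines. This is a toy model under a made-up assumption (effective capability is simply hardware share times algorithmic efficiency); none of the numbers are data:

```python
# Toy model: effective capability ~ (hardware share) x (algorithmic efficiency).
# All numbers are illustrative assumptions, not data.

def capability_share(hardware_share: float, algo_speedup: float) -> float:
    """Fraction of total capability held by an A.I. controlling
    `hardware_share` of the world's hardware and running algorithms
    `algo_speedup` times more efficient than everyone else's
    (everyone else's efficiency is normalized to 1.0)."""
    ai = hardware_share * algo_speedup
    rest = (1.0 - hardware_share) * 1.0
    return ai / (ai + rest)

for speedup in (10, 100, 1000):
    print(f"1% of hardware, {speedup:>4}x algorithms -> "
          f"{capability_share(0.01, speedup):.1%} of total capability")
```

      With these made-up numbers, whether the resulting share counts as a “commanding edge” turns out to be quite sensitive to the assumed speedup.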

      • Alexander Gabriel

        “The only option for a foom is a strong A.I. recursively improving its algorithms on a fixed set of hardware.”

        I just want to point out this is not an accurate description of Yudkowsky’s concept. E.g.,

        It should also be noted that the “global” scenario need not include all of the previous civilization inside its globe. Specifically, biological humans running on 200 Hz neurons with no read-write ports would tend to be left out of the FOOM, unless some AIs are specifically motivated to help humans as a matter of final preferences.

        There is a distinction between a local foom and global foom.

      • http://kruel.co/ Alexander Kruel

        Thanks Doug. Yesterday I summarized some related thoughts here: AI vs. humanity and the lack of concrete scenarios.

  • Trent Fowler

    I’ve been giving thought to writing an article on what, for lack of knowing the correct term, I’m calling ‘hyperpunishment’. I’ve yet to work out any of the details, but I have a fuzzy notion that WBE and uploading technology will allow for a whole new class of retributive actions and deterrents, the ethics of which should be considered well in advance of anyone making use of them.

    There simply is nothing we can currently do to a criminal which would match, say, forcibly uploading their consciousness into a computer and looping their worst memory for a hundred subjective years. Or making them relive their crimes from the perspective of their victims.

    It’s been said that, the universe being unfair, bad people routinely get away with their crimes, and besides what could you do to Genghis Khan or Adolf Hitler that would match what they’d done wrong? Future technologies may allow punishments in proportion to even the worst trespasses, but it remains unclear whether we should ever make use of them, and whether they’d be effective at demotivating would-be offenders.

    Has there been prior work on this? As deeply unpleasant a topic as it is it bears careful thought, given its potential for misuse.

    • Christian Kleineidam

      Torture already exists. We could already hyperpunish by using electro torture for years on people.

      We already agree that we shouldn’t use torture for punishing.

      • Trent Fowler

        Hmmm, it’s a fair point, I’ll have to give it some more thought. I still think it’s different enough to warrant some analysis. Are you of the opinion that there’s nothing new to say here?

      • Christian Kleineidam

        To the extent that there’s something new, I can only imagine it’s about some people being okay with you mistreating a copy of them because they don’t identify with their copy.

      • Stephen Diamond

        We already agree that we shouldn’t use torture for punishing.

        Depends who “we” is. The Guantanamo detainees (many of whom are known to be innocent of terrorism) are being tortured for punishment: the force feeding techniques are condemned by international physicians’ organizations as torture. (You may disagree that the motive is punishment, like I think it is. But still…) I’m waiting for the outcry from the U.S. or the equivalent concern granted to the relative trivia of denying tea party politicians nonpolitical tax status.

        What mind copying would change is that death is no respite. As in Banks’s fiction, it could be practically eternal. Add to that the possibility that a subjective 100 years could consume little real time.

        If I took Robin’s technological fantasies as something other than an intellectual game, it could be another very strong reason for opposing their realization.

      • Trent Fowler

        “What mind copying changes is that death is no respite.”

        Yeah, you could punish someone for the lifespan of a star, and though I’ve never been waterboarded (thank god), I can’t imagine it would compare to what could be done if you reached into a brain and turned its suffering knob to 11.

      • Christian Kleineidam

        If you want to punish for deterrence purposes, force feeding isn’t the way you get maximum deterrence.

        I think the straightforward purpose of force feeding is to keep people alive.

      • Stephen Diamond

        We already agree that we shouldn’t use torture for punishing.

        Let’s accept your collective pronoun, arguendo. Then what makes you think these agreements aren’t reversible? Robin thinks an em society will use punishment more freely. As a hypothetical, this seems reasonable. An em society might well have virtual hells.

      • Trent Fowler

        Link to Hanson’s arguments for the view that em society will be quicker to punish?

    • BeoShaffer

      I don’t know about non-fiction work, but it has been addressed in science fiction. Surface Detail by Iain M. Banks is probably the most famous example.

      • Trent Fowler

        I’ll check that out, thanks.

    • Margin

      If you have the resources to loop memories for hundreds of years to punish someone, you can also reward good behavior with mind-boggling offers.

      Also if it prevents hyperholocausts by deterrence, hyperpunishing Hyperhitlers is an idea I find quite likable.

      • Trent Fowler

        Yeah, the inverse had occurred to me. I just wonder what it would do to incentives.

      • Stephen Diamond

        Creating virtual hells is qualitatively easier than creating virtual heavens. Ask people whether they would want to live in a world where they had awesomely positive experiences that aren’t real but felt like they were, many will say no. Ask people whether they would want to avoid living in a world where they had terribly negative experiences they knew weren’t real, and everyone will say yes.

        [The asymmetry makes cryonics a bad bet even if you believe—due to the inverted Pascal’s wager.]

      • http://rulerstothesky.wordpress.com/ Trent Fowler

        A point worth thinking about. This reminded me of some research I read not long ago which found that people are more likely to think other people are wrong about their happiness (as in the case of drug addicts who claim to be happy) than they are to think other people are wrong about their sadness (as in the case of people whose lives seem fulfilling but who are nevertheless depressed).

        “[The asymmetry makes cryonics a bad bet even if you believe—due to the inverted Pascal’s wager.]”

        It seems like that’s only true if it’s also qualitatively easier to make real heavens as opposed to real hells. Granted an intelligence explosion yielding unfriendly AI would probably be very bad, but in the short term at least it seems reasonable to assume that the influence of technology will continue to be positive. So cryonics would be a good bet in the medium term because I’m likely to wake up in a world where whatever problems I have today are easier to solve than they are now.

      • Stephen Diamond

        It seems like that’s only true if it’s also qualitatively easier to make real heavens as opposed to real hells.

        I assume it’s impossible to create either a real heaven or a real hell. (Edit. Although it’s actually a lot easier to create something approaching a hell than a heaven.) But there’s a potential infinity of imaginary terribleness that can be visited on someone–and some real likelihood that you (if you can believe it’s you) will “wake up” in one.

        [A higher probability, I’d think, that Robin would end up in one. He’s something of a public figure. Maybe a future society will have demonized him.]

    • http://entitledtoanopinion.wordpress.com TGGP

      This is relevant to the “basilisk” argument that is prohibited from being discussed at Less Wrong.

      • http://rulerstothesky.wordpress.com/ Trent Fowler

        That was a very valuable reference, thank you. I had no idea about this before your comment.

      • Humbug
      • http://rulerstothesky.wordpress.com/ Trent Fowler

        I did a google search and that’s the article I found 🙂

  • Grubby

    Robin, what do you think of bitcoin?

    • sflicht

      Relatedly, what sort of financial system would (will) ems use?

      • IMASBA

        Would they even have money as we know it? Obviously they could, but that doesn’t mean that’s the only way.

      • sflicht

        I think it’s extremely unlikely that it would be money “as we know it”. Especially in the scenario where the ems are extremely small and incredibly fast, it seems hard to conceptualize in our own terms how em commerce would function. One interesting question, I think, is whether ordinary humans would continue to have meaningful economic interaction with ems, once the latter take over the world. We have meaningful economic interaction with trees and honeybees (which examples I mention because both organisms operate at time-scales fairly different from our own, moreso than cows, say). Would human-em commerce eventually become qualitatively similar to that? Could an em currency be a meaningful currency for humans?

      • Philip Goetz

        I wrote a science fiction story many years ago in which AIs used bits of new information as currency. This means information’s value depends on who’s buying it. You could determine its value by having the receiver attempt to predict each next bit before receiving it, and charging for twice the number that they predicted incorrectly.
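
        That pricing scheme is simple enough to sketch. The predictor below is a made-up stand-in; the only part taken from the story’s rule is “charge twice the number of bits predicted incorrectly”:

```python
def information_price(bits, predictor):
    """Price a bit-string by the scheme described above: the receiver
    predicts each next bit before seeing it, and pays twice the number
    of bits predicted incorrectly."""
    wrong = 0
    history = []
    for bit in bits:
        if predictor(history) != bit:
            wrong += 1
        history.append(bit)
    return 2 * wrong

def majority_predictor(history):
    """Naive receiver: always guess the most common bit seen so far."""
    return int(sum(history) * 2 >= len(history)) if history else 0

# Fully predictable data is nearly free; incompressible data costs
# roughly its own length.
print(information_price([1] * 100, majority_predictor))  # -> 2
print(information_price([0, 1] * 50, majority_predictor))
```

        Under this rule, data the receiver can already predict is nearly worthless to them, while genuinely novel (incompressible) data costs about its own length, which is what makes the price behave like a measure of new information.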

    • http://lukeparrish.rationalsites.com Luke Parrish

      I would be interested in economists’ arguments on positive and negative factors in bitcoin’s total utility, especially the tradeoffs of deciding to hold onto bitcoins versus donating to charity or investing in business.

      Fractional reserve banking could be possible with bitcoin, but it seems to be disabled by default, and many of the convenience reasons for it are diminished. So instead of buying and selling loans, business is more likely to take the form of selling bitcoins when you have them, in exchange for actual assets/infrastructure designed to help you get more bitcoins.

      I have set aside a few bitcoins with the goal of funding cryonics research and practice down the road. Given my belief that bitcoin is valuable/likely to succeed, it makes sense. But if I am wrong, it would be a waste. Is it sensible that I recommend others do the same for their charitable goals?

  • Alexander Gabriel

    I am curious if anyone has thought about the idea of forming an organization specifically to advocate this. To me, the most likely scenario where humans continue to prosper is one where sentient AI never comes into existence. Assuming that such AI will become practical from natural technological advance during this century, an international treaty or law regime seems like an obvious way to save ourselves.

    Given the overarching goal of such a treaty, many political questions are decided. For example, the question of whether or not you should support expanding your nation’s military is answered by your nation’s stance on the treaty and by other nations’ stances. If your nation is against the treaty but other nations favor it, you should favor a smaller military and less influence for your own country. If your nation is for the treaty but other nations are against it, you should presumably favor a larger military to increase the forces favoring the treaty.

    This seems to me a better approach to the AI problem than the one of MIRI. I grant that such a treaty is not an easy solution, but “Friendly AI” strikes me as even more difficult.

  • IMASBA

    @Robin

    1) Do you think a majority of AI entities could be friendly to humans if AIs were granted all the same “inalienable” rights humans enjoy in modern liberal democracies?

    2) What do you think of people overestimating the value of electronics (TVs, phones, etc…) compared to furniture and fixed costs (like rent)? As in “look at that low income person wasting their welfare on their fancy cell phone and flatscreen tv!” while that fancy cell phone is probably a second hand mid-range model and flatscreen TVs are the only TVs still being made since 2005 or so, with both the cell phone and the TV costing maybe $200 and serving their owner for multiple years; a 1% increase in rent would cost more. Why do people still view these things as luxuries when the prices of electronics have been very low compared to fixed costs for decades?

    3) You often talk about a hyper-capitalistic future. Do you think about new economic systems (not socialism, nor capitalism, but they might borrow elements from the two) that could be invented in the future? And, if I may ask, why are you not working on this (utility could be greatly increased for EMs if they had an economic system that’s generally better than capitalism, perhaps specifically tailored to the possibilities and realities of an EM world)?

  • Marc Geddes

    Is it rational to play the lottery? In my country (NZ), over 1 million people play the national lottery (lotto) on a regular basis.

    My position is that playing the lottery can be rational, even when it’s a bad bet (negative EV).

    Here’s my reasoning: Provided you are only using a negligible amount of your income, and provided there is a huge reward relative to your income, then playing the lottery constitutes a big potential upside with little downside. I therefore conclude that playing the lottery is rational under these conditions.

    But what about the negative expected value? It’s important to realize that this only applies after you’d bought a huge number of tickets, which would normally take far longer than an ordinary human lifespan. Therefore, over an ordinary human planning horizon, it doesn’t matter.

    • Robert Koslover

      No. It is simply a waste of your time and your resources to play the lottery. However, it may be to your advantage to encourage (hypocritically, of course) others to play it, if (1) you believe that the revenues thereby collected by the government will go to good use, or (2) you make your living by operating lotteries.

      • RobS79

        When you play the lottery you’re renting a dream of being a millionaire at quite a reasonable rate.

        Also let’s suppose all I care about in life is owning a vintage Ferrari. Spending a small amount every week on a lottery ticket will not deprive me of anything significant — it won’t deprive me of 0.01% of a Ferrari, because you can’t buy a Ferrari in fractions. But it will give me the chance of getting 100% of a Ferrari.

        How is that irrational?

      • IMASBA

        Yes, that’s a nice analysis. Many people playing the lottery do think like that, plus many lotteries support charity. Life has a finite length (so the law of large numbers doesn’t completely apply to individuals) and indeed you can’t buy a Ferrari in pieces.

      • Marc Geddes

        Yes, exactly so!

        The key point is that the expected return is only an average that would never be seen until you’d spent thousands of years playing the lottery. So it’s just not that relevant in the short-term.

        A lottery ticket is very cheap. So in the short-term you lose almost nothing (minimal downside), with a chance to win a big prize (big upside).

        That’s the trader’s definition of a good trade: little downside, big upside.

      • Matthew Graves

        This is why I like Yudkowsky’s “Waste of Hope” argument so much: to the extent that your life is dependent on your fantasies, it matters which fantasies you spend your time and emotional energy on. http://lesswrong.com/lw/hl/lotteries_a_waste_of_hope/

      • Marc Geddes

        ‘Why You Keep Playing The Lottery’

        http://edition.cnn.com/2012/08/15/health/psychology-playing-lottery-powerball/index.html

        “The lottery industry is often criticized for being an unfair tax on the poor. On average, households that make less than $12,400 a year spend 5% of their income on lotteries, according to Wired.

        In 2008, researchers at Carnegie Mellon University attempted to explain why the poor are more likely to buy lottery tickets.

        The study, published in the Journal of Behavioral Decision Making, theorized that people focus on the cost-to-benefit ratio of a single ticket rather than add up the long-term cost of playing over a year, or a lifetime.

        “There are money amounts that are small enough that people almost ignore them,” Loewenstein said Wednesday.

        “It almost doesn’t feel real. The lottery and penny slots are kind of the sweet spot of risk taking. They’re really cheap, really inexpensive to play, but there’s a big possible upside.”

        Still, to say that playing the lottery is a bad idea doesn’t sit well with the professor of economics and psychology.

        “It’s ridiculous to say that 51% of the population is just irrational or self-destructive,” he said.

      • IMASBA

        Stephen Diamond

        “But over an ordinary playing horizon, the outcome will be purely negative: most people will have incurred numerous small losses and no monetary gains.”

        They wouldn’t have bought anything life changing with those small amounts, for some people the chance to win millions and the excitement/hope that provides is a more valuable commodity than anything else they could’ve bought with those small amounts of money.

        ” “It almost doesn’t feel real. The lottery and penny slots are kind of the sweet spot of risk taking. They’re really cheap, really inexpensive to play, but there’s a big possible upside.” ”

        It is unreal: conventional economics don’t apply. You see this when people go to court to dispute ownership of a winning ticket. As if an “investment” worth 0.0001% of the winnings somehow gives you “clearly” more of a right to ownership of the winnings than an investment of 0.0000% of the winnings.

      • Stephen Diamond

        They wouldn’t have bought anything life changing with those small amounts, for some people the chance to win millions and the excitement/hope that provides is a more valuable commodity than anything else they could’ve bought with those small amounts of money.

        And many will tell you so, but do you believe them? Do you think their lives improve because of this “excitement and hope”? To me, ridiculous false hopes are just that, and the justifications are rationalizations. Lottery gamblers lose more than they admit from this “non-life-changing” drain on their resources: the life change they hope for would not actually change their lives, nor would the hope for it inspire them to anything but passivity.

      • IMASBA

        “To me, ridiculous false hopes are just that, and the justifications are rationalizations.”

        Yes, TO YOU.

    • Stephen Diamond

      I don’t understand why you don’t have to reduce the upside to the expectation value of the reward.

      The only rational reason I can immediately think of for playing the lottery is when it’s less costly to play than to expend willpower trying to resist playing. (See “Societal implications of ego-depletion theory and construal-level theory: Ignored transaction costs and proliferation of electoral events” — http://tinyurl.com/cgnt4lq .)

    • J O

      I enjoy feeling more financially literate than others by speaking ill of lotteries way more than the feeling of dreaming of a Ferrari. Costs me less too.

      Anyway, I think your reasoning is basically “negative EV is only bad if I notice it”, which sounds dubious. In addition, this reasoning among many people in aggregate causes a (possibly) negative externality.

      • RobS79

        Kind of — I’d phrase it as “negative EV is less bad if I don’t notice it” (perhaps enough so to change the calculus of whether it’s rational or not).

        I wouldn’t actually play the lottery myself. I just think that whether it is rational or not to do so depends on the specifics of the case and the dispositions and preferences of the individual.

      • J O

        Doesn’t that turn it into Pascal’s Wager though? That’s what I perceived Marc Geddes’ position to be.

        What circumstances make buying lottery tickets rational?

    • Stephen Diamond

      But what about the negative expected value? It’s important to realize that this only applies after you’d bought a huge number of tickets, which would normally take far longer than an ordinary human lifespan. Therefore, over an ordinary human planning horizon, it doesn’t matter.

      But over an ordinary playing horizon, the outcome will be purely negative: most people will have incurred numerous small losses and no monetary gains. Distrusting expected-value concepts makes playing the lottery seem even more irrational. (Disregarding small losses is a bias.)

      Why do people play the lottery? Decision fatigue plays a big role. Plus, it comes with a government endorsement.

      Your argument probably implies that you should play the lottery. Do you?
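
      The “ordinary playing horizon” point is easy to check with a quick Monte Carlo sketch. All parameters here are made up but lottery-like: a $2 ticket with a 1-in-10-million chance of a $10M jackpot (so the expected value per ticket is -$1), played weekly for 50 years:

```python
import random

# Made-up but lottery-like parameters: EV per ticket = 1e-7 * 10M - 2 = -$1.
TICKET, P_WIN, JACKPOT = 2.0, 1e-7, 10_000_000.0
TICKETS_PER_LIFETIME = 52 * 50   # one ticket a week for 50 years

def lifetime_outcome(rng: random.Random) -> float:
    """Net monetary result of one simulated playing lifetime."""
    wins = sum(rng.random() < P_WIN for _ in range(TICKETS_PER_LIFETIME))
    return wins * JACKPOT - TICKETS_PER_LIFETIME * TICKET

rng = random.Random(0)
outcomes = [lifetime_outcome(rng) for _ in range(2_000)]
losers = sum(o < 0 for o in outcomes) / len(outcomes)
print(f"Simulated lifetimes ending at a net loss: {losers:.1%}")
```

      With these assumptions, virtually every simulated lifetime ends at a net loss: the negative expected value shows up in almost every individual life, not only after “thousands of years” of play.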

      • Marc Geddes

        >Your argument probably implies that you should play the lottery. Do you?

        Yes, I now buy a ticket on a regular basis.

        Incidentally, a Florida woman just came forward to claim the huge Powerball jackpot in the States:

        http://edition.cnn.com/2013/06/05/us/florida-powerball-winner/index.html?hpt=hp_t4

        “Gloria Mackenzie, 84, came forward to claim the second-largest U.S. lottery jackpot, more than two weeks after the $590.5 million Powerball drawing on May 18, Florida Lottery officials announced Wednesday. She passed up a payout spread over 30 years for a somewhat smaller one-time lump sum, pocketing $370.9 million before taxes, Lottery Secretary Cynthia O’Connell said.”

      • Stephen Diamond

        Then you probably are able to report that your life has improved as a result. Presumably, that hope that you will be fabulously rich really “enriches” your life. Then, you’re evidence if not proof that I’m wrong: I really ought to buy lottery tickets (even if I could never bring myself to do so).

        I find this truly astounding; I think we can safely assume you’re not leading us on.

    • tba

      Yes, it can be rational to play the lottery depending on your utility function. For example, if you have $100k and will be killed by the Mafia unless you pay them $1 million, it could be rational to spend all your money on lottery tickets.

      • Stephen Diamond

        I think everyone agrees that it “can be” rational to play the lottery due to the quirks of one’s utility function at the moment or even in general: after all, who’s to say winning the lottery isn’t in some instances a “terminal value.”

        But the claim wasn’t the trivial claim about whether it’s possible to be rational and play the lottery but that it is rational, with a few relatively weak constraints. The argument makes it rational for most of us to play the lottery.

    • Jerome

      I like to think of this as passing money to alternate selves in a multiple universe scenario. Think of it. Infinitely many alternate selves will win and infinitely many alternate selves will lose but the losses are too small to be life changing while the winnings are life changing. I think everyone should buy at least one lottery ticket for a large payout in their lifetime.

    • BobLoblaw

      Money has diminishing returns. Lottery winners generally return to their previous level of happiness months after winning. So not only is the expected value negative, the expected utility would be very negative.
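
      The diminishing-returns point can be made concrete with log utility, the usual textbook stand-in for diminishing marginal utility of money. Every figure below is an illustrative assumption:

```python
import math

# Illustrative assumptions: $30k wealth, $2 ticket, $10M jackpot at
# 1-in-10-million odds, so the monetary EV per ticket is already -$1.
wealth = 30_000.0
ticket = 2.0
jackpot = 10_000_000.0
p_win = 1e-7

u = math.log   # log utility: each extra dollar matters less

eu_play = (p_win * u(wealth - ticket + jackpot)
           + (1 - p_win) * u(wealth - ticket))
eu_skip = u(wealth)

ev_change = p_win * jackpot - ticket
print(f"Expected change in money:   {ev_change:+.2f}")
print(f"Expected change in utility: {eu_play - eu_skip:+.2e}")
```

      The expected change in money is negative, and concavity makes the expected change in utility negative as well: the tiny chance of the jackpot buys less utility than the near-certain loss of the ticket price costs.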

    • Philip Goetz

      If your life has negative utility, and you only defer suicide in the hopes of someday attaining positive utility, it makes sense to play the lottery. You can always kill yourself if you lose.

  • Man flipper

    Robin Hanson,
    What do you think of little boys?