30 Comments

Moloch is not as you describe it: it's about having incentives to make individually rational decisions that leave everyone worse off (like the parable of the fish farmers who all pollute their lake). It's related to the Prisoner's Dilemma, where you should defect if you think the other prisoner will defect, even though cooperate-cooperate is the best joint outcome. Solutions to Molochian problems include a government imposing the desired outcome, e.g. banning pollution, and people adopting niceness and cooperation as a value system.
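A tiny sketch may make that payoff logic concrete. The payoff numbers below are illustrative assumptions, not anything from the comment or the post:

```python
# Prisoner's Dilemma with illustrative payoffs (higher is better for "me").
payoffs = {
    ("C", "C"): 3,  # mutual cooperation: best joint outcome
    ("C", "D"): 0,  # I cooperate, they defect: worst for me
    ("D", "C"): 5,  # I defect, they cooperate: best for me
    ("D", "D"): 1,  # mutual defection: bad for both
}

# Whatever the other player does, defecting pays me more...
for their_move in ("C", "D"):
    assert payoffs[("D", their_move)] > payoffs[("C", their_move)]

# ...yet mutual cooperation beats mutual defection for both players.
assert payoffs[("C", "C")] > payoffs[("D", "D")]
print("Defect dominates individually, but C-C beats D-D jointly.")
```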


I have no clear idea why you think the limit of competition is better than some random AI.


The link to the post discussing "risks in trying to induce a single AI to grow crazy fast and then conquer everything" doesn't actually point to a post where you discuss that risk. The linked post only argues that such an event is implausible and that we shouldn't be worrying about it right now; its conclusion is "Why would AI advances be so vastly more lumpy than prior tech advances as to justify very early control efforts? Or if not, why are AI risk efforts a priority now?" Did you mean to link to some other post where you do explore the risks of trying to grow a single AI crazy fast? I was under the impression you didn't think this was possible at all.


"Some being who values it's own existence" is what we get anyway. If we aim for the AI god and go wrong, we make a paperclip maximizer, which values it's own existence for the sake of making paperclips. Both the AI god gone wrong and the Hanson economy gone "right" have minds utterly unlike that of any human, spreading throughout spacetime. If we want anything resembling love or humor to exist, we need the nice AI god.

Yes, our values are vague, complicated, and fuzzy. Aiming and not getting it quite right is still better than not aiming at all. Closing your eyes and pretending to value something you don't actually value isn't helpful.

I want a world where, if I somehow got flung a million years into the future, I would find somewhere nice to live.

I want a world where I personally live (or at least something transhuman but still sort of me, with complicated rules about which changes count as improvements).

I want something unambiguously nice, not philosophical copium.

"So empirically, so far, given pretty long time periods, competition has just not remotely destroyed all value."

That is drawing a target around an arrow and claiming a bullseye.


We may. But that possibility doesn't by itself justify crazy risky attempts to make an AI god.


This is amazing. Especially the last two paragraphs.


This seems somewhat inconsistent with posts where you've argued that we may end up devoting the vast majority of negentropy to wars.

https://www.overcomingbias....


In certain economic circumstances in first-world countries, perhaps.


If an actual god wanted to conquer us, we'd be conquered, wouldn't we? :)


Actually, you are mostly just terrified of socialists winning because you wanna keep the money you don't deserve.


...This makes our planet roughly a once-per-million-galaxy rarity...

What's the rationale behind this? I can see calculating it either from the number of galaxies we can confidently say contain no life, or from the total number of observable galaxies in the universe, but neither of those numbers is even close to one million.
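For what it's worth, the figure reads more naturally as a rate than a count. A back-of-the-envelope sketch, assuming the commonly cited estimate of roughly two trillion observable galaxies (an assumption, not a figure from the post):

```python
# Back-of-the-envelope reading of "once-per-million-galaxy" as a rate.
# The galaxy count is a commonly cited rough estimate, not from the post.
observable_galaxies = 2e12   # ~2 trillion galaxies in the observable universe
rarity_per_galaxy = 1 / 1e6  # one life-bearing planet per million galaxies

implied_life_bearing_planets = observable_galaxies * rarity_per_galaxy
print(f"Implied life-bearing planets: {implied_life_bearing_planets:.0e}")  # ~2e+06
```

On this reading, "one million" is the denominator of a per-galaxy rate, not a count of galaxies.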


We have not fondly embraced wannabe gods who seek to conquer us, claiming that they would afterward rule benevolently.


But he did NOT show that there would be such a race to the bottom. He merely claimed it.


Deforestation is something that can be managed... if the forest is private property.


"Not" doing something isn't really a "tradition" unless doing that was ever a possibility (fasting is a tradition, starving in a famine is not).
