In a universe that is (so far) almost entirely dead, we find ourselves to be on a rare planet full not only of life, but now also of human-level intelligent self-aware creatures. This makes our planet roughly a once-per-million-galaxy rarity, and if we ever get
Moloch is not as you describe it: it's about having incentives to make individually rational decisions that make everyone worse off (like the parable of the fish farmers who all pollute their lake). It's related to the Prisoner's Dilemma, where you should defect if you think the other prisoner will defect, even though C-C is the best outcome. Solutions to Molochian problems include a government imposing the desired outcome, e.g. banning pollution, and people being more nice and cooperative as a value system.
I have no clear idea why you think the limit of competition is better than some random AI.
The link to the post discussing "risks in trying to induce a single AI to grow crazy fast and then conquer everything" doesn't actually link to a post where you discuss said risk. The post only goes into the implausibility of such an event occurring, and why you think we shouldn't be worrying right now. "Why would AI advances be so vastly more lumpy than prior tech advances as to justify very early control efforts? Or if not, why are AI risk efforts a priority now?" is your conclusion. Did you mean to link to some other post where you do explore the risks in trying to grow a single AI crazy fast? I was under the impression you didn't think this was possible at all.
"Some being who values it's own existence" is what we get anyway. If we aim for the AI god and go wrong, we make a paperclip maximizer, which values it's own existence for the sake of making paperclips. Both the AI god gone wrong and the Hanson economy gone "right" have minds utterly unlike that of any human, spreading throughout spacetime. If we want anything resembling love or humor to exist, we need the nice AI god.
Yes, our values are vague, complicated, and fuzzy. Aiming and not getting it quite right is still better than not aiming at all. Closing your eyes, pretending to value that which you don't actually value, isn't helpful.
I want a world where, if I somehow got flung a million years into the future, I would find somewhere nice to live.
I want a world where I personally live (or at least something transhuman but still sort of me, with complicated rules about what changes are improvements).
I want something unambiguously nice, not philosophical copium.
So empirically, so far, given pretty long time periods, competition has just not remotely destroyed all value.
That is drawing a target around an arrow and claiming a bullseye.
We may. But that possibility doesn't by itself justify crazy risk attempts to make an AI god.
This is amazing. Especially the last two paragraphs.
This seems somewhat inconsistent with posts where you've argued that we may end up devoting the vast majority of negentropy to wars.
In certain economic circumstances in first-world countries, perhaps.
If an actual god wanted to conquer us, we'd be conquered, wouldn't we :)
Actually, you are mostly just terrified of socialists winning because you want to keep the money you don't deserve.
...This makes our planet a roughly a once-per-million-galaxy rarity...
What's the rationale behind this? I can see either calculating this based on the number of galaxies we can confidently say don't have life in them, or counting the total number of observable galaxies in the universe, but neither of those numbers is even close to one million.
We have not fondly embraced wannabe gods who seek to conquer us, claiming that they would afterward rule benevolently.
But he did NOT show that there would be such a race to the bottom. He merely claimed such.
Deforestation is something that can be managed... if the forest is private property.
"Not" doing something isn't really a "tradition" unless doing that was ever a possibility (fasting is a tradition, starving in a famine is not).