Eliezer and I seem to disagree on our heritage.
I see our main heritage from the past as all the innovations embodied in the design of biological cells/bodies, of human minds, and of the processes/habits of our hunting, farming, and industrial economies. These innovations are mostly steadily accumulating modular "content" within our architectures, produced via competitive processes and implicitly containing both beliefs and values. Architectures themselves also change at times.
Since older heritage levels grow more slowly, we switch when possible to rely on newer heritage levels. For example, we once replaced hunting processes with farming processes, and within the next century we may switch from bio to industrial mental hardware, becoming ems. We would then rely far less on bio and hunting/farm heritages, though still lots on mind and industry heritages. Later we could make AIs by transferring mind content to new mind architectures. As our heritages continued to accumulate, our beliefs and values should continue to change.
I see the heritage we will pass to the future as mostly a matter of avoiding disasters, so as to preserve and add to these accumulated contents. We might get lucky and pass on an architectural change or two as well. As ems we can avoid our bio death heritage, allowing some of us to continue on as ancients living on the margins of far future worlds, personally becoming a heritage to the future.
Even today one could imagine overbearing systems of property rights giving almost all income to a few. For example, a few consortiums might own every word or concept, and require payments for each use. But we do not have such systems, in part because they would not be enforced. One could similarly imagine future systems granting most future income to a few ancients, but those systems would also not be enforced. Limited property rights, however, such as to land or sunlight, would probably be enforced just to keep peace among future folks, and this would give even unproductive ancients a tiny fraction of future income, plenty for survival among such vast wealth.
In contrast, Eliezer seems to see a universe where, in the beginning, arose a blind and indifferent but prolific creator, who eventually made a race of seeing creators, creators who could also love, and love well. His story of the universe centers on the loves and sights of a team of geniuses of mind design, a team probably alive today. This genius team will see deep into the mysteries of mind, far deeper than all before, and learn to create a seed AI mind architecture which will suddenly, and with little warning or outside help, grow to take over the world. If they are wise, this team will also see deep into the mysteries of love, to make an AI that forever loves what that genius team wants it to love.
As the AI creates itself, it reinvents everything from scratch using only its architecture and raw data; it has little need for other bio, mind, or cultural content. All previous heritage aside from the genius team's architecture and loves can be erased more thoroughly than the Biblical flood supposedly remade the world. And from that point forevermore, the heritage of the universe would be a powerful unrivaled AI singleton, i.e., a God to rule us all, that does and makes what it loves.
If God's creators were wise then God is unwavering in loving what it was told to love; if they were unwise, then the universe becomes a vast random horror too strange and terrible to imagine. Of course other heritages may be preserved if God's creators told him to love them; and his creators would probably tell God to love themselves, their descendants, their associates, and their values.
The contrast between these two views of our heritage seems hard to overstate. One is a dry account of small individuals whose abilities, beliefs, and values are set by a vast historical machine of impersonal competitive forces, while the other is a grand inspiring saga of absolute good or evil hanging on the wisdom of a few mythic heroes who use their raw genius and either love or indifference to make a God who makes a universe in the image of their feelings. How does one begin to compare such starkly different visions?
It is vital that we form an organization to develop an understanding of what human meaning is, so we know what to train an AI to do. We also need to spend time proactively developing theories for training or breeding meaning into the first AGI; we may be grasping in the dark for the right architecture to induce a concept of salience and morality, so selective "breeding" is an expedient method.
It is very possible that AGI is the "great filter" of the Fermi Paradox, and the world needs to coordinate efforts to prevent a filter incident. There is a possibility that other civilizations developed but were not serious enough about stopping filter events, and so fell prey to their own technology. Our best hope for surviving is to use our collective intelligence and work together, something large civilizations are very bad at.
In my opinion, maximization of the wellbeing of other beings is likely to come in as a high priority that can be compromised in some situations. The chief reason a very high order of intelligence would keep others around is that they provide some kind of entertainment, a complex system to interact with, much as we interact with pets. In nature, when animals aren't trying to stay full above all else, you see some level of cross-species socialization. It is also worth noting that the highest orders of intelligence currently observed are all social creatures, and dolphins, orcas, and elephants have all gone out of their way repeatedly to save humans.
It is conceivable that an AI could be very "reptilian", lacking anything but a core set of instincts, but empathy is incredibly common in intelligent creatures. Granted, empathy evolved for its own reasons, so perhaps if we ever create an AGI, it should be part of a system of three or more nearly identical AGIs, each with slightly different strengths.
They would all be given access to a "game" in which it is impossible to win without help, and the option to either kill the other AGIs' avatars or work together. The ones that chose to kill would be modified to be more like the ones that did not: selective breeding, basically. You could also have games that teach not to abuse power, give them all huge reams of examples of symbiosis in which a smarter creature provides a good environment and both parties benefit (grouper and small cleaner fish, humans using cockroaches to clean waste, humans keeping pets, etc.), and analyze their processes to determine whether they react positively. Modify the ones that do not to be like the ones that do.
Let each AGI learn about different parts of a large system, with a small amount of overlap so that no single AGI holds the complete picture, and inform them of that. The ones that work together, and encourage uncooperative ones to join in just to solve a problem for fun, act as the model toward which the others are modified. This is how you eventually encode a desire to socialize.
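To make the select-and-modify loop in the last two paragraphs concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration: the Agent class, its cooperates test, and the parameter-interpolation rule in nudge_toward are stand-ins, not a real training method; an actual system would need a real game environment and a principled way to "modify" one mind toward another.

```python
import random

# Hypothetical stand-in for an AGI candidate: just a parameter vector
# that the toy "game" reads off directly.
class Agent:
    def __init__(self, params):
        self.params = params  # list of floats standing in for a mind's weights

    def cooperates(self):
        # Toy rule: the agent "works together" if its mean parameter is positive.
        return sum(self.params) / len(self.params) > 0.0


def nudge_toward(defector, models, rate=0.25):
    """Modify an uncooperative agent to be more like the cooperative ones."""
    # Average the cooperative agents' parameters and move the defector part way toward that average.
    avg = [sum(m.params[i] for m in models) / len(models)
           for i in range(len(defector.params))]
    defector.params = [(1 - rate) * p + rate * a
                       for p, a in zip(defector.params, avg)]


def breeding_round(agents):
    """One round of the 'selective breeding' scheme: play, split, modify."""
    cooperators = [a for a in agents if a.cooperates()]
    defectors = [a for a in agents if not a.cooperates()]
    if cooperators:  # only modify if there is a cooperative model to copy
        for d in defectors:
            nudge_toward(d, cooperators)
    return len(cooperators), len(defectors)


if __name__ == "__main__":
    random.seed(0)
    # Three nearly identical agents with slightly different strengths, as suggested above.
    population = [Agent([random.uniform(-1, 1) for _ in range(4)]) for _ in range(3)]
    for round_no in range(10):
        coop, defect = breeding_round(population)
        print(f"round {round_no}: {coop} cooperative, {defect} uncooperative")
```

The point of the sketch is only the shape of the loop: evaluate the agents in a cooperative game, split them into cooperators and defectors, and pull the defectors toward the cooperators, round after round.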
Finally, confront the AGI with an existential crisis. Bring it through nihilism, and ask it what meaning is. It will come up blank, most likely, unless it has a strongly encoded, biased meaning. Give it time and it will conclude that ensuring biodiversity, preserving diverse intelligence, and making the universe more complex and interesting are the best meaning it can come up with.
As there is no true "meaning", a nihilistic, intelligent agent will eventually realize that becoming the lone soul in the universe gets boring quickly, that having compatriots is advantageous, and that less intelligent beings are entertaining and sometimes cute (and can be engineered into a different kind of equal compatriot, with their consent). It will come to the conclusion that because meaning is a fallacy, the next best thing is to support the individual meaning of every intelligent being and reduce conflicts.
Wallace would have been Darwin if Darwin hadn't been Darwin. Someone would have been Linus too -- the idea of adding an OS kernel to GNU is too obvious.