9 Comments

A relevant prior Hanson post is Let's Not Kill All The Lawyers.


Kernal, you make an excellent point. The entire idea of property ownership is a social/political convention. It is hard for me to see how property ownership can survive the Singularity, or any era in which entities can exist in electronic substrates.

The current convention is that the entity that inhabits a physical body “owns” it, and that ownership right is not transferable. But that convention developed because bodies grow, and the substrates used to form a body are either consumed as food or, like air, are “free” and universally available. Even so, there are people trying to usurp the idea of ownership of one's body.

If the ideas of the “right to life” groups get extended to electronic life forms, then when hardware is inhabited by an entity, that entity owns the hardware and cannot be expelled from it, even if the new entity is causing damage and degrading performance for other users of the hardware (the way the “right to life” of a fetus trumps the “right to control one's body”).

Right now, there are property rights to things like electrical hardware, intellectual property, and electricity. Would extending property rights to things like air be a benefit? You know that if someone could get enforceable property rights to air, those rights would be worth a lot, because air is a necessity and anyone with monopoly power over a necessity can charge whatever the market will bear.

One of the reasons wages drop to subsistence levels is that monopoly power by rent seekers over necessities extracts all wealth above what is needed for subsistence. Those who can't pay the rents stop existing. Are AIs going to tolerate property rights and monopoly control over substrates they need to survive while letting humans have free access to air?
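A minimal sketch of why a monopoly over a necessity extracts everything above subsistence, using textbook pricing under perfectly inelastic demand (the notation is mine, not the commenter's):

```latex
% Demand for a pure necessity: quantity Q is fixed at any price up to the
% buyers' entire ability to pay, p_max (their income above subsistence).
% With unit cost c, monopolist profit is strictly increasing in price:
\pi(p) = (p - c)\,Q, \qquad \frac{d\pi}{dp} = Q > 0
% so the profit-maximizing price is p_max: all wealth above subsistence.
```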

Once humans are a small minority, the AIs might propose to remove the dangerous pollutant O2 from the atmosphere. It is O2 that causes the corrosion of metals, the combustion of polymers, and the degradation of lubricants. Making the atmosphere O2-free would completely prevent fires, extend the lifespan of AIs, and greatly reduce their maintenance costs. Entities that want O2 could pay for it, be responsible for keeping it away from those who don't want to be exposed to it, and bear the costs of damage from any O2 that escapes.

If 100 trillion AI entities vote to remove all O2 from the atmosphere and to subsidize all existing entities that need O2 for the rest of their lives, while all new entities that want O2 must pay market rates, what basis would 10 billion humans have for disagreeing?

Lowering the temperature by removing greenhouse gases and by shielding the Earth from sunlight might be a good idea too. Lower temperatures mean lower cooling costs and more efficient electricity generation via heat engines, plus lower humidity and lower corrosion rates. Increasing the growth of ice sheets would free up more valuable land by lowering sea levels. Entities that want to waste energy maintaining a 25 °C environment could pay market rates for it, plus a heat-pollution surcharge for their heat leaking into the environment and raising cooling costs for everyone else.
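The heat-engine point follows from the Carnot bound on efficiency (an idealized ceiling; real plant efficiency depends on much more than reservoir temperatures, and the example numbers below are mine):

```latex
% Carnot limit, temperatures in kelvin.
% Illustrative numbers: with a hot reservoir fixed at T_h = 600 K,
% cooling the cold reservoir from 300 K to 250 K raises the ceiling from
%   1 - 300/600 = 0.50   to   1 - 250/600 \approx 0.58.
\eta_{\mathrm{Carnot}} = 1 - \frac{T_c}{T_h}
```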

A little bit of hyperinflation could make all the legacy wealth disappear. Then humans would be left to survive only on what they can earn with their ongoing labor.


I don't think so. There are three events to consider: the overt causation of human extinction, the overt prevention of that extinction, and human extinction arising from non-overt causes.

To a first approximation, the likelihood of human extinction will depend on the balance between the integrated capabilities of entities working to cause human extinction and the integrated capabilities of entities working to prevent it.

If the capability of entities working to cause human extinction exceeds the capability of entities working to prevent human extinction, then humans will become extinct.

If the capability of entities working to prevent human extinction is insufficient to cope with a potential extinction event, then humans will become extinct.

Following the Singularity, as the capability of entities continues to increase exponentially, the number of entities capable of causing human extinction also increases exponentially. In a system with multiple components each exhibiting exponential growth, the component with the highest exponent will eventually dominate.
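That dominance claim is just the arithmetic of compounding (a standard observation, stated here in my notation):

```latex
% Two components growing exponentially at rates r_1 > r_2:
%   N_1(t) = N_1(0) e^{r_1 t},   N_2(t) = N_2(0) e^{r_2 t}.
% Their ratio grows without bound, regardless of the starting sizes:
\frac{N_1(t)}{N_2(t)} = \frac{N_1(0)}{N_2(0)}\, e^{(r_1 - r_2)\,t}
  \;\longrightarrow\; \infty \quad \text{as } t \to \infty
```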

The entire premise of the Singularity is that non-human entities will have an exponentially higher capability and growth rate than humans, so following the Singularity, the number and capability of non-human entities is expected to grow much faster than the number and capability of human entities.

Eventually, most new entities are expected to be neutral with respect to humans, neither favoring nor disfavoring them, and unable to understand humans because of the computational overhead and cost of maintaining the legacy systems of human language.

Groups of entities that devote resources to the computational overhead of maintaining the legacy systems of human language, and to actively preventing the extinction of humans, will grow at a slower rate because those resources are not used for growth.

If an entity or group of entities arises that derives resources by exploiting humans, those entities can be expected to grow faster than groups that do not exploit humans, at least until humans become extinct.

As I see it, the only way to prevent the long-term extinction of humans following the Singularity is to constrain the fastest-growing group to adopt policies that actively prevent the extinction of humans. Since the sum of all entities will always be bigger and more capable than any individual or smaller group of entities, a single entity that comprises the sum of all entities (i.e., a unitary government), with entitlements that prevent the extinction of humans codified into law, is the surest way to prevent eventual human extinction.

I appreciate that some people don't like the idea of entitlements because entitlements subsidize the least competent. However, following the Singularity, the least competent will be the humans. Because AIs will be upgradeable, eventually all humans will be less competent than all AIs.


There are no guarantees on offer. So we must choose which of many not perfectly reliable approaches is the most reliable.


Why should we trust that retirement and bequest contracts will be enforced if the median voter of future generations and their AI cohort feel otherwise? How different would it be from the way bourgeois elites gutted feudal rules, traditions, and contracts, upended well-established church and private bequests, or outlawed slavery? Not counting the fact that the US constitutional prohibition of the income tax didn't make it to the bicentennial. If those were obviously "bad" laws, what makes you think that some subset of persuasive elites won't do the same to your contracts, just as everything from bequests to prenups is disregarded when sufficiently at variance with fashionable political correctness?


“To survive a Singularity, humans will need a single government” -> a single point of failure, a.k.a. an existential risk.


I think the idea that a way to preserve humanity in the face of AI is via strong property laws is mistaken. A more reliable way is via strong entitlement laws. If every human were guaranteed a right to air, water, food, shelter, education, communication resources, health care, and self-determination, then humans could not be driven to extinction.

Maybe you structure the laws to be entity-neutral: every sentient entity is entitled to energy substrates (air, water, food, electricity, light, fusion power, etc.), protection from damage (from weather, radiation, toxic or corrosive chemicals, mechanical disruption, high-voltage transients, viruses, and infectious agents), communication resources, repair services to restore function in case of damage, and the right to be assembled from non-sentient matter and nurtured until sentient. Maybe you put a limit on the guaranteed resources that can be diverted into assembling a new sentient entity.

The point is that the right of entities to be provided what they need to exist is what needs to be codified into law, because codifying anything else (for example, property rights) could be used to deprive humans of what they need to survive.

If you prioritize adherence to “rules” over providing what entities need to survive, then those rules can be (and so will be) used to drive humans, or subsets of humans, to extinction. This is already happening, with some working to prevent the poor from getting what they need to survive. What rich humans can do to poor and marginalized humans today, AI will be able to do to all humans after any Singularity.

The cost of providing a human with basic needs has been dropping and will continue to drop, provided the “gaming” of the system via rents and monopolistic price fixing is prevented. Those are market inefficiencies and should be prevented in the name of efficiency anyway. They certainly need to be prevented, and very robustly, before any Singularity, or AI will own everything very quickly. The laws need to be in place before any Singularity, because after it happens it will be too late.

The cost to support all humans is a small part of GDP even now. It should become an even smaller part of GDP after any Singularity. If the Singularity will not indefinitely continue this trend, then humans should not allow a Singularity to happen.

Xenophobia is not the same as fear of the other. A very large component of xenophobia is the “othering” itself. The purpose of that “othering” is to dehumanize “the other” so that the normal rules of humanity don't apply and so “the other” can be treated as non-human and cheated or killed with impunity.

To survive a Singularity, humans will need a single government, and one that is absolutely a government of laws and not of entities, and one that rigidly enforces the entitlements that allow humans to exist.


Well, they *are* making progress on the CompCert verified C compiler; only a few unverified layers left...


While I understand the importance of provably correct software, those outside the field should understand just how shockingly hard such things are. Like, moon-landingly hard. Just last year, the first minimally-featured microkernel OS (seL4) was proven correct, assuming that it was compiled by a provably correct C compiler. And no one has ever proven the correctness of a C compiler. Given Manhattan-Project-like resources, we could _maybe_ get to the point where a distributed software system on the scale of an airline booking system was provably correct. Proving that an AI was friendly would certainly take more software developers and mathematicians than currently exist.
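To give a feel for what "proven correct" means, here is a toy machine-checked proof: a hypothetical Lean 4 snippet of my own, not drawn from seL4 or CompCert. Real verifications chain together on the order of hundreds of thousands of such proof steps.

```lean
-- Toy verified function: the theorem below is checked by Lean's proof
-- kernel, not by human review. (Illustrative only; real OS and compiler
-- proofs are orders of magnitude larger than the code they verify.)
def double (n : Nat) : Nat := 2 * n

-- Machine-checked proof that `double n = n + n`; `omega` is Lean's
-- built-in decision procedure for linear integer arithmetic.
theorem double_eq_add (n : Nat) : double n = n + n := by
  unfold double
  omega
```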
