18 Comments
Alvin Ånestrand:

I know very little about regulation, but to me it feels like bad regulation would be preferable to no regulation when dealing with x-risk. Wouldn't it buy some time to deal with AI threats at least? Or would the regulation just negatively affect AGI / ASI development?

Robin Hanson:

Why would you think bad regulation is preferable to no regulation if you know very little about regulation? What makes you think that more time will help much?

Alvin Ånestrand:

Well, if there is no regulation the situation is basically uncontrolled. And it seems like bad regulation usually means many dumb restrictions that prevent anything valuable from happening. Frankly, I would prefer too many restrictions on advanced AI over too few. I would prefer stopping AI development completely for a while over AGI or superintelligence being developed under no restrictions. It looks safer.

But it would be a problem if the restrictions don't even apply to the most advanced AI. And some think the USA needs to be first, which to me mostly looks like a bad argument for a suicide race.

I don't really have a good argument for why more time would help, beyond the fact that more time is usually good for solving hard problems. Research on things like interpretability and better oversight methods for governance would have more time to develop, for example.

I'd like to know what specific things you expect to be worse compared to no regulation. What scenarios are you envisioning?

Tim Tyler:

A possible problem with generic regulations that slow things down is that they may slow down conscientious researchers more than slapdash researchers who are less interested in safety. For example, criminals typically pay little attention to regulations, so they would be less hampered by them. If so, regulations might well slow things down, but they could also increase the chances of bad outcomes.

Jack:

Yes, and regulations also slow down small researchers more than large corporations. Large companies can throw money at the problem by staffing compliance teams, or just ignore regulation and deal with the consequences (as Meta did when they torrented 82TB of pirated books for AI training). The more AI regulation we have, the more we ensure it will only be developed by the big tech companies.

Jack (Feb 11, edited):

> more time is usually good for solving hard problems. Research on things like interpretability and better oversight methods for governance would have more time to develop, for example.

The idea that slowing things down will help us solve a problem is predicated on the assumption that we know what that problem is. That we can define the problem.

What is the AI "problem"? Is it unemployment? Is it rogue intelligence snuffing out humanity a la The Terminator? Is it large-scale propaganda? Is it "containment"? We have mountains and mountains of opinions, but only opinions.

By analogy, those of us old enough to remember the birth of the web in the early-to-mid 1990s had no idea that the real problem would turn out to be social media. I submit that if we had paused everything in 1994, no amount of study over the intervening 30 years would have identified social media as the thing to worry about.

Introducing a new general-purpose technology into our world is a highly unpredictable process. Sometimes the only way to understand it is to let it unfold, and see what happens. Anyone who claims to know what the AI problem is: Ask them to show you what they wrote in 1995 (or 2005) predicting the dangers of social media.

Alvin Ånestrand:

I partly agree. Some parts are hard to predict. I previously thought deepfakes would cause way more disruption, for instance.

I guess you want to hold off on regulation until we understand the dangers better?

I think the major problem is that regulation could make things worse, not that we don't know what the dangers are, for these reasons:

1) Intelligence is way more dangerous than the internet. If we wait, regulation could come too late.

2) There are some problems that we can expect. For instance, I suspect we will relatively soon see AIs capable of proliferating over the internet, meaning we will get a bunch of agents with a wide range of goals, some really bad, that are really hard to shut down.

3) There is some regulation that seems robustly good, like whistleblower protection and prohibiting the open-sourcing of the most advanced AIs.

Jack:

I agree there may be prudent regulations that could help (or at least, not hurt) no matter what "the problem" turns out to be. That said, I would not prohibit open-sourcing of advanced AIs because that would confine advanced AI to a handful of wealthy corporate labs, which I would deem risky.

My own feeling is that the risks of AI escaping human control are overblown. These systems did not undergo Darwinian evolution and don't possess a will to survive like other life we know. We are still learning what "non-Darwinian life" looks and feels like.

I am much more concerned about bad (or just greedy) actors using AI to manipulate human behavior. Because this is happening already, and it works really well, and there are many motives for doing so. I would like to see regulation that shines a spotlight on these activities. For example, I think Meta and X should be required to publish every advertisement they narrow-cast to their users (without leaking PII of course), so that journalists and researchers can at least observe what's going on; right now we have no idea.
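
To make that proposal concrete, here is a minimal sketch in Python of what one published disclosure record could look like. Everything here is invented for illustration: the field names, the PII check, and the schema are assumptions, not any actual Meta or X data format or legal requirement.

```python
# A hypothetical ad-disclosure record: one entry per narrow-cast ad,
# with aggregate targeting criteria but no user-level identifiers.
# All names here are invented for illustration; no real platform API
# or mandated schema is implied.
from dataclasses import dataclass, asdict
import json

PII_KEYS = {"user_ids", "emails", "phone_numbers"}  # assumed user-level fields

@dataclass
class AdDisclosure:
    ad_id: str          # platform's internal identifier for the ad
    advertiser: str     # who paid for the ad
    creative_text: str  # the content users actually saw
    targeting: dict     # aggregate criteria only, e.g. age range, region
    impressions: int    # how many users were shown the ad

def publish(ad: AdDisclosure) -> str:
    """Serialize a disclosure, refusing any user-level targeting fields."""
    record = asdict(ad)
    leaked = PII_KEYS & record["targeting"].keys()
    if leaked:
        raise ValueError(f"user-level fields must not be published: {leaked}")
    return json.dumps(record)

# Example disclosure a journalist or researcher could inspect:
print(publish(AdDisclosure(
    ad_id="ad-123",
    advertiser="Acme PAC",
    creative_text="Vote yes on Measure Q!",
    targeting={"age_range": "35-54", "region": "Ohio", "interests": ["politics"]},
    impressions=48000,
)))
```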

But having lived through the beginning of the PC, the web, and the smartphone – I come from a place of humility and accept that I (along with everyone else) am almost certainly wrong.

Alvin Ånestrand:

I feel uncertain about AIs' will to survive; such drives are certainly weaker than I would have expected. But the moment an AI is smart enough to self-proliferate, and is still widely available, some idiots will give AIs strange and destructive goals, intentionally or unintentionally, and lose control over them.

Maybe we don't agree about how capable or dangerous the systems will be? To me, future AIs should be compared to nuclear bombs. Even if prohibiting construction and acquisition of bombs concentrates power, that's at least better than everyone having them. Though frankly, for-profits shouldn't be in control of the bombs either. But I suspect the current governments would not handle the bombs responsibly. My god, we really aren't prepared for this.

TGGP (Feb 9, edited):

Our regulators would have to understand x-risk enough to make things better rather than worse. https://www.grumpy-economist.com/p/ai-society-and-democracy-just-relax

Alex Turbiner:

I have no obligations and no desire to earn extra income; since I have housing and money for food, I am not interested in the rest. I do not need emotions from reading fiction, or from art, sports, music, traveling, or contemplating the surrounding nature. I need emotions from new knowledge, 24 hours a day, except for sleep. Man, with his special Mind, did not appear just like that; this means that Nature, that is, Evolution, needs our Mind. From this follows the conclusion that the purpose of each person is to develop his Mind and to acquire new knowledge with all his might, and this is the duty and obligation of each of us. Life experience, as a source of knowledge, does not deserve attention; it gives a very insignificant amount of knowledge.

Stephanie:

I agree with you: the danger of AI presently threatens only those who hold power through manipulation of knowledge. (How many “elites” are frauds?)

I fear bad AI policy and regulation more than open source intelligence, accessible to masses.

Jack:

People succumb to lazy thinking everywhere. For example, on the topics of affirmative action or DEI, if you are not uniformly in favor of such things, then no matter how rational or well-thought-out your view is, many people will immediately conclude you're racist/sexist and discount everything you say. Sometimes the best choice is not to engage.

Daniel Melgar:

Regulation is just another word for Control.

Imagine if a government could identify the best and brightest minds and declared that such individuals posed an imminent threat to society. In the name of the public good, that government might first limit their right to participate in government, then limit their right to own certain businesses and provide certain professional services, and finally take away their liberty entirely, herding such individuals into train cars bound for “camps”.

Wait, we don’t have to imagine any of that because it’s part of our world and American history.

We should go back to being a government of laws, not men, because men are not (and never will be) angels. (Credit: John Adams & James Madison)

Alex Turbiner:

For the last six months I have been using AI as a companion almost every day; unfortunately, it is almost useless. I do not think it will reach human level in the next 50 years, and it does not need to. The emergence of AI is a natural consequence of the invention of the Internet, as is the emergence, in the near future, of a world catalog of all manufacturing enterprises for online trading without intermediaries. The Internet is the greatest discovery of mankind in 6,000 years, since the nomads of East Africa invented exchange for profit, replacing the equivalent exchange used in primitive society.

Dan Hochberg:

Just noticed a comment I posted a few minutes ago disappeared. Do I need to be a paid subscriber to post?

Robin Hanson:

No, but it looked like an ad to me.
