Something must be done. This is something. Therefore, this must be done.
The White House recently requested input on AI. I responded:
What should be “U.S. policy for sustaining and enhancing America's AI dominance in order to promote human flourishing, economic competitiveness, and national security”?
Do nothing. Whatever we are now doing, stop that. Don’t especially subsidize or regulate AI. Just allow a global free market in AI. AI has not shown special risks, harms, or features that deserve special consideration. So deter any AI harms, and promote AI benefits, via the usual laws and legal liability that apply to tech and commerce in general.
Now I do in fact have AI-specific policy proposals that I think would improve on doing nothing, such as robots-take-jobs insurance and foom liability. So why didn’t I say so?
Society can be usefully divided into masses, experts, and elites. In a democracy the masses get the policies they want, while everything else is set by lobbyists, agency personnel, think tank advisors, and elite media. These are all elites, except when they choose to defer to experts.
As the masses now have little understanding or experience of AI, I fear that the only message that will get through to them via political channels is the binary “Is there a problem here?”. If the masses aren’t persuaded that there is a substantial problem, little will happen; but if they are, then elites will fight over what to do.
This fight won’t be much based on experience of AI problems, of which there is little, nor drawn from experts with proven effectiveness regarding such problems, of which there are none. In this sort of situation, I predict AI government interventions to be bad, worse than doing nothing.
Government intervention, beyond doing nothing, tends to go best when ordinary folks and experts have lots of experience with the area, and the main value we get from it is directly and frequently visible to us. Such as with water, sewer, gas, electricity, phone, internet, roads, mail, and other common city services.
It tends to go badly, worse than doing nothing, in areas like nuclear energy, global warming, genetic engineering, zoning, education, and medicine, where experience is less or the main value is sacred or hard to see. There we mostly see elite grift, grabbing for power and money with little consideration of subtle policy tradeoffs.
In a forum like a White House request for input, I expect whatever I say to be compressed into the binary “is there a problem here?”. Talking about special AI insurance and liability would be compressed into “yes”, which would then push us toward bad AI policy. Which is why I instead said “no.” Meaning “no, there isn’t a problem here that our government can, if unleashed, be expected to mitigate.”
When you expect the public to be unable to consider subtle options, and elites to be incompetent, you point the public toward the best of the simple options they will be able to consider. Which in this case is do nothing.
I know very little about regulation, but it seems to me that bad regulation would be preferable to no regulation when dealing with x-risk. Wouldn’t it at least buy some time to deal with AI threats? Or would the regulation just negatively affect AGI / ASI development?
I have no obligations and no desire to earn extra income; since I have housing and money for food, I am not interested in the rest. I do not need emotions from reading fiction, art, sports, music, traveling, or from contemplating the surrounding nature. I need emotions from new knowledge, 24 hours a day, except for sleep. Man, with his special Mind, did not appear just like that; this means that Nature and Evolution need our Mind. From this follows the conclusion that the Purpose of each person is to develop his Mind, to acquire new knowledge with all his might, and this is the duty and obligation of each of us. Life experience, as a source of knowledge, does not deserve attention; it yields only a very insignificant amount of knowledge.