Not sure exactly why, but one of its authors sent me the book Data Science in Context: Foundations, Challenges, Opportunities.
The chapter of most interest to me is ch. 7, on regulation, which highlights these recommendations:
4. Regulate uses, not technology
5. Regulate clear, not potential, problems
6. Update laws with data science and technology in mind
7. Consider the impact of economies of scale and the virtuous cycle
8. Create independent, consensus-building institutions for difficult problems
Now points 4 and 5 are good, solid advice that is ignored far too often, such as regarding AI today. Yes, regulate the outcomes you care about, not the tech used to achieve them. And yes, trying to anticipate not-yet-realized problems goes wrong far too often. And points 6 and 7 are hard to argue with.
But re point 8, I have more doubts. Sure, when setting standards it makes sense to get the affected parties together to negotiate choices. But I worry much more when the main issue is how to protect customers and folks who can’t be well represented at the committee table.
Long ago, as a junior employee in aerospace, I noticed that whenever some new issue or buzzword appeared in their world, ambitious folks would try to found or join a “working group” on that topic. And such groups were eager to offer “recommendations”. Workgroup meetings didn’t take that much time, you didn’t need to be an expert on the topic to join such groups, and membership gave you visibility and looked good on your resume. So such groups tended to offer recommendations that sounded socially desirable, while also benefiting their members.
If you are going to regulate, you should probably listen to advice from such “consensus” groups. But maybe you should reject their recommendation to regulate in the first place.
As a co-author of Data Science in Context, I’m pleased to see the thoughtful discussion. For the record, my co-authors and I do not argue against all regulation. Rather, we tried to make sensible suggestions given the complexity of the topic and the potential for harm.
For example, with respect to “Regulate uses, not technology” [4], we are fairly confident that most regulation will, and should, be sectoral. Consider cars: the main regulation of semi- or fully-autonomous cars will relate to how well they abide by the rules of the road, and in particular, how safe they are. While there will be much debate about specific safety standards, liability, and more, it will not be focused on AI or Data Science in general.
Regarding “Regulate clear, not potential, problems” [5], we recognize that sometimes it is much better to avoid a problem than to clean up after it; however, we have to be very careful, as there are downsides to virtually everything. For example, almost no internet service would have been allowed if we had considered every possible downside risk in advance.
Regarding Robin Hanson’s concern with the suggestion that we create “independent, consensus-building institutions for difficult problems” [8], I completely agree with his observations, in effect that armchair “experts” may do a very poor job. However, we felt we needed to address the facts that (1) internet, data science, and artificial intelligence technologies are often “above-country,” so national institutions cannot work; (2) existing international organizations often do not have the right governance or expertise; and (3) both the national and international organizations that are specific to a technology or an application (e.g., standards bodies) sometimes do function well. Perhaps the reason is that there is less politics in the specific than in the general.
There is more on these topics in our book, which is available both in printed form (say, on Amazon) and online (free) at www.datascienceincontext.com.
Regarding Jack’s comment on the AI Pause Letter, I refer people to a recent Communications of the ACM blog post, Why They’re Worried (Kupiec and Struckman, July 17, 2023). It is a summary of a longer article written by the two students for my Spring class at MIT. (Their complete paper is on the portion of the above website that contains course materials associated with the book.)
It is very silly that people think there is no need for regulation. Every industry says “you will kill innovation, let us self-regulate,” and then, when momentum for regulation mounts, gets its representatives to drag their feet for years.
Technology needs regulation based on the damage it is doing: helping spread false information, getting people addicted to dopamine-driven content, and giving violent or criminal behavior a platform. Then there is the issue of AI-generated content and human intellectual property; AI-generated content needs to be labeled as such and should not be copyrightable, as an example.
You are falling into the same trap. As long as someone is making money, damage gets done regardless; think of all those fine people in pharma, tobacco, chemicals, aerospace, transportation, and finance, and now you need to add technology to the list.