5 Comments

As a co-author of Data Science in Context, I’m pleased to see the thoughtful discussion. For the record, my co-authors and I do not argue against all regulation. Rather, we tried to make sensible suggestions given the complexity of the topic and the potential for harm.

For example, with respect to “Regulate uses, not technology” [4], we are fairly confident that most regulation will, and should, be sectoral. Consider cars: the main regulation of semi- or fully-autonomous cars will relate to how well they abide by the rules of the road, and in particular, how safe they are. While there will be much debate about specific safety standards, liability, and more, it will not be focused on AI or Data Science in general.

Regarding “Regulate clear, not potential, problems” [5], we recognize that sometimes it is much better to avoid a problem than to clean up after it; however, we have to be very careful, as there are downsides to virtually everything. For example, I doubt that many internet services would have been allowed if we had considered every possible downside risk in advance.

Regarding Robin Hanson’s concern with the suggestion that we create “independent, consensus-building institutions for difficult problems” [8], I completely agree with his observations – in effect, that armchair “experts” may do a very poor job. However, we felt we needed to address the fact that (1) internet, data science, and artificial intelligence technologies are often “above-country,” so national institutions cannot work, (2) existing international organizations often do not have the right governance or expertise, and (3) both the national and international organizations that are specific to a technology or an application (e.g., standards bodies) sometimes do function well. Perhaps the reason is that there is less politics in the specific than in the general.

There is more on these topics in our book, which is available both in printed form (e.g., on Amazon) and online (free) at www.datascienceincontext.com.

Regarding Jack’s comment on the AI Pause Letter, I refer people to a recent Communications of the ACM blog post, Why They’re Worried (Kupiec and Struckman, July 17, 2023). It is a summary of a longer article the two students wrote for my Spring class at MIT. (Their complete paper is on the portion of the above website that contains course materials associated with the book.)


It is very silly that people think there is no need for regulation. Every industry says "you will kill innovation, let us self-regulate," and then, when momentum for regulation mounts, gets its representatives to drag their feet for years.

Technology needs regulation based on the damage it is doing: helping spread false information, making people addicted to dopamine-driven content, and giving violent or criminal behavior a platform. Then there is the issue of AI-generated content and human intellectual property; such content needs to be labeled as AI-generated and should not be copyrightable, for example.

You are falling into the same trap: as long as someone makes money, the harm gets excused. Think of all those fine industries doing damage regardless: pharma, tobacco, chemicals, aerospace, transportation, finance, and now you need to add technology to the list.


The problem with the working group approach is that either (a) a member has no skin in the game on the outcome, in which case their incentives are to resume-pad and/or virtue signal, or (b) a member does have skin in the game, in which case they advocate according to their direct interests. In neither case should we expect an "optimal" recommendation to result.

A good recent example is the "Pause Giant AI Experiments" open letter from the Future of Life Institute, now with over 33k signatories. It's great PR but useless as a way to improve AI safety, and it's impossible to enforce. The major signatories are either competitors to OpenAI, or academics who would likely benefit if more AI Safety research grants were sprinkled around.
