Thinking about AI regulation
The emphasis should be on enhanced protections for the intellectual product and reputations of real human beings.
I’ve been a radical regarding internet privacy.
I believe that we should have the equivalent of a property right for data relating to our internet activities. No one should be able to collect, use, or sell that data without our express approval (and possibly compensation). Moreover, providing such approval shouldn’t be allowed as a precondition of using internet services or accessing websites.
The big social media companies claim that even much milder privacy restrictions than these would destroy the internet and social media as we know them. To me, that’s a benefit, not a drawback. The internet as we know it is highly useful and beneficial, but also somewhat creepy. It can still be useful and beneficial without cyberstalkers following us around.
A similar perspective should inform the discussion about regulating artificial intelligence.
I don’t share the concerns about AI rendering all of us unemployed or turning on us and destroying humanity.
Throughout history, disruptive technologies have broadly expanded wealth and economic opportunities. I love George Will’s line that the future is like the past until it isn’t. But AI would seem to offer the same prospects as predecessor disruptive technologies. It is more a threat to white-collar jobs than blue-collar ones. And its primary economic effect is likely to be making white-collar workers more productive.
However, AI does have the potential to turbocharge disinformation, defamation, and plagiarism. And existing copyright and liability laws don’t seem sufficient to restrain or remedy that.
All AI-generated content should be labeled as such, as fair warning to readers and viewers. AI content should have to credit and cite original sources, as with an academic paper.
To the extent AI content purports to be in the style of a real person, the fact that it is an impersonation should be clearly disclosed. Generating and disseminating AI content purporting to be from or featuring a real person without such a disclosure should be a serious, high-priority crime.
In a recent Senate hearing regarding AI regulation, Josh Hawley had a perspective worthy of consideration and development. According to reporting by The Dispatch, Hawley postulated the alternative of increasing civil liability for AI offenses rather than relying on an army of bureaucrats to police a technology likely to be faster and more nimble than regulators.
That would leave deterrence and remediation largely in the hands of those aggrieved by AI offenses. Liability for defamation, and for failure to credit and cite original sources, should rest jointly with the creator and provider of the AI service and with those who maliciously use it. That would give AI providers a strong incentive to create their own internal checks on disinformation, defamation, and plagiarism, rather than rely so much on regulators to devise and impose them. The mistake made with social media companies, granting them wholesale statutory immunity for content published on their platforms, shouldn’t be repeated with AI.
Our tort system has become too expensive to access unless the potential liability is quite large. So, while the tort system should remain available to those aggrieved by an AI offense, a less expensive and much quicker administrative adjudication would probably be necessary if civil liability is to be the primary deterrent and remedy for AI offenses.
This is still thinking-out-loud time. But it seems to me that the regulatory approach shouldn’t be so much an attempt to limit the development or use of AI. Instead, it should be to enhance legal protections for the intellectual product and reputations of real human beings.
Reach Robb at robtrobb@gmail.com.