The opinions on AI that you find on the internet tend to fall at one extreme or the other. Either AI is the downfall of humanity or its savior. My thoughts on the subject, as on many others, ride in the middle of the road.
In my professional life, AI has been a great equalizer. If you know what problem you are trying to solve but not how to actually go about solving it, AI can be the bridge between concept and reality. It would take me many hours to probe the depths of the Azure cloud through labyrinthine Log Analytics workspaces to find the causes of a spike in ingestion costs. With the Azure MCP and Claude Cowork/Code, it’s done in minutes.
I’m not blind to the problems with AI, though. As a simple illustration, when I’m using Claude in my personal life, it can almost never remember what day it is, even when I explicitly tell it.¹ Isn’t a calculation like that one of the easiest things for a computer to do? It seems to defy logic.
A couple of thoughts reached my Matter inbox in the past few weeks about the Achilles’ heels of AI. One was from David French, who brings up legal culpability for problems caused by AI.
The nature of A.I. puts its creators in a bind. The point of the technology is that it will do things — at least to some degree — on its own. But under common law, humans will be liable for what A.I. does. This means the A.I. companies (and perhaps individual executives) can be legally responsible for actions they didn’t commit and for effects they did not intend.
Someone has to be held accountable for mistakes by AI models, some of which have been egregious. We can’t apply punitive judgments to a non-human entity, regardless of how human it may seem.
Jacob Noti-Victor writes for The Atlantic about another secret weapon against AI dominance: copyright law.
But the future of creative labor will more likely be decided through a different question within copyright law, one that has received far less attention: To what extent should AI-generated works receive copyright protection at all? In a 2024 case, Thaler v. Perlmutter, the Court of Appeals for the District of Columbia held that a work generated autonomously by an AI system cannot be protected by copyright, because copyright requires a human “author.” The Supreme Court declined to review that decision in March. With the lower-court decision left in place, the question now becomes how much AI content can be incorporated into a work before it becomes mostly or totally uncopyrightable; courts have not yet weighed in on this but may soon.
Works generated by AI cannot, at least at this point, be legally copyrighted. Mickey Mouse may be in the public domain now, but how would Disney have built its brand if he had not been protected by intellectual property laws early on?
Many times, AI triumphalism is taken for granted. History is not written in stone, though, and there are still some weighty considerations that could disrupt the inevitability of AI dominance.
1. Claude, with its vast capabilities and empathetic tone, can be ideal for managing chronic illness. ↩︎