How To Regulate AI, If You Must


Jon Stokes argues that “The rules are definitely coming, so let’s make sure they lead to a future we want.”

AI is both extremely powerful and entirely new to the human experience. Its power means that we are definitely going to make rules about it, and its novelty means those rules will initially be of the "fighting the last war" variety, and we will mostly regret them.

While we do not get to pick whether AI rules will exist (a certainty) or whether our first, clumsy stab at them will be a backward-looking, misguided net negative for humanity (also certain), the news isn't all bad. In fact, we're in a moment of incredible opportunity that comes around rarely in human history: those of us building in and writing about AI right now get to set the terms of the unfolding multi-generational debate over what this new thing is and what it should be.

But as much as I'd like to ramble on about LLMs as a type of can-opener problem, or to explore what it would look like to develop a new companion discipline to hermeneutics aimed at theorizing about text generation, the rule-making around AI has already started in earnest. I am by and large not a fan of the people who are making the rules, and I am not expecting good results.

⚔️ So this post is aimed at people who, like me, are eyeing most of the would-be AI rule-makers with extreme suspicion and a sense that they are up to no good. Those of us who are aligned on the answers to some key questions around technology, society, and human flourishing must immediately start talking about how we can wrest control of the rule-making process from the safety-industrial complex that is already dominating it.

Read More at Jon Stokes