Everyone knows that OpenAI’s ChatGPT has introduced the world to the benefits, as well as some of the drawbacks, of artificial intelligence, large language models, and AI-powered chat. We have already seen that AI-powered chatbots can provide tremendous benefits to those of us who write professionally: emails, marketing plans, cover letters, blog post prompts, full-blown articles, programming tasks, grant proposals, and beyond. Note: no chatbots were used in the creation of this article.
We have also seen chatbots make absurd statements that are demonstrably false. Even as you marvel at how much ChatGPT has helped you with your presentation, you must also be aware that products of its kind have behaved in what seems an irrational manner on more than one occasion. For instance, in a recent “conversation” I had with ChatGPT, the chatbot told me of several instances in which disinformation had negatively impacted a well-known consumer brand. But when I checked the stories independently, it turned out they were all invented. “Hallucination,” in fact, is the term for these chatty untruths. When I pointed out that I had asked for instances of disinformation only to be served disinformation about disinformation, ChatGPT was apologetic but offered no solution.
It would be a very different world if we had a choice in the matter.
One obvious solution to any chatbot problem is simply not to use chatbots. But I am pretty sure we have already passed the day when that was possible. So we should expect to be interacting with artificially intelligent bots for the foreseeable future.
Bad Bot
What would happen if a chatbot really was designed to do bad things?
We already know of a chatbot that’s been designed to figure out how to destroy humanity. According to an article at decrypt.co, one of ChaosGPT’s goals was to “Control humanity through manipulation: The AI plans to control human emotions through social media and other communication channels, brainwashing its followers to carry out its evil agenda.”
Having first attempted to figure out how to obtain a hydrogen bomb, this particular chatbot soon gave up on that goal. Instead, it opted for human manipulation via Twitter (now known as “X”). Whether this had anything to do with the fact that Twitter is now owned by a right-wing crank billionaire, I cannot say. But in any case, I believe we ought to stay vigilant about what AI might have in store. As John Naughton noted in a recent article in The Guardian US, “Sam Altman, the CEO of OpenAI, the outfit that created ChatGPT, observed that ‘AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies’.”
The End of the World as We Know It
I am not sure I agree with Sam that chatbots will conquer humanity, but I do believe we will soon see artificial intelligence more as a rival to humanity’s earthly dominance, and less as a tool to help us write marketing emails. A simple exercise proves illuminating. Add together the following: untrammeled free speech (troll farms included!), artificial intelligence, and the eventual “internet of things,” in which everything is digitized, monitored, and controlled. The output of this equation might well be the rather sudden emergence of a silicon superpower that humbles even the carbon-based mighty.
What is most peculiar about all of this is that government seems to have little or no role in helping humanity avoid annihilation via artificial intelligence. Nor does government play much of a role in regulating anything at all having to do with web- and app-based offerings. I am not by nature a lover of government regulation, and I would much rather avoid it if at all possible.
The Limits of Free Speech
But I believe we have passed the point where non-regulation of the Internet is compatible with the survival of Western-style democracy, and possibly even of humanity in general. As a fan of free speech, I find our present conundrum especially troubling. I dislike constraints on speech. But I also know that not all speech is protected under the Constitution. For instance, you cannot yell “fire” in a crowded theater if there is no fire. Nor, at least in the world of print and broadcast, can you publish known falsehoods or make personal threats, because off the Internet there are libel laws that restrict the kinds of misrepresentations you can make with impunity. Fox News, not an Internet enterprise, learned this the hard way: it recently settled a defamation suit with Dominion Voting Systems for $787.5 million, roughly three quarters of a billion dollars.
Fox got caught here because it is a cable network. That it sourced much of its nonsense from tweets and Facebook posts did not shield it. But to me, the critical question is why neither X nor Facebook has any liability whatsoever for those same tweets and posts.
Platforms Above the Law
The reason they are not liable is an outdated law known as Section 230 of the Communications Decency Act, which protects them from liability for the third-party content they host and amplify on their sites. I have written elsewhere about how this unique protection is probably largely responsible for the destruction of civil discourse in the US. In what other scenario would it be possible for Facebook to publish, promote, and monetize hate speech while bearing no responsibility for it? The Facebooks of the world want you to believe they are hands-off, phone-company-like entities with no responsibility (and, if we can believe them, no ability!) to keep you from using their platforms to promote violence and hatred.
But these are mischaracterizations. In fact, they have every right, all the power, and all the technical ability to block you, disrupt you, and kick you off their platforms at will. By way of demonstrating how inequitable this arrangement is: I know someone who was recently banned from the social media site Reddit for suggesting that a “National Divorce” might benefit blue states. Not censured, not temporarily blocked, but permanently banned! Under Section 230, platforms like Facebook, Reddit, and all the rest have all the power and none of the responsibility, and the result is communications chaos. One of the chief products of that chaos will be the development and dissemination of chatbots that want to hurt you. And our government seems unable to grasp the implications of these emerging technologies, perhaps because the average age of our elected officials is not unlike the biblical ages of the ancient prophets.
What Protects Us?
I am kidding, of course. But with a Supreme Court that has already admitted it has “no expertise” in technology, we are left to wonder who in government can really play the role government ought to play: safeguarding the lives of citizens while not improperly infringing on speech. Probably no one. And that is what we ought to worry about as biased, targeted, hate-based artificial intelligence is rolled out by chaotic, fascist trolls and their anti-democratic state actors.
That is why a better understanding of what constitutes a fact, as opposed to “alternative facts,” is key to making headway in a chatbot world. The goal is not simply to know facts (we cannot know all of them!) but to better understand what facts look like in comparison with untruths, and how to counter falsehoods with truth. Stay tuned.