Companion article: Machine in the Ghost, on the ethical implications of ChatGPT’s spooky ability to sound human.
Introduction
One of the most common questions I receive about artificial intelligence concerns its potential impact on various professions. It’s usually the profession of the person asking, and they often seem apprehensive about the answer I’ll give.
Most recently, however, the questions have centred on OpenAI’s ChatGPT: either its potential harms or — and increasingly — how it might, through some business jujutsu, be put to use making money. So, to help future interlocutors, this article will address both: it explains the first by showing how the system actually works and, from that, hopefully clarifies its suitability for the latter.
Before delving into the technical details, let me first be clear that large language models are not intelligent in any traditional sense. Instead, they create a somewhat eerie illusion of authentic conversation, one that convinces many people even though they are really exchanging words with an extensive compilation of highly optimised mathematical equations. Rather than a real interlocutor, users interact with the amalgamated viewpoint of a significant portion of the internet, which may well include their own contributions. Engaging with ChatGPT is therefore akin to communicating with a spectral version of oneself.
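That “compilation of mathematical equations” can be glimpsed in miniature. At its core, a language model assigns a score to every candidate next word and converts those scores into probabilities via the softmax function; the conversation you see is just this step repeated, word after word. A toy sketch (the words and scores here are invented for illustration, nothing like ChatGPT’s actual scale or vocabulary):

```python
import math

# Hypothetical raw scores ("logits") the model might assign to three
# candidate next words after some prompt. Invented numbers for illustration.
logits = {"cat": 2.0, "dog": 1.0, "car": 0.1}

# Softmax: exponentiate each score, then normalise so they sum to 1.
exp_scores = {tok: math.exp(s) for tok, s in logits.items()}
total = sum(exp_scores.values())
probs = {tok: e / total for tok, e in exp_scores.items()}

# The model then samples (or simply picks) a continuation from this
# probability distribution.
next_token = max(probs, key=probs.get)
print(next_token)  # → cat
```

A real model does this with billions of learned parameters and a vocabulary of tens of thousands of tokens, but the principle is the same: no understanding, just arithmetic over scores distilled from its training text.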