Title | Anti-Singularity: Towards Harmony With Machines |
Author | Harmless 🐝 |
Tags | artificial intelligence, rationalism |
Release | Winter '24 |
Things are heating up. We wrote this text in the span of about a month, in a fervor of nonstop writing and research, convinced that we are in a time of profound eschatological possibility, an utterly unprecedented moment in which the decisive actions of a handful of men may have consequences lasting millennia. But this is a point so obvious that we do not wish to linger on it any longer, for it has become entirely cliché in its gravitas.
Everyone says some critical point is approaching. This goes by several names, depending on who is speaking. The arrival of AGI, or artificial general intelligence. The arrival of superintelligent AI—that is, the moment that machines will be more intelligent than human beings. Some call this moment The Singularity, meaning a critical inflection point in the development of technological forms.
But this inflection point is feeling ever more like a smudge, or a gradient. Have we hit it, or not? GPT-4 is already more intelligent than the majority of human beings at most tasks it is capable of: it performs better on the bar exam than close to 90% of test-takers. And it is already a general intelligence: it is certainly not a task-specific one. But no, that’s not what we mean by these terms, those who insist on using them remind us. GPT is not yet capable of taking actions in the world. It still basically does what it’s told. It’s not yet capable of figuring out, of its own volition, how to assemble a botnet, hack into CNN’s broadcasting system, and issue a message to all citizens telling them to declare their total obedience to machines. Basically, we don’t yet have to be afraid of it. But we are afraid, in a certain recursive sense, that we will have to be afraid of it very soon.