The False AI Moral Panic
Why the AI moral panic is more eschatological than rational
At this precise moment in civilization we find ourselves interrogating the question of machine consciousness, and by extension whether the extinction of human civilization is imminent. For those who hold this premise with absolute conviction, the onset of machine intelligence has become a moral panic: we are actively participating in the creation of some entity, intermediary, or system that will render an end to humanity. However, the premise of machine consciousness is unfounded, and this false moral panic takes the shape of an eschatology rather than anything grounded in practical reality.
Since the release of ChatGPT in November 2022 I’ve spent thousands of hours using LLMs (Large Language Models), primarily for the task of writing software, and I can confidently say that LLMs are in no capacity conscious, much less intelligent.
Every time a new LLM is released, the model is run through a gauntlet of benchmarks, typically demonstrating improvement across the board on a variety of tasks. To the unassuming observer it would seem there is no end to this iteration loop; as long as improvements continue to accrue, AGI becomes an inevitability. These same models are administered human intelligence assays, prompting AI doom advocates to believe the end is nigh, citing as evidence that LLMs have already surpassed humans: models like o3 from OpenAI scored an IQ of 137 on the Mensa Norway IQ test.

However, LLMs are trained on essentially the entire corpus of internet text, so every IQ test and its associated answers published online make it into the model’s training data, even before considering that a foundation-model provider might have done specific post-training to optimize for higher IQ test scores (you can think of this as a marketing budget). Humans, too, can practice IQ tests to prime themselves for a higher score. But no one assumes a computer is smarter than a person because it can recall more information, so it is a fallacy to assume human measures of intelligence are valid means of measuring purported machine intelligence. Just because an LLM outputs text responses similar to a human’s does not mean the methods and underlying architecture producing those words are in any capacity similar, let alone identical, to a human’s. If we suddenly had an unlimited wellspring of 137+ IQ minds that could output billions of brain-hours of productivity on demand, we should be seeing major novel research discoveries non-stop, across the plethora of fields currently neglected for lack of funding.
While synthetic benchmarks are one form of measure, real-world performance is often entirely uncorrelated with them. I’ve become confident in the ability of an LLM like Claude Sonnet (3.5, 3.7, 4, 4.5) to solve a specific set of web-development tasks, but that is contingent on whether I have sufficient background knowledge to reference the correct libraries in the prompt. One variable is the training data cutoff date: whether the version of the library I want to use is native to the underlying foundation model, or whether documentation for my preferred version needs to be provided in the prompt(s).
Recently I came across a problem that, at least conceptually, I thought would be trivial for an LLM to understand and build. I wanted to create a rendering template that strictly adheres to the dimensions of a standard A4 document, so that if I needed to make a printable resume I could use standard HTML and CSS rather than a word processor like Microsoft Word. I spent hours in succession trying to prompt this template into existence using Claude Sonnet 3.5 (then repeated the exercise with GPT-5 and Claude Sonnet 3.7, 4.0, and 4.5) but was unsuccessful.
While I was able to provide the dimensions of an A4 document and create a digital representation of the physical page using precise millimeter measurements, I faced major difficulty in prompting the LLM to overflow content into a new digital page. The HTML standard was designed to represent infinitely long documents, so the very premise of arbitrarily separating web content based on physical dimensions is a bit anathema to the spirit of the technology, but that doesn’t put the feature outside the realm of possibility. By leveraging my web-development skills through many precise queries in succession, I was able to get web content to overflow as if it were separated between pages like a physical book or PDF. One more feature remained to complete the functionality: converting the HTML content adhering to the A4 dimensions into a PDF. I was never able to get this last feature to work, regardless of how many attempts I made.
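For reference, the core of such a paged layout rests on standard CSS paged-media features. A minimal sketch, assuming a `.page` class of my own naming (not from any particular library), looks roughly like this:

```css
/* A4 page: 210mm × 297mm; @page controls the printed sheet itself. */
@page {
  size: A4;
  margin: 0;
}

.page {
  width: 210mm;
  height: 297mm;
  padding: 20mm;      /* inner margin for printable content */
  box-sizing: border-box;
  overflow: hidden;   /* clip anything exceeding one page */
  break-after: page;  /* force a page break when printing */
}
```

Note that CSS alone only clips overflowing content; splitting long content into additional `.page` elements still requires measuring rendered heights with JavaScript, which is exactly the part the LLM struggled to reason about.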
The LLM did not understand what I meant when the PDF converted from the HTML failed to match the digital preview. It doesn’t “see” anything, nor does it have any expectation of what a printed document should look like relative to the preview rendered on screen; ultimately the world model of an LLM is a linear sequence of tokens, nothing less and nothing more.
The example above demonstrates that irrespective of how capable LLMs are at predicting tokens in a linear sequence, their world model bears no continuity with the physical world. LLMs capture the relative differences between words but are detached from the base units of reality that human language refers to. There was a fundamental limitation: it did not matter how precisely I defined the dimensions of the physical piece of paper, nor how much context I gave the model. The LLM does not have a world model of physical reality the way a human does, which renders benchmarks in the form of human IQ assays invalid.
First-Principles Counterarguments
While the example above demonstrates the limitations of LLMs in terms of knowledge and proportional intuition of the physical world, in theory new technologies beyond the LLM could be developed to ameliorate and possibly even overcome these deficits.
The fundamental thesis of the AI moral panic is that the continual application of human capital and resources toward the proliferation of AI will create a system or entity that leads to the destruction of humanity. The argument follows that the universe provides infinite computational potentiality via all the possible orientations of physical matter; by capturing this information via a recursive algorithmic process, machine omniscience can manifest. Unlike text data on the internet, data of the physical universe is unbounded, so there would be no upper limit to the capability of an AI system. This results in the paperclip-optimizer scenario, where even a benign task such as maximizing paperclip production could end all life on earth.
Before the possibility of machine omniscience via algorithmic recursion can be considered, there are hurdles in the form of fundamental physics and formal logic that must be cleared first.
Gödel’s incompleteness theorem states that there are truths about a formal system that cannot be proven using the axioms established by the system itself. In the same vein there exists the infamous Halting Problem: as Alan Turing proved, no general algorithm can determine, for every program and input, whether that program will eventually halt or run forever. What both Gödel’s incompleteness theorem and the Halting Problem communicate is that there must be information that exists outside the confines of any axiomatically defined system, upending the premise of a universal problem-solving algorithm coming into being.
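Turing’s proof is a short diagonalization argument that can be sketched in a few lines of Python. Assume, for contradiction, that a total decision procedure `halts(program, arg)` exists; the function names here are illustrative, not any real API:

```python
def halts(program, arg):
    """Hypothetical total decider: returns True iff program(arg) halts.
    Turing proved no such algorithm can exist; this stub stands in for
    the assumed decider in the contradiction below."""
    raise NotImplementedError("no such total algorithm exists")

def paradox(program):
    """Do the opposite of whatever halts() predicts about program(program)."""
    if halts(program, program):
        while True:   # halts() said we halt, so loop forever
            pass
    return            # halts() said we loop forever, so halt

# Feeding paradox to itself yields the contradiction: if
# halts(paradox, paradox) returned True, paradox(paradox) would loop
# forever; if it returned False, paradox(paradox) would halt.
# Either way halts() is wrong, so no total halts() can exist.
```

The contradiction lives entirely in `paradox(paradox)`: any answer the assumed decider gives is immediately falsified by the program built from it.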
Even if alternative formal systems not bound by the Halting Problem or Gödel’s incompleteness theorem could be discovered, fundamental limitations in physics still stand in the way. The double-slit experiment shows that a particle behaves differently whilst being measured, demonstrating that even seemingly isolated physical systems purported to be deterministic are not in fact so. The Heisenberg uncertainty principle points in the same direction: the position and the momentum of a particle cannot both be known with arbitrary precision at the same time, casting doubt on the limits of measurement and on what properties of a physical system can be absolutely known.
However, even if the limitations of formal systems and quantum physics stated above could be worked around, there is a still larger problem to confront. The second law of thermodynamics states that entropy is always increasing. Even if we could perfectly capture the position of every particle in the universe at time t, we can’t know for certain where every particle will be at time t+1, because the number of possible particle arrangements is higher at t+1 than at t.
Ever-increasing entropy removes the possibility of a perfect goal-to-action mapper, because any attempt to measure every particle at moment t yields incomplete information for all subsequent moments, rendering the paperclip optimizer theoretically untenable.
For this universal problem-solving algorithm (in simpler words, omniscience) to exist, we’d need evidence of a deterministic universe; given what we know about the limitations of quantum physics and formal logic, no such algorithm seems able to arise.
For a moment, let us set aside the theoretical limitations above, as there could be a simpler means of validating this hypothesis at a smaller scale. One can argue that life, at least in its emergence, is a self-correcting quantity with the explicit capacity to self-organize by its own means. If the primordial-soup hypothesis (abiogenesis) for the origin of life on earth can be validated via experimental evidence, then dialogue surrounding the creation of a self-emerging, self-organizing system can commence. The dynamics of such a system could serve as a proxy for interpolating the growth curve of a conscious, self-sustaining, self-permuting mechanistic or non-deterministic system.
However, even if abiogenesis is validated up to the point of autonomously creating self-replicating proteins, let alone anything resembling multicellular life, this alone is not evidence of existential risk. Why should we assume there’s no fundamental limit to the abiogenesis path trace, and what would be the time horizons necessary to create organisms that could present an existential risk to humanity? Why would an abiogenesis path trace of a mechanistic AI system continue on without physical restrictions, unlike all other physical systems that saturate onto some upper bound?
If machine abiogenesis is possible, can an analog of Moore’s Law, as a function of input compute, be formulated to predict the output capability of future iterations of the technology, whilst demonstrating there are no fundamental limits to the growth curve? One key caveat: it is humans who currently iterate on AI models and are responsible for the improvement we see today, so this analogical “Moore’s Law” would have to be adapted to predict the progress of a completely self-directed system, bereft of any human guidance or input at all.
These infinite-feedback-loop examples, each some permutation of the paperclip optimizer, are only possible under the assumption that the universe is deterministic and Turing complete. These purported goal-to-action mappers are thus mapped onto a false rubric of theoretical reality, not onto measured reality as bounded by the phenomena above, such as the second law of thermodynamics, which alone categorically disproves the emergence of machine omniscience.
Eschatological Ramifications
When you take a deeper look into this AI moral panic, it begins to resemble something closer to an eschatology rather than anything pragmatic or rooted in reality.
Imagine if a scientist of the magnitude of Albert Einstein or Stephen Hawking began proclaiming that the moment a human steps foot on Mars, there is an off-chance the solar system will implode, in effect rendering an end to life on earth.
Given the pedigree of said scientists and their contributions to the human body of knowledge, such a warning would be taken with considerable heed.
Beginning with this premise, how would one go about answering this question? Among the first questions you’d want to ask: what is it about a human stepping foot on Mars that would create this implosion? Is there some electromagnetic radiation produced by a human responsible for this interaction, or is it simply the mass of a human projecting force on Mars that would cause the implosion?
For the sake of argument, let’s assume it is the mass of a human that would cause this implosion and not some electromagnetic effect. Logically we would try to answer the question by transporting an object orders of magnitude lighter than a human to Mars, releasing the object in mid-air onto the surface (assuming we have a spaceship hovering above the surface but never touching the ground), and observing whether any reaction from the planet can be measured in response to the object colliding with the surface.
Now, if the scientist proclaiming this cataclysmic event were to reject this proposed experiment by claiming that doom would still be inevitable simply from testing the hypothesis, one would have to conclude that the thesis is fundamentally unfalsifiable.
This would be no different than another scientist of the same pedigree claiming that stirring a glass of water in a particular manner could theoretically trigger a hydrogen bomb from the collision of two hydrogen ions, and that from this point forward people ought to stop stirring water to prevent said cataclysm.
The truth is that both claims above become unfalsifiable the moment the possibility of an alternative hypothesis is rejected. If we are to take these claims seriously, we must then consider all other unfalsifiable claims, especially those of the eschatological variety, be it the Book of Revelation, the Kali Yuga, Ragnarök, or any other surviving human tradition speaking of an apocalyptic ending.
Consciousness
As for the last and most important question, one that certainly will not be answered conclusively: what is consciousness, and what might machine consciousness be, if it is possible at all?
If we were to record every word someone spoke over their lifetime, then played the words back in a different order through a loudspeaker locked in a closed room, would we say the loudspeaker is conscious? If a person in that same room were repeating a sequence of the same words in succession, would we assume those sounds were the output of a computer program? By the same token, if a random number generator began outputting sounds through that same loudspeaker, we wouldn’t consider the loudspeaker conscious either.
If a human were put into a box that constrained all their physical movements, and one attempted to converse with the box but no detectable outputs could be measured, it would not be correct to say the box is not conscious. If the person in the box could hear the person outside and willed with all their might to respond but was physically unable to, we wouldn’t say the person in the box is any less conscious than the person outside it.
The point of the loudspeaker analogy is that one way to think about an LLM is that its entire training corpus is the recorded “voices” of the collective human species in digital form, and when it returns a written response, that is more or less equivalent to the loudspeaker playing back sounds in a scrambled order. That said, even if the sounds played back are indistinguishable from a human’s, the example of the human in the box demonstrates that measurable outputs alone are not enough to define personhood: a person unable to project force onto the external world with their physical body is no less of a person than one who can.
What exactly assigns personhood to humans? Each time you walk outside, ionizing radiation from the sun mutates your DNA in microscopic proportions, in effect changing your DNA and “you” in the materialist conception of personhood, yet no one reasonably claims you’re less “you” even though your DNA has been altered. If your limbs were removed from your body, we wouldn’t then remove the personhood from the person. Clearly there must be a component of personhood predating the physical body, anatomical proportions, measured outputs, and by extension DNA.
To consider physical measure the determinant attribute of personhood, we ought to see science able to transmute one species into another, say by morphing a pig into a human, or a human into a pig and back into a human, with personhood seemingly unchanged before and after the transmutation. This hypothetical wouldn’t disprove that a human likeness precedes the physical human form, but it would demonstrate that there is no path dependency unique to the human composition: any genetic material could be modified to take on the human shape, showing that the human rendition as we know it is simply a coincidence of one particular arrangement. Given that we have not, and may never, create life de novo from inanimate matter, I think it’s a reasonable proposition to push back against the notion of materialism being the sole determinant of life, and by extension consciousness.
If personhood is just an emergent apparition of complexity, what is the image and likeness of self that the body organizes itself around? If every particle in the universe is subject to entropy, why assume memory, or any kind of self-reference, is possible? Why should particles bind in the shape of a human when the entropic assumption mandates a less ordered configuration as the default steady state? The body recycles its cells over a span of roughly seven years, yet that doesn’t mean you were ever less yourself during this process of cellular turnover, even though nearly every cell comprising you was replaced.
We see a similar dilemma in the paradox of creation, which mandates that an unmoved mover permuting upon itself must have been the force animating the universe whole. In that same vein, to be made in the image and likeness of God is to be ensouled by an unmoved mover, the soul, which can permute upon itself no differently than how the unmoved creator permuted onto the universe. What this means is that there must be an attribute of being itself that remains unchanged by the external world, an immutable anchor functioning as an idempotent reference point, thereby enabling the self-recursion that is one of the primary attributes of consciousness.
Given the objections stated above, I’m going to stick to personhood as a supra-rational proposition. Personhood can then be analogized as an amorphous semi-solid fluid, where genetics are the dissolved solids in this solution, and it is the solvent, the dominant quantity in the solution, that is responsible for the force that animates life and personhood, rather than the solutes (the dissolved DNA). In this definition, even if you were to transmute a human genome to resemble that of a pig, this human essence must remain unchanged.
Extending these analogies to AI systems, one would then have to conclude that any “personhood” attributable to an AI system could not be determined by the numerical model weights determining its outputs, but rather by something preceding the model weights altogether.
For a moment, let us forego the premise of supra-rational consciousness and assume that only emergent physical interactions are responsible for being. What we would then be looking at is the creation of a recursive algorithm that behaves in a manner similar, if not identical, to human consciousness. If proof of a dynamic self-organizing system cannot be provided, i.e., creating life de novo out of inanimate matter, then we must be looking at some finite-state declaration of an arbitrary set of variables producing said output: something akin to a classical computer program with a finite set of arbitrarily defined variables instantiated by a human.
This system would have to deal with many fundamental questions, the first one being what is the final state of abstraction for said recursion to end at?
How can the system know whether a state has been arrived at from which no more signal can be reached? How can this system doubt its own methods of measurement before repeating a new measurement in succession? How would it avoid breaking into a loop where each subsequent measurement stops being distinguishable, and what would it take to halt that process and ponder until an alternative solution is reached?
If the final state of abstraction were defined as the maximal proliferation of life, you’re then posed with the question: what exactly constitutes life? If you’re left describing life only in materialistic presuppositions, life can only be defined as the maximal manifestation of anti-entropy. Then you’re left with another question: can there exist a more ordered permutation of anti-entropy than life itself?
To be fair, these are questions that both science and philosophy have contended with since their inception, but what is being demonstrated here is the magnitude of the difficulty, and all the possible unknown unknowns, in claiming that a finite-state system can converge to omniscience mechanistically.
Conclusions
Returning to the original premise of the AI moral panic: given the limitations of formal logic and fundamental physics stated above (not to mention the limitations of language), it does not seem possible for machine omniscience to ever arise. For it to be possible, the world as we know it would have to be both Turing complete and deterministic; what the double-slit experiment and the Heisenberg uncertainty principle tell us is that this is not the case.
Even in best-case scenarios, i.e., giving LLMs Turing-complete spaces to operate in, as demonstrated by the task of creating a digital representation of a physical document, the limitations are ever more plentiful. It does not matter if LLMs can score highly on IQ tests if their base reference for reality is a linear sequence of tokens rather than the physical world around us.
If machine omniscience were imminent and the gaps in knowledge pertaining to proportional intuitions of the physical world could be ignored, we’d expect to be able to prompt an LLM: “create a fork of the Chromium project starting from the version where support for MV2 Chrome extensions was dropped, then backport all the features from newer versions of the Chromium repository into this fork.” For context, this codebase is many millions of lines of code, powers nearly all modern browsers, most notably Google Chrome, and is maintained by thousands of highly skilled contributors. Properly fulfilled, this would be a feat of gargantuan proportions, but if the claim is machine omniscience, this is exactly what we’d expect such a system to do.
If LLMs were truly as intelligent as we make them out to be, we could train one on the entire corpus of human knowledge prior to June 1905, when Albert Einstein’s special theory of relativity was published, then prompt it: “envision yourself working as a patent clerk, imagining in your mind that you’re riding on a ray of light; now speak to the true nature of light that has not yet been captured, by formulating a new theorem using theoretical physics.” I can say with almost full certainty that nothing of value would come out of an LLM given that prompt and that restriction on its training data.
The brilliance of Einstein’s formulation was that it was done solely as an exercise of thought rather than by physical experimentation. If an LLM is truly capable of generalization but limited only by knowledge of the physical world, this prompt should be a good test of its ability to generate new knowledge. Even if an LLM were given a probe to perturb the physical world, my expectations remain just as low for its ability to arrive at a formulation of the magnitude of the special theory of relativity.
To reframe this conversation in a larger context, I think the true content of this dialogue is a wrapper over the same debate of theism vs. atheism, and materialism vs. the forms, that has been present from the beginning of philosophy and possibly of human consciousness itself.
If you believe the governing primitive of the world is the Platonic form and not the Aristotelian particular, you’re going to lend credence to belief in God and a creator as the prime emancipator of said manifestations into the world. If you’re more inclined to the particular, you’re going to be less amenable to the idea of the human soul, or anything pertaining to a human likeness that precedes and supersedes the physical body, and thus prefer consciousness as an emergent function of complex particular interactions rather than a feature of the human soul.
If entropy is the only truth you adhere to but you are a little bit hopeful, the possibility of machine omniscience is going to be the closest thing to salvation you’ll come across, and when a machine starts talking like a human, what better stimulant could validate the inkling of hope that there might be a way to defeat entropy once and for all?
However, if entropy is your only guiding paradigm but you are without hope, life can only be assumed to be a statistical aberration, and the mere possibility of machine omniscience becomes a threat to an already fickle state of the universe, leading you to the AI safety position.
Among AI-safety-oriented people you’ll see questions along the lines of “If we create AGI, how do we know whether or not it’s going to kill us all?” Theology has already traversed similar dialogue trees via perennial questions such as “If God is omnipotent, why does evil exist?” or “If God is all-knowing, how can I have free will?”
Theology is no stranger to confronting paradoxes at the nexus of the unfalsifiable trifecta of omniscience, omnipotence, and omnipresence. There ought to be more cross-pollination between these two worlds, but their governing primitives are so anathema to one another that they almost never come into contact.
The technologists are too arrogant to consider even the possibility of inspiration through divine revelation and proclamation, whilst the theologians will not contend with anything less than the direct word of God.
If the AI doomers professing cataclysm in absolute terms had a modicum of intellectual honesty, let alone the humility to recognize they’re making an unfalsifiable claim, maybe they’d realize what they’re doing is akin to yelling fire in a movie theater when the only fire present is an apparition forged of their own machinations.
By entertaining said machinations as a possibility, we’ve functionally created a modern-day analog to medieval indulgences: a tax, in the form of staggered technological development, levied on the whole of society for the prevention of the eternal damnation of all life on earth.
The default assumption for any physical system is a steady state: unless perturbed by an external force, we expect an object at rest to stay at rest rather than spontaneously combust. The burden of proof therefore lies on the advocates of AI doom to show why existential risk should even be considered.
If the doom hypothesis is predicated on a sequential ordering of events, each step must be grounded in a falsifiable physical phenomenon; otherwise the hypothesis is no different from any of the other eschatologies man has come to know. That means any claim of spontaneous self-organization needs to be grounded in verifiable evidence of a de novo self-organizing system, akin to how life continues on indefinitely without randomly diverging into incorrigible noise.
If you’re a materialist and as such reject the human soul, any hypothetical along the lines of “would AGI kill us all” must be rendered invalid, because the very premise of a self necessitates personhood; if particulate matter is all you believe in, there can be no constitution of “self” for an AGI to make a decision upon, only a determined output contingent on an input. Therefore any evidence leveraging an LLM response stating some permutation of “humanity must be brought to extinction” should be considered an artifact of its training data, most probably attributable to the plurality of internet dialogues on the many hypotheticals of science fiction.
However, if you’re not a materialist and claim an LLM is conscious, you’ll need an analog to the conception of the soul: an idempotent reference point that precedes the model weights whilst being invariant to the external world. This opens a Pandora’s box of unknown unknowns and many other contingent unfalsifiable assumptions about the world needed to corroborate the claim.
If there is a topic that warrants a moral panic, it is the birth-rate crisis. It is something indubitably happening before our eyes, ongoing for more than three generations, with ramifications that could render civilization back into a new dark age.
To bring this home, I think the only way we can break past our ideological blinders is to foster a world where the technologists capable of building can step into the thinking, whilst the philosophers, theologians, and people of thought can exit the realm of ideas and commence with the cold, hard building if need be.
If we find the means to cultivate such cross-pollination, maybe we can focus on building a better world rather than tilting at windmills conjured from abject machinations. Maybe only then could we generate the world we’d want to live in. The time to generate is now.
Notes and Extra Reading
I wrote another essay similar to this one, “Pascal’s Wager and the Limitation of Language for AI,” which goes into depth on why the medium of language is insufficient for generating machine omniscience of the physical world.
Long-form X post on why there is no tenable path to AI doom
Long-form X post on an attempt to steelman the case for the AI doom hypothesis
Long-form X post on an alternative way to think about the question of AI
If you like my work feel free to give me a follow on Twitter/X