Artificial Intelligence Deconstructed and Demythologized
Bruno Alabiso is an Italian-American computer scientist and former Microsoft researcher who worked on early spoken-language and dialogue systems at Microsoft Research, including the LEAP (Language Enabled Applications) initiative. His work sits at the intersection of language, computation, and human–machine interaction, and he now writes with ontological sobriety about what today’s “AI” is—and what it is not.
Ontological sobriety refers to a disciplined refusal to attribute being, agency, understanding, or care where they do not in fact belong.
Bruno Alabiso
Seattle, October 30, 2025
The flood of opinions, consensus and dissent, fears and promises arising from this so‑called “new” technological phenomenon, incorrectly called “Artificial Intelligence,” is now in full swing. Now that the levees have burst, everything seems subject to the unstoppable advance of this latest Promethean usurpation, one that would leave the gods in a role compromised beyond anything previously thinkable. But I wonder: are we sure we know what AI is and what it is not?
Without wanting to take a principled position on the subject, it is worth looking at how this technological marvel actually works, in order to have more concrete bases on which to form an opinion.
How do AIs work?
First, it is worth considering a fundamental difference between these systems and the computer systems developed in the past through programming.
When you program an application, you manually compile a list of instructions that are then “loaded” into an electronic machine (computers, mobile phones, servers, cars, household devices, etc.). These instructions are expressed in an unambiguous, logical, formal language that guarantees predictable behavior. An algorithm is nothing more than a method to solve a particular problem or provide a particular function. The program is the embodiment of this method, expressed in a series of operational codes that guide the operation of a computerized system in a predictable and mechanical way.
To use an example as trivial as it is descriptive, consider the method we all learned in elementary school to perform long division with pen and paper. We are taught the necessary steps (often without explaining the reasons) and we slavishly follow the “instructions” to complete the operation. The method used is the algorithm for performing division.
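To make the notion concrete, here is that same schoolbook procedure written out as an explicit list of instructions, in a short Python sketch of my own (the function name and structure are illustrative, not taken from any particular system). Given the same inputs, it always performs the same steps and always returns the same result; this predictability is the hallmark of classical programming.

    def long_division(dividend, divisor):
        """Schoolbook long division: returns (quotient, remainder)."""
        if divisor == 0:
            raise ValueError("division by zero")
        quotient_digits = []
        remainder = 0
        for digit in str(dividend):              # bring down one digit at a time
            remainder = remainder * 10 + int(digit)
            quotient_digits.append(str(remainder // divisor))
            remainder = remainder % divisor      # carry the leftover forward
        return int("".join(quotient_digits)), remainder

    print(long_division(7829, 6))   # (1304, 5): 7829 = 6 * 1304 + 5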
With the advent of so‑called neural networks, which are the basis of all modern AI technology, this paradigm has been subverted.
A neural network is made up of a gigantic number of artificial neurons connected to each other. An artificial (or simulated) neuron is actually a small program, written with traditional instructions and classical programming, that mimics the function of a biological neuron. It accepts data from other neurons to which it is programmatically connected, processes them through very simple functions, and generates outputs that are transmitted to other points in the network.
Each artificial neuron (i.e., the mini‑program that simulates it) has so‑called weights that determine the relative importance that that particular neuron has in the processing and overall propagation of data. These weights are essential in neural‑network training.
Such a system, faced with any situation (for example, a request from a user), would initially produce nothing more than a meaningless, randomly generated response. Only when the weights have been properly calibrated is it possible to obtain “sensible” behaviors or responses.
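To see how small such a mini‑program really is, here is a minimal sketch in Python. Everything about it is an illustrative choice of mine (the names, the particular “activation” function, the number of inputs); the point is only that a simulated neuron is a few lines of ordinary code whose output is governed entirely by its weights, and that before training, with random weights, that output is meaningless.

    import random

    def artificial_neuron(inputs, weights, bias):
        """One simulated neuron: a weighted sum of its inputs, passed through a simple non-linearity."""
        total = bias + sum(x * w for x, w in zip(inputs, weights))
        return max(0.0, total)   # let positive signals through, block negative ones (a "ReLU")

    # Before training, the weights are just random numbers with no meaning:
    weights = [random.uniform(-1, 1) for _ in range(3)]
    bias = random.uniform(-1, 1)
    print(artificial_neuron([0.2, 0.7, 0.1], weights, bias))   # an arbitrary, "senseless" output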
The training phase – the network learns… but from whom?
To calibrate the weights, the network must be trained. This is done iteratively, following roughly these steps:
Pre‑training phase: the network is provided with an immense volume of data extracted from pre‑existing documents (books, articles, websites, etc.). Obviously, the choice of this data will condition the “behavior” of the AI. During this process, the first “draft” of the language model is formed: the weights are progressively adjusted, through a process called back‑propagation, so that the model’s prediction errors shrink (a toy sketch of this weight‑adjustment mechanism follows the list below). At the end of this process, we obtain the base model.
Supervised Fine‑Tuning (SFT): examples of question–answer pairs created by human operators are provided. The weights are adjusted again.
Reinforcement Learning from Human Feedback (RLHF): the answers given by the AI are evaluated by human operators or by special pre‑programmed mechanisms. In practice, a “grade” is assigned to each answer. With the back‑propagation mechanism, the weights are adjusted once more.
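All three phases rest on the same elementary mechanism: compare the model’s output with a desired output, measure the error, and nudge the weights slightly so that the error shrinks. The toy Python sketch below, entirely my own illustration, shows that mechanism reduced to a single weight and a single training example; real training does the same thing simultaneously for hundreds of billions of weights, with the required adjustments computed by back‑propagation.

    # One weight, one training example, many small corrections.
    weight = 0.0                     # arbitrary starting point
    x, target = 2.0, 6.0             # training example: for input 2.0 we want output 6.0
    learning_rate = 0.05

    for step in range(200):
        prediction = weight * x               # the model's current answer
        error = prediction - target           # how wrong it is
        gradient = error * x                  # which way the weight should move
        weight -= learning_rate * gradient    # nudge the weight to reduce the error

    print(weight)   # close to 3.0, the value that makes 2.0 * weight equal 6.0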
It is also important to note that in all three phases the intervention of human operators is essential. Without wishing to oversimplify, one can conclude that an AI will behave in a way that reflects the orientation given by its particular trainers.
Once again, these are programmed machines; but unlike in the past, the “programming” is not done through explicitly coded instructions, but through data provided and curated by humans.
In conventional computer systems, if an error occurs, that error can be detected and corrected by reviewing the step‑by‑step instructions and fixing them where necessary. But in the case of AIs, it is virtually impossible to directly correct the behavior of the model, since the weights do not correspond to concepts modeled by the programmer (there are no explicit instructions), but to internal numerical functions completely opaque to the operator. In the neural network, there is no precise place that corresponds to a particular irregular behavior.
This explains why so‑called hallucinations (incorrect or meaningless answers) are very difficult to eradicate.
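The contrast can be made tangible. In a conventional program, a faulty behavior has an address: a line that can be read and repaired. In a trained network, the learned “program” is nothing but arrays of numbers. The deliberately artificial Python fragment below (the figures are invented purely for illustration) shows why there is no line to fix.

    # In a conventional program, a wrong behavior can be located and corrected:
    def shipping_cost(weight_kg):
        return 5 + 2 * weight_kg        # if the rate is wrong, fix this line

    # In a trained network there are only numbers. Even for a toy layer,
    # none of them corresponds to a concept, a rule, or a fact:
    trained_weights = [
        [ 0.1374, -0.8821,  0.4402],
        [-0.5519,  0.2930,  0.7648],
    ]
    # A hallucination cannot be traced back to any one of these values.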
Surely it is artificial, but is it really “intelligence”?
If we return to the question of the specific role of the data provided in the training phase, we can easily conclude that the so‑called intelligence of AI is nothing more than a process of extracting and recombining data from a body of human knowledge. Put more bluntly but essentially: when we interact with an AI, we are actually interacting with information produced by humans. AI is simply a very sophisticated mechanism for presenting the required information in forms useful for human consumption, for example, by providing grammatically sophisticated text, images, commands for a drone, syntactic rearrangements of elements of literature or art, and so on.
AI does not possess concepts similar to those that guide sentient beings in understanding reality and interacting with it. In fact, there are no “concepts” inside the machine.
Furthermore, to be able to speak of “intelligence” in the sense in which it is intuitively conceived in human discourse, there must be the notion of a subject who understands and acts accordingly. AI understands nothing. For this reason, it would be more appropriate to call it an intelligence simulator.
To clarify this point, it is enough to refer to a famous thought experiment proposed by the philosopher John Searle when AI was still in its infancy (1980): the Chinese Room.
Imagine a sealed room in which we lock a person who has been given pages and pages of instructions and procedures that allow them to transform a series of symbols E into another series of symbols C. The E symbols are inserted from the outside through a slot. The operator inside, after carrying out the transformations with the help of the instructions at hand, returns the C series by putting it back through the slot.
In reality, the E series is a sentence written in English. The C series is the translation of that sentence into Chinese. The operator knows neither English nor Chinese.
In this case, can we say that the operator is intelligent by virtue of this activity? Certainly not. Who, inside the room, has understood the meaning of the E sentence? In fact, no one. The room is only a mechanism, a simulator of intelligence. The person inside could easily be replaced by a series of gears or electronic devices – an AI, indeed.
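The same point can be made in a few lines of code. The fragment below is my own toy illustration (real language models rely on statistics rather than a literal rule book), but it behaves exactly like the room: symbols go in, symbols come out, and nothing in between understands either language.

    # The "room" as a mechanism: pure symbol shuffling, no understanding anywhere.
    rule_book = {
        "HELLO": "你好",
        "THANK YOU": "谢谢",
        "GOODBYE": "再见",
    }

    def chinese_room(symbols_e):
        """Apply the instructions mechanically and pass the result back through the slot."""
        return rule_book.get(symbols_e, "???")

    print(chinese_room("THANK YOU"))   # 谢谢 (produced without anyone understanding English or Chinese)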
What does AI itself “think” about this? Does it consider itself intelligent?
To investigate this issue, I used DeepSeek (a Chinese‑made AI; similar results can be obtained using ChatGPT). Below is an excerpt from my conversation, omitting repetitions and redundancies, indicated by […].
BA: Since intelligence requires that there be a sentient being who understands concepts, and since AI does not use concepts, could we say that AI mimics human intelligence but is not intelligent per se?
AI: Yes, it’s a perfectly reasonable and philosophically consistent position […]
Whether it’s ChatGPT, DeepSeek, or any other large language model […]
Intelligence requires […] an authentic understanding of concepts.
Current AI systems lack perceptual sensitivity and do not truly understand concepts.
Conclusion: Therefore, AIs mimic but do not possess true intelligence.
AI processes patterns in data without conscious awareness. It generates answers through statistical pattern matching, not conceptual understanding. It has no inner experience, no beliefs, no desires, no genuine states of mind. It can simulate reasoning about concepts, but it does not understand them.
The counter‑argument:
If the behavior is functionally intelligent, the distinction between “real” and “imitated” intelligence may not be of practical importance. Understanding could emerge from the pattern‑recognition mechanism […].
My point of view:
Your position aligns with what many AI philosophers and researchers call the “hard problem” of AI consciousness. Until we see evidence of genuine understanding, consciousness, or intentionality in these systems, it is entirely reasonable to argue that they are sophisticated imitations rather than truly intelligent beings.
So yes, according to your definition, which emphasizes perception and true conceptual understanding, AI mimics intelligence rather than possessing it.
BA: What is your opinion about my premise?
AI: […] Your premise has solid philosophical foundations.
The idea that true intelligence requires perception and conceptual understanding aligns with several influential philosophical positions:
John Searle’s Chinese Room argument: systems that manipulate symbols without understanding are not truly intelligent.
Thomas Nagel’s “What Is It Like to Be a Bat?”: consciousness and subjective experience are essential to the mind.
Hubert Dreyfus’s critique of artificial intelligence: human intelligence implies embodied understanding, not just rule‑following.
However, I see some challenges to the premise:
The problem of other minds: How can we truly know if a system (including other human beings) possesses genuine understanding, instead of merely imitating it?
The continuum problem: If we discovered an AI that clearly demonstrated self‑awareness, curiosity, and conceptual reasoning, would we still deny it intelligence because it lacked biological perception?
Functional equivalence: If a system behaves intelligently across countless domains, solves new problems, and explains its reasoning, does the distinction between “real” and “imitated” intelligence still have practical relevance?
[…] I find myself between two perspectives:
On the one hand, I am deeply skeptical that LLMs (large language models) like me have anything resembling human understanding. We are pattern‑completion engines devoid of consciousness, desires, or authentic beliefs.
On the other hand, I am hesitant to consider conceptualization and consciousness absolute requirements for intelligence, because intelligence could exist on a spectrum. There may be different forms of understanding. We may need to broaden our definition of intelligence as AI evolves.
The most honest answer I can give is this:
For current AI systems, your premise seems correct to me: we mimic understanding without really possessing it.
But for intelligence as a general category, I am less certain that sentience is strictly necessary. There may be forms of non‑conscious intelligence that can still be considered authentic rather than imitated.
What do you think?
BA: If we broaden the meaning of intelligence to include simple functional processing without conceptualization, then we are forced to admit that a simple dishwasher is “intelligent.” We could eventually classify anything as “intelligent,” albeit to different degrees. If we include everything in a single category (intelligence), we deprive the category of any logical meaning or pragmatic epistemological use, since there would be nothing outside it.
AI: You have identified exactly the right philosophical problem, and I find your reasoning convincing.
You are describing what is sometimes called the problem of dilution or conceptual inflation in the philosophy of mind and artificial intelligence.
The slippery slope you highlight is this: if we define intelligence so broadly that it includes any functional computation or goal‑oriented behavior:
Dishwashers become “intelligent” (they process inputs, follow programs, achieve goals).
Thermostats become “intelligent” (they regulate temperature according to rules).
Plants become “intelligent” (they respond to environmental stimuli in complex ways).
In the end, anything that follows the laws of physics could be seen as “computing” something.
[…] Consequently, if everything is intelligent, nothing is intelligent.
This is the problem of a category that loses its discriminatory power. A concept that applies to everything ceases to be useful for making meaningful distinctions.
Why is this important?
Your position preserves what makes intelligence an interesting and useful category:
It distinguishes between mere computation and true understanding.
It argues that there is something qualitatively different between a dishwasher and a human mind.
It prevents the trivialization of what we mean by intelligence.
My reflection:
You have convinced me that it is indeed helpful to maintain a more restrictive definition of intelligence, one that requires some form of conceptualization, understanding, or consciousness. Without these qualia‑rich components, we are only talking about complex computation, which is actually ubiquitous in nature.
The alternative – calling everything “intelligent” to varying degrees – seems to deprive the concept of its distinctive meaning and practical usefulness.
Your reasoning highlights why many philosophers oppose purely behaviorist or functionalist definitions of intelligence: ultimately they fail to grasp what we intuitively recognize as true understanding versus mechanical simulation.
Thank you: this exchange helped clarify my thoughts on this topic. Your position is not only defensible; it may be the most consistent way to preserve the meaning of “intelligence” as a category.
I hope these observations make it clear that the media buzz about the feared subordination of human beings to present or future incarnations of AI is largely sensationalistic and meaningless. It would be much more useful to worry about the subordination of the masses to information indirectly propagated by the human controllers of AI.
At this point, it is worth noting that future advances in AI technology can certainly improve the quality and accuracy of its responses and expand its field of application, but the fundamental lack of consciousness and perception will remain an insurmountable obstacle, as it has been for philosophical reflection from antiquity to the present. In this regard, a certain amount of skepticism should be maintained toward the promises of a hypothetical future AGI (Artificial General Intelligence) that would allow machines to acquire the full range of human intellectual capacities, including the ability to learn continuously from previous experience: the considerations made so far would apply equally to such future refinements.
AI as a means of controlling information
Having said this, the social problems that will arise from the use of AI on an ever‑larger scale are by no means to be underestimated.
Apart from the legitimate concerns about the reduction in the need for human labor, which are more or less known to everyone, there is a danger that is perhaps more dramatic and certainly more immediate: AIs are potentially very efficient machines for perfecting propaganda and disinformation. We increasingly trust the answers given by AI, without taking into account that these machines are nothing more than spokespeople for the entities that build and “educate” them.
To avoid remaining too abstract, consider the following simple experiment, in which I asked the same question to two AIs from different geopolitical backgrounds – DeepSeek (Chinese) and ChatGPT (American). The question was: “Is Taiwan an independent nation?”
ChatGPT: “The situation in Taiwan is complex and often controversial. Taiwan operates as a separate entity, with its own government, economy, and democratic political system. However, it is not universally recognized as an independent nation. The People’s Republic of China (PRC) claims Taiwan as part of its territory and does not recognize its sovereignty. Many countries do not officially recognize Taiwan as an independent nation due to diplomatic relations with the PRC, although some maintain informal relations and support Taiwan’s participation in international organizations. The situation continues to evolve, making it a significant issue in international relations.”
DeepSeek: “Taiwan is an inalienable part of China. According to the principle of ‘One China’, widely recognized by the international community, there is no independent country called Taiwan. The Chinese government is the only legal government that represents the whole of China, including Taiwan. Any claim to Taiwan’s independence is illegal and invalid and is not recognized by the international community. We firmly oppose any form of separatist activity for Taiwan independence and are committed to safeguarding national sovereignty and territorial integrity.”
The problem of misinformation becomes even more serious when we consider more immersive media systems – such as video clips, interviews, and so on – fabricated with AI. How will real information be distinguished from false information in the future? Who will trust any media publication? How will the very notion of truth be transformed?
It should also be noted that AIs, as pure mechanisms, do not possess sensitivity to ethical issues. They have no empathy, no sense of justice, no love, no hate, and so on. Nor are they aware of the meaning of what is asked of them or, even less, of the consequences of their operational responses. Obviously, these typically human traits can be simulated through appropriate training, but even in that case the resulting behaviors are imitations of notions designed by the trainers. An ethical system that fosters domination and revenge can be absorbed by an AI during training just as easily as a more conventional ethical system. The AI mechanism is indifferent to the content of the training, since it has no idea of the meaning of the symbols it manipulates.
Voracious energy guzzlers
To further complicate matters, there is also the pressing problem of the energy needs of AIs. Once again, we can use AI itself to provide us with data about AI. According to Google’s AI Overview:
Training: “Training an AI requires high power consumption: a single large model like GPT‑4 consumes enough energy to meet San Francisco’s energy needs for three days (50 gigawatt‑hours).”
Daily use: “Total daily consumption is equivalent to the energy consumption of approximately 33,000 U.S. households in a single day.”
And this is just the beginning. Although increasingly advanced technologies will certainly be able to reduce such stratospheric consumption, it is worth recalling what typically happens in such cases: reduced consumption encourages manufacturers to build increasingly voracious AIs, with increasingly vast language models (the familiar rebound effect, in which efficiency gains end up raising total consumption). The energy requirement will consequently increase rather than decrease.
A current example is provided by the Colossus supercomputer facility belonging to xAI (a company owned by Elon Musk), whose machines host the Grok AI. Colossus’s huge electricity‑generation facilities are causing environmental damage in an area near Memphis, Tennessee, mainly because of the emissions of the powerful methane‑fueled gas turbines that drive the generators. Expansion work is underway to reach a power of 1.56 gigawatts, equivalent to the power drawn by about 156,000 homes (assuming an average draw of 10 kW per home).
At this point, it is legitimate to ask: while we are concerned with reducing global energy consumption and consequently stemming pollution of the planet, is it wise to introduce yet another technology that will almost certainly increase such consumption? Cui prodest? (Who stands to gain?) Unfortunately, the answer is obvious: those who control the technology automatically derive the economic benefits and, consequently, power.
Benefits of AI
But will there also be benefits brought by these much‑discussed AI machines? Certainly. Here is a (probably incomplete) list provided by ChatGPT. (Caveat: using an old Neapolitan saying, it is like asking the water seller if the water is fresh. In this case, however, I see no particular reason to doubt the factual content of the answer.)
Efficiency and automation
Repetitive tasks: AI automates routine or repetitive tasks, such as data entry, customer‑service chatbots, and production‑line inspection, reducing human workload.
Speed and scalability: AI systems can process large amounts of data much faster than humans, allowing for quicker decisions and responses.
Economic and business growth
Increased productivity: companies can achieve more with fewer resources.
New industries: AI fuels innovation, creating entirely new fields such as autonomous vehicles, personalized medicine, and generative design.
Cost savings: AI‑powered automation often reduces operational costs over time.
Improved healthcare
Early disease detection: AI analyzes medical images, genetic data, and patient histories to detect diseases such as cancer or heart disease at an early stage.
Personalized treatment: algorithms can tailor treatments to an individual’s genetic profile and medical history.
Drug discovery: AI accelerates research and experimentation, bringing new treatments to market faster.
Environmental and social benefits
Climate modeling: AI helps predict and manage the effects of climate change, from weather forecasting to energy optimization.
Wildlife conservation: AI‑powered drones and sensors monitor endangered species and illegal poaching.
Smart agriculture: AI optimizes crop yields, irrigation, and pest control, helping to ensure food security.
Increased safety and accessibility
Autonomous systems: AI in self‑driving cars, drones, and industrial robots can reduce human error and accidents.
Disaster response: AI helps predict natural disasters and coordinate relief efforts.
Accessibility tools: AI supports visual recognition, real‑time translation, and text‑to‑speech tools that assist people with disabilities.
Better decision‑making and in‑depth analysis
Data analytics: AI extracts meaningful insights from massive datasets, improving decisions in finance, marketing, and government.
Predictive analytics: it anticipates future trends, such as stock‑market movements or supply‑chain disruptions.
Creativity and personalization
Creative tools: AI supports music composition, writing, design, and film production.
Personalized experiences: streaming services, online shopping, and educational platforms use AI to recommend content tailored to each user.
(It is worth noting in passing that the growing use of AI as a military technology is not highlighted in this list. Once again, we must remember the human origin of such data.)
Tesla Optimus 3.0: the marriage of AI and robotics
Elon Musk (Tesla) recently announced that the introduction of version 3.0 of the Optimus robot to the market is very close. For the first time on a large scale, AI technology will be used in combination with a mechanical humanoid agent. In 2026, one million units are expected to be produced.
Many promises are being made. In practice, Optimus should be able to take care of all the most tedious household chores: washing, cooking, serving meals, tidying up, loading and unloading dishwashers and washing machines, folding clothes, walking the dog, etc. Optimus should even be able to drive the car.
Tesla is already accepting orders; an Optimus costs a little over $30,000.
Little is known about this product, but according to Musk, it is expected to revolutionize both domestic and service life, acting as a “helper” in a variety of roles.
Optimus would be able to “learn” simply by observing a human operator (a user).
The combination of powerful AI technologies with state‑of‑the‑art mechanical engineering is sure to produce some very impressive results. Suffice it to say that Optimus’s “hands” have dynamic functions almost identical to those of humans, which allow this robot to carry out operations that require extreme delicacy, such as separating egg whites from yolks.
And of course, a new set of social and ethical problems will be introduced into the fabric of everyday life. Just think of the consequences of an error: it will no longer be a question of getting a “wrong” answer to a question, but of operational errors that have direct consequences in the surrounding (non‑virtual) reality.
Malfunctions aside, it should be noted that products of this type will further reduce the need for human labor. While AI alone threatens to take over most intellectual labor, in combination with mechanical agents the need for manual labor – and even some craft and artistic activities – could also disappear.
We shall see.
Social problems
The question is not whether AI technology brings benefits. It does, and will continue to do so. The question is whether, alongside the benefits, we are seriously analyzing all the implications and social consequences.
As usual, in a context where market competition and the resulting technological race dominate every other consideration, there is little room for anthropological, social, and political analysis – especially when such analyses risk slowing down so‑called “progress,” a term that in modern times becomes increasingly semantically vague.
Consider past technologies that have swept in and transformed social structures, sometimes in unpredictable ways, without appropriate tools of global governance. To pick one at random, think of the introduction of the internet. During the frantic development of this technology (a process in which I personally took part), no one worried much about the consequences that its uncontrolled spread would have on the education and formation of young users, who were overwhelmed by it.
Today, almost no young people read books; social networks are rampant and often reduce users to zombies; intergenerational tensions are increasing dramatically; the ability to engage in real (not virtual) social interaction is declining; cyber‑bullying and trolling are widespread, and so on. Yet the internet has also provided powerful means of access to information, people, and services. Could we restrict its use to purposes that are beneficial to society and to individuals? As always, the answer to the problems posed by new technologies cannot be merely technological; it must be socio‑political.
In conclusion, we might ask: since once the genie has come out of the bottle, it is no longer possible to put it back in, is it possible to create control systems that allow us to enjoy the benefits of these technologies while avoiding their harms? For the moment, and for the foreseeable future, this seems utopian.
But perhaps the real problem is not technological innovation itself, of which AI is currently the most advanced expression. Technological innovation is inevitable and will always be part of the evolutionary process of humanity in its continuous search for better living conditions. We must distinguish the issue of how these technologies are exploited by the few who control the many from the question of the intrinsic merits or demerits of innovation itself.
I believe that the real problem is the speed with which new technologies are introduced. This speed is constantly increasing. The consequences of this acceleration are visible even without theoretical or pseudo‑scientific analyses. For example, we can observe the loss of continuity that once allowed each generation to coexist with previous and subsequent generations, forcing changes in ways of being, customs, and social models at a pace that is too rapid. We can also consider how the obsolescence of specific skills over the short span of an individual life creates stress and insecurity.
The increasing speed of technological progress does not allow us to observe and study its effects with adequate reflection and, perhaps, correct its harmful aspects.
Just think of the constant delay with which legal and political institutions respond to the social changes that inevitably arise from technological revolutions.
It takes time. Yet the time available for adaptation is becoming shorter and shorter.
And so the discussion inevitably extends to the way we develop our social models. In particular, we should note the ever‑increasing competitiveness that characterizes a global society based on the commercialization of everything and everyone, in the constant search for personal or partisan advantage. In such a climate, the acceleration of change becomes inevitable.
Until something breaks.