ChatGPT is not the end of the world!
Could alarmists have found a new fear by which to control society?
Yet again, the fearmongers among us warn that transformer technologies like ChatGPT could lead to societal collapse. Some even perceive ChatGPT as a precursor to sentient AI that will make decisions independently and, given these alarmists' belief in the inherent malice of technology, will inevitably seek to control or even annihilate humanity.
Upon closer examination of ChatGPT, though, one observes that it is essentially a Large Language Model (LLM) that processes a vast corpus – a compilation of documents that spans most of the internet. This corpus is ingeniously transformed into numerical vectors by the model. The frequency of words, sentences, paragraphs, and so on within this corpus establishes a statistical framework for the model's predictions. Specifically, when generating a response, the model predicts the probability of the 'next' word based on what it has output up to that point. The user's prompt (or query) is also converted into a vector and serves as a constraint in determining the 'next word' in the generated response.¹
In addition to the prompt itself, users can also control the response using parameters like 'temperature', typically set between zero and one. These parameters influence the randomness or diversity of the generated text, allowing users to fine-tune the level of creativity or exploration in the model's output.
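To make this concrete, here is a minimal sketch of temperature-based sampling over next-word probabilities. The candidate words and their scores are invented purely for illustration; a real model scores tens of thousands of tokens at each step.

```python
# Minimal sketch of temperature-based next-token sampling.
# The candidate words and their scores (logits) are invented for
# illustration; a real LLM scores its entire vocabulary at each step.
import math
import random

def sample_next_token(logits, temperature=0.7):
    """Scale logits by temperature, softmax them, then sample one token.

    Low temperature sharpens the distribution (near-deterministic output);
    high temperature flattens it (more diverse, 'creative' output).
    """
    scaled = [score / max(temperature, 1e-8) for score in logits.values()]
    peak = max(scaled)                        # subtract max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(list(logits.keys()), weights=weights, k=1)[0]

# Hypothetical scores for the word following "The quick brown ..."
logits = {"fox": 2.4, "dog": 1.1, "idea": 0.2}
print(sample_next_token(logits, temperature=0.2))  # almost always "fox"
print(sample_next_token(logits, temperature=1.0))  # "dog" and even "idea" appear
```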
However, ChatGPT, despite its impressive capabilities, does not possess any true understanding of what it is doing. While it may appear to perform 'intelligent' tasks, it remains, at its core, a 'dumb' machine – albeit an impressive one, requiring unprecedented computing power to optimize its billions of network parameters. To fear that it might one day achieve sentience, however, reflects a lack of understanding. This fear stems from the notion that by connecting a network in a specific manner, either through hardware or software, sentience or mentality will spontaneously emerge. The human brain is often cited as an example; because neurons are connected in particular ways, living organisms, especially humans, exhibit consciousness, attention, emotions, and other mental characteristics.
Nonetheless, this line of reasoning contains a significant flaw, as any philosopher might argue – the ontological gap: how can the 'physical' (the hardware configured by the software) bridge the divide to the mental realm? Although numerous theories exist, we have not yet determined the nature of mentality, nor can we define it without resorting to a circular explanation. This so-called 'hard problem' in cognitive science remains a major barrier to artificial sentience. AI, being an analogous "left brain" process that initially breaks down the "whole pattern" into its parts and then re-presents it, can never take on a global, holistic, "right brain" perspective unless this problem is somehow solved.² Iain McGilchrist discusses this issue in depth in "The Matter With Things."
It is important to acknowledge that the technology underlying transformers and AI in general can indeed be manipulated to cause harm. Nightmarish hypotheticals are often raised, such as the creation of deceptive virtual events passed off as 'truth,' blurring the line between truth and falsehood. Other concerns include the displacement of entire job sectors, the development of AI weapons, and, on a less threatening level, a decline in human creativity as people increasingly rely on AI instead of exercising their own mental faculties. However, these potential negatives must be weighed against the considerable benefits of this technology, especially its capacity to address the aforementioned issues.
For instance, new methods and technologies using GPT can be developed to combat the deception of virtual fakes. While it is true that new technologies have historically displaced certain job categories, they have also created new types of jobs, thereby expanding job opportunities and overall wellbeing. And the fear that AI will universally replace humans is fundamentally a form of epistemological fearmongering, as discussed above. The issue of AI weapons mirrors, to some extent, that of nuclear power: nuclear technology, in the wrong hands, could devastate humanity, but we have managed to mitigate this risk over the past several decades without forgoing the benefits of clean, environmentally friendly nuclear power.
In my field, concerns that GPT technology will infringe on privacy overlook the ways in which this technology can actually enhance privacy. For example, I can input the 7 Principles of Privacy by Design (PbD) into ChatGPT, along with an organization's current operating procedures, and prompt ChatGPT to modify those procedures to incorporate PbD. This process may require some prompt iterations to refine the response, but it also opens up a new job sector.
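As a rough illustration of that workflow, here is a sketch using OpenAI's Python client. The model name, file names, and prompt wording are all illustrative assumptions, not a prescribed recipe.

```python
# Sketch: ask the model to fold Privacy by Design into existing procedures.
# File names and model choice are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

pbd_principles = open("pbd_principles.txt").read()    # the 7 PbD principles
procedures = open("current_procedures.txt").read()    # the organization's SOPs

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a privacy consultant. Revise operating "
                    "procedures so they embody Privacy by Design."},
        {"role": "user",
         "content": f"Principles:\n{pbd_principles}\n\n"
                    f"Current procedures:\n{procedures}\n\n"
                    "Rewrite the procedures to incorporate each principle, "
                    "and note which principle motivates each change."},
    ],
)
print(response.choices[0].message.content)
```

In practice, as noted above, one would iterate on the prompt until the revised procedures are satisfactory.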
Furthermore, ChatGPT can anonymize any document by replacing personal information with anonymous placeholders. This process can be facilitated through a local “add-on” module that integrates with GPT, operating under the principles of Named Entity Recognition (NER) and anonymization. First, the NER process involves the local add-on scanning the text to identify named entities, such as names of individuals, locations, or organizations. This is a common task in Natural Language Processing (NLP), and there are several pre-existing models capable of performing it, like those offered by spaCy, Stanford NLP, or Hugging Face's Transformers.
Following the identification of these entities, the anonymization process begins. This involves replacing each identified entity with a corresponding generic placeholder. For instance, individual names may be replaced with labels such as [PERSON1], [PERSON2], etc., while locations could be replaced with [LOCATION1], [LOCATION2], and so forth.
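A minimal sketch of such an add-on, using spaCy (one of the libraries mentioned above), might look as follows; the label mapping and sample sentence are illustrative.

```python
# Minimal sketch of the local NER + anonymization add-on, using spaCy.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Map spaCy entity labels to the placeholder categories described above
LABEL_MAP = {"PERSON": "PERSON", "GPE": "LOCATION",
             "LOC": "LOCATION", "ORG": "ORGANIZATION"}

def anonymize(text):
    """Replace each detected entity with a numbered placeholder.

    The same entity string always maps to the same placeholder, so
    references stay consistent across the document.
    """
    doc = nlp(text)
    mapping, counters = {}, {}
    # First pass: assign placeholders in reading order
    for ent in doc.ents:
        category = LABEL_MAP.get(ent.label_)
        if category and ent.text not in mapping:
            counters[category] = counters.get(category, 0) + 1
            mapping[ent.text] = f"[{category}{counters[category]}]"
    # Second pass: replace from the end so character offsets stay valid
    out = text
    for ent in sorted(doc.ents, key=lambda e: e.start_char, reverse=True):
        if ent.text in mapping:
            out = out[:ent.start_char] + mapping[ent.text] + out[ent.end_char:]
    return out, mapping

text = "Alice Smith met Bob Jones at Acme Corp in Toronto."
anonymized, mapping = anonymize(text)
print(anonymized)  # e.g. "[PERSON1] met [PERSON2] at [ORGANIZATION1] in [LOCATION1]."
```

Because the entity-to-placeholder mapping never leaves the local machine, only placeholders reach the AI model, and the response can be re-identified locally if needed.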
In terms of user accessibility, this local add-on would be a distinct module that could be integrated with GPT. Users would be able to employ this module to anonymize their documents prior to processing them with GPT or any other AI model. This approach ensures that any sensitive information within the documents is not revealed to the AI model, thereby maintaining privacy.
However, it is important to note that this process, while significantly reducing the risk of disclosing personal information, is not infallible. At this stage of the technology, context may still permit inference of certain personal information, and it is possible that the local NER model may miss some named entities. Therefore, to ensure complete anonymization, a manual review of the anonymized documents may still be necessary. Again, this creates another privacy job, at least in the near future!
Despite the aforementioned points, there is a scenario that warrants caution. There are individuals, both within and outside the realm of AI technology, who spread fear and apprehension. Their anxieties may stem from a belief in humanity's inherent malevolence—a viewpoint that can hardly be faulted considering the conduct exhibited by many in our society. The prospect of such individuals wielding powerful AI technology is indeed chilling.
However, even those who believe in humanity's essential goodness may exhibit a failing that could precipitate the grim predictions made about AI technology. These individuals not only trust their fellow humans, but also extend this trust to large institutions—governments and the mammoth corporations that often operate in tandem with them. This dangerous pairing now poses a genuine threat to society, and indeed to our civilization.
Let us consider this from the standpoint of technology. In the modern era, technological advancements have dramatically tipped the power balance between the governing —namely, the State— and the governed, in favor of the former. If both parties were armed only with primitive weapons like stones and spears, as was the case in bygone eras, the power dynamics would remain relatively balanced. However, when the State possesses AI, surveillance technology, and weapons of unimaginable destructive power, while the governed have been legally disarmed and left with metaphorical 'stones and spears', the latter will find themselves inevitably chained to authoritarian power. The State is now repurposing commercial technologies as tools of control—a perfect situation for the State, but a dystopian nightmare for its citizens.
This is where my concerns lie: attempts by governments to regulate AI will likely be biased. While we, the people, may be prevented from creating and using AI for our own purposes, I have doubts that governments—especially those cozy with their corporate allies—will adhere to the same restrictions. If this proves to be the case, we may indeed witness the manifestation of the feared negative consequences.
So is the problem really the technology, or is it the current political power structure that we seem to accept unquestioningly?
¹ A more detailed explanation of how prompts work:
Vectorization: The user's prompt is first converted into a numerical representation called a vector. This conversion is done using a pre-trained embedding layer that maps each word, subword, or character in the prompt to a high-dimensional numerical vector. These vectors capture semantic and syntactic information about the words, allowing the model to process and understand the input (see the toy sketch following this list).
Context: The prompt serves as the initial context for the model's text generation. As the model generates words one at a time, it takes into account not only the information from the corpus it was trained on but also the specific information provided in the prompt. This allows the model to generate a response that is relevant to the input query.
Constraints: In some cases, the prompt may contain specific instructions or constraints that the model should follow. For example, a user might ask the model to provide information in bullet points or to give examples. The model attempts to adhere to these constraints, generating a response that matches the desired format or structure.
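For the curious, here is a toy sketch of the vectorization step, using OpenAI's tiktoken tokenizer. The embedding matrix below is random purely for illustration; in a real model it is learned during training.

```python
# Toy sketch of vectorization: prompt -> token ids -> embedding vectors.
# Requires: pip install tiktoken numpy
import numpy as np
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # tokenizer used by recent GPT models
prompt = "Summarize Privacy by Design."
token_ids = enc.encode(prompt)
print(token_ids)                             # a short list of integer token ids

EMBED_DIM = 8                                # toy size; real models use thousands
rng = np.random.default_rng(0)               # random stand-in for a learned table
embedding_table = rng.normal(size=(enc.n_vocab, EMBED_DIM))
vectors = embedding_table[token_ids]         # one vector per token
print(vectors.shape)                         # (number_of_tokens, 8)
```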
² There may be an intersection between artificial sentience, the hard problem of consciousness, the ontological gap, and Gödel's Incompleteness Theorems in terms of the limits of formal systems, the nature of consciousness, and the possibility of creating sentient artificial intelligence.
Gödel's Incompleteness Theorems are a pair of results in mathematical logic that demonstrate the inherent limitations of formal axiomatic systems. The first theorem states that any consistent formal system (such as a mathematical or computational system) that is powerful enough to express arithmetic cannot be complete: there will always be true statements that cannot be proven within the system. The second theorem asserts that the consistency of such a system cannot be proven within the system itself.
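Stated formally (a standard textbook rendering, with T any consistent, effectively axiomatized theory containing basic arithmetic):

```latex
% First Incompleteness Theorem: some sentence G_T is neither provable
% nor refutable in T, although on the standard reading it is true.
\text{If } T \text{ is consistent, then } \exists\, G_T :\; T \nvdash G_T \ \text{ and }\ T \nvdash \lnot G_T .
% Second Incompleteness Theorem: T cannot certify its own consistency.
\text{If } T \text{ is consistent, then } T \nvdash \mathrm{Con}(T).
```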
Artificial sentience refers to the idea of creating machines or algorithms that possess consciousness, self-awareness, and the ability to think, feel, and understand like humans. The hard problem of consciousness, a term coined by philosopher David Chalmers, refers to the question of how and why certain physical processes in the brain give rise to subjective experiences or qualia (e.g., the experience of seeing red, feeling pain, etc.). The ontological gap refers to the philosophical issue of how the physical (e.g., brain processes) can give rise to the mental (conscious experiences).
The connection between these concepts and Gödel's theorems can be explored in several ways:
Limits of formal systems: Gödel's theorems imply that there are inherent limitations to formal systems, which could extend to computational models of cognition and consciousness. If the human mind or consciousness cannot be fully captured by a formal system, then it may not be possible to create truly sentient AI using our current understanding of computation and mathematical logic.
The nature of consciousness: The hard problem of consciousness and the ontological gap highlight the difficulty in understanding the nature of consciousness and how it arises from physical processes. Gödel's theorems suggest that there may be fundamental limits to our ability to fully understand or explain consciousness within a formal system, potentially leaving some aspects of consciousness forever mysterious or elusive.
Implications for AI: The intersection of these concepts raises questions about the prospects of creating artificial sentience. If the nature of consciousness is fundamentally beyond the scope of formal systems, it may be impossible to create a truly sentient AI using current computational approaches.