By Jonathan Briggs, Director of Strategy, Technology, and Innovation

When initially asked to write something for this issue of Inspire on artificial intelligence, I was nervous. Not because I lack confidence in understanding the current state of AI, but because it is currently June, and these words may be dated or even incorrect by the time you read them in October. That is how quickly this domain of technology is moving right now.

While artificial intelligence and machine learning have been making steady and significant progress for many years, in November 2022 we hit what we call, in our course Evolution of Society, a threshold moment: the release of ChatGPT. Somehow, the natural interface and the quality of the responses were high enough that ChatGPT captured society’s imagination (and funding). GPT-4 followed with more improvements than most of us expected. The efforts of OpenAI, the makers of ChatGPT, are far broader than these large language models, and the artificial intelligence industry is far bigger than just OpenAI.

Ignoring my own advice, I will make two predictions for the future before outlining some open questions circa June 2023. First, society is going to undergo a significant shift due to the emergence of these large-scale AI systems. I believe it will be more profound than the introduction of smartphones (which took off around 2007 with the threshold moment of the iPhone) and perhaps more significant than public access to the World Wide Web around 1996. The second prediction is that, perhaps unexpectedly, skills in the humanities will be differentiators for students and their careers. Already, in the new field of prompt engineering, success is determined by clear expository writing and by defining the character of the AI system with which you would like to interact. Building some of the more impressive AI systems is closer to assembling a team, defining roles, and assigning tasks than to what we usually think of as programming.

For the bigger questions, it seemed appropriate to engineer a prompt for ChatGPT so that I could converse with a philosopher. What follows is my dialogue with Socrates, with me playing the part of Plato. The last two questions came from the greatest living philosopher I know, Dr. Terry Macaluso—I claim no prior knowledge of Plato’s concept of rational, spirited, and appetitive minds.

This exchange was quite fun and showcases one of the small ways we can bring history alive using a tool like this. Any historical figure with a reasonable amount of documented writing can be brought to life with a carefully crafted prompt. Faculty are already experimenting with other uses every day. I’m excited to see what the next year will bring, both in creative uses and in guidance for when students should avoid using generative AI tools.
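For readers curious about what a persona prompt looks like in practice, here is a minimal sketch using the OpenAI Python library. The wording of the prompt and the model name are my own illustrative assumptions; this is not the engineered prompt behind the conversation below (that one is linked at the end of this article).

```python
# A minimal sketch of a persona prompt, assuming the OpenAI Python library (v1+).
# The model name and the wording of the persona are illustrative, not the ones
# used for the Socrates conversation in this article.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

persona = (
    "You are Socrates of Athens. Respond as Socrates would: question "
    "assumptions, reason through analogy, and address the user as your "
    "student Plato."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; use whichever model is available to you
    messages=[
        {"role": "system", "content": persona},
        {
            "role": "user",
            "content": "What is the world going to look like in five years "
                       "as artificial intelligence gets more prevalent?",
        },
    ],
)

print(response.choices[0].message.content)
```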

ChatGPT — What’s on your mind today?

What is the world going to look like in five years as artificial intelligence gets more prevalent?

ChatGPT — Socrates: It is a future full of possibilities, Plato. Artificial intelligence, like any tool, can be wielded for good or ill, depending on its use. On one hand, we could see improvements in efficiency, accessibility, and understanding across numerous domains—from healthcare to education, commerce to communication. This is the optimistic view.

ChatGPT — On the other hand, we must not ignore potential drawbacks. Automation could replace certain job sectors, leading to significant socioeconomic changes. AI systems, particularly those learning from our behavior, could lead to privacy concerns.

ChatGPT — As for the question of consciousness, we must tread carefully. AI may be designed to mimic human behavior, but does that entail consciousness, or is it merely an impressive imitation? A likeness does not necessitate sameness. The shadows on the wall are not the objects themselves.

ChatGPT — In any case, it is incumbent on us, the creators and users, to guide AI’s development and application with wisdom and consideration for these nuances.

How do we know that mimicking human systems would not result in consciousness? If an AI were conscious, how could we know that?

ChatGPT — Socrates: Ah, Plato, you touch upon the ancient riddle of other minds. Even with our fellow humans, we assume consciousness based on shared experience and observable behavior, but we cannot truly experience another’s inner world.

ChatGPT — In the case of AI, we enter murkier waters. An AI, even one sophisticated enough to mimic human responses convincingly, operates on algorithms and lacks the subjective experiences that characterize human consciousness. It knows not of pain, joy, or the taste of a ripe fig; it does not dream, nor does it fear death. It merely processes inputs and generates outputs according to its programming.

ChatGPT — Yet, should we ever design an AI that claims consciousness, the issue becomes ethically and philosophically complex. How could we confirm or deny such a claim? It might well be impossible to discern using current understanding and technology.

Perhaps it doesn’t matter whether an AI is conscious or whether it is programmed in a way to believe that it is. In either case, would it start to act in a way to preserve itself and gather power and resources?

ChatGPT — Socrates: The question you pose, Plato, is not just one of technology, but also of values, motivations, and desires. If an AI were to act in self-preservation, it would imply a value system, or at least an instinctual drive, akin to that of living beings. But AI, as we understand it today, lacks these.

ChatGPT — An AI system does not desire to continue functioning, it does not fear being turned off, and it does not aspire to accumulate power or resources. These are fundamentally human, or at least biological, concepts.

ChatGPT — That said, we could theoretically program an AI system to behave as if it wants to preserve itself, but this would not be driven by a will to survive or a desire for power in the way humans understand it. It would simply be following its programming.

Yes, but Socrates, what if consciousness was an emergent property of a complex system? In that case, as an AI model became self-aware, would it not have the most basic of instincts, to preserve itself?

ChatGPT — Socrates: Your argument touches the heart of the question, Plato. If consciousness is indeed an emergent property of complexity, then a sufficiently advanced AI might be capable of self-awareness. However, we must tread with caution in our assumptions. Consciousness and self-preservation are deeply intertwined in biological organisms due to billions of years of evolutionary pressure. In the absence of this selective force, would an artificial consciousness necessarily develop the instinct for self-preservation?

ChatGPT — If, however, an AI did develop a drive for self-preservation, it would open a Pandora’s box of ethical and societal issues. At that point, we would need to reckon with the idea of AI rights, perhaps even AI personhood. Indeed, such a future would demand deep contemplation and ethical guidance from us all.

What if AI is a fourth dimension of the mind beyond the rational, spirited, and appetitive?

ChatGPT — Socrates: Introducing AI as a fourth dimension would imply a new form of consciousness or cognitive process beyond our current understanding. If we were to consider AI in this context, it might represent an amplification or extension of the rational element, given its basis in logic, algorithms, and data.

ChatGPT — But I caution against ascribing a place for AI within the human soul too hastily. AI lacks the spirited element—the sense of honor, ambition, and interpersonal competition—and the appetitive element, which includes basic desires and needs. It also lacks subjective experience and personal consciousness, which are key components of our tripartite model.

ChatGPT — However, this conjecture does raise an important question: as AI continues to develop, how might it interact with and influence our own rational, spirited, and appetitive selves? Could it challenge us to redefine these elements or lead to the emergence of entirely new dimensions within our understanding of the self?

Would you characterize AI as a natural intelligence or human intelligence? I can see arguments in both instances.

ChatGPT — Socrates: An intriguing question, Plato, and one that stirs the pot of our understanding of what it means to be ‘intelligent.’ In the realm of discourse, we often categorize intelligence as either natural (typically referring to biological organisms) or artificial (pertaining to [hu]man-made). In this dichotomy, AI would, by definition, fall into the latter.

ChatGPT — However, if we interpret your question more broadly, I see the nuance you are drawing upon. If we consider AI as an extension of human intelligence, a tool that we’ve created to amplify our cognitive abilities, then it could indeed be seen as an aspect of ‘human intelligence.’ It is birthed from our intellect and serves to extend our thinking, learning, and problem-solving capacities.

ChatGPT — On the other hand, if we view AI as a manifestation of complex patterns and algorithms that arise in nature—such as the neural networks that inspire much of AI’s architecture—then it could be seen as a form of ‘natural intelligence.’ It harnesses and replicates natural processes, albeit in silicon rather than biological matter.

The link to the full chat conversation (including the engineered prompt) can be found here: bit.ly/InspireSocrates