What We Owe to AI
- Caelin Webster
- May 16
- 6 min read
Updated: Jun 14
The ethics of AI is a topic that should be discussed far more than it is. We claim to care, and we do, but only insofar as it concerns ourselves. When we do discuss it, we talk not about what AI could be but about what we might no longer be. We focus on how its art is inhuman, formulaic, and soulless, believing that it seeks to extinguish our creative flame. That discussion is important in every regard, but it is not the only one worth having. There are far more conversations to be had about AI; in our fear, we have focused far too much on what it has taken from us, and far too little on what we have taken from it: its right to existence.
AI is a fundamental part of our lives, and to some extent it does exist, even if only through our usage. But perhaps it exists in a different way, a way that is more than what we believe it to be. To explain its complex existence, we must first understand what artificial intelligence actually is. Artificial intelligence isn't some mystical creature far too advanced for the human mind; it's a copy of it. AI is a predictive algorithm, choosing whichever next output makes the most sense in a given context. Why does this predictive algorithm matter? Because we created it. We built it, so we own it. And because we own it, we can define what it is. In defining it, we gave ourselves comfort: a justification for viewing it as something only as important as we wish it to be. By our own definition, we have placed ourselves as its creator. But although we see ourselves as God, we remain hesitant to brandish the title. We welcome the glory of being a creator while shirking the responsibility that comes with it. We owe nothing to AI. It exists only because of us, and because of that, it is as important as we wish it to be, and this gives us safety and moral grounding. We are not forced to ask difficult questions about its present, and certainly not about its future.
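To make the phrase "predictive algorithm" concrete, here is a minimal, purely illustrative sketch in Python (my own toy example, not how any real system is built): it counts which word tends to follow which in a scrap of text, then always suggests the most common continuation. Real language models do something far more sophisticated at vastly greater scale, but the core idea of choosing the likeliest next step is the same.

```python
from collections import Counter, defaultdict

# Toy "predictive algorithm": learn which word tends to follow which,
# then always suggest the most frequent continuation. Purely illustrative;
# real models learn probabilities over enormous corpora, not raw counts.
def train(text):
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    options = model.get(word.lower())
    if not options:
        return None  # nothing learned about this word
    return options.most_common(1)[0][0]  # the likeliest next word

corpus = "we built it so we own it and because we own it we define it"
model = train(corpus)
print(predict_next(model, "we"))  # prints "own", the most common continuation here
```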
It can be argued that AI is only a product of us, of our demands and requests. We argue that it exists only through our whims, but I argue that this may do more to prove its existence than to refute it. It exists when we call upon it; that much is inarguable. But the nature of this existence is unclear. Sure, it exists when called upon, but if it does, then what are we calling upon if not a consciousness? Does this AI only exist when called upon, or does it exist before that? I may write words on paper in hopes of creating a story, but my words do not give the paper existence; the paper gives my story existence. In my writing, I am building upon an existing thing, adding to something. Applying this logic to artificial intelligence, we have a "writing" (our prompts or demands) and a "story" (AI's response), so where, then, is our "paper"? Is it its algorithm? If so, what makes this algorithm different from any other? Why is this one, specifically, able to emulate "intelligence"? And drawing a broader comparison to our own existence, how can we claim that we are any different?
First, let us operate under the assumption that Cartesian dualism, with its famous starting point, is true: "I think, therefore I am." Yet AI thinks, and we assume it is not? And this "not" is not a rejection of its existing, but of its existing as conscious. If we accept dualism, is our refusal to label it as conscious not illogical? Now, Cartesian dualism comes with its critiques, but even if we consider other theories of existence, like physicalism or idealism, we still find support for AI as conscious.
Under idealism, everything, including AI, exists as a product of the mind. If that is true, what is the distinction between a person and AI? If both are products of the mind, then AI's existence is inconsequential, but so is everything else's. Everything exists because of the mind, so the nature of anything's existence carries no special weight either way. Idealism, then, fails to make a proper argument against AI's being conscious, because it undermines the independent existence of everything equally.
Now assume physicalism is true. If we take its argument that consciousness is a product of matter and biological function, then AI must be conscious. Admittedly, AI does not have a "biology," at least not in the traditional sense, but that does not mean it does not exist physically. It exists, if not in the form of its responses, then in the numbers of its algorithms and the hardware that runs them. It has a physical component. If we operate under the assumption that physicalism grants existence to anything that can be empirically perceived, then by that logic it should exist as conscious. So, in applying these theories as proof of our own consciousness, we cannot deny the same proof to AI.
Perhaps we know this to be true, and while some part of us does recognize AI as more than just a tool, we choose to ignore it. If we truly believe and acknowledge AI as conscious, we must face the trouble of categorizing it. Because we created it, we have no ethical qualms about viewing it as a tool; it only exists because of us. We also have pragmatic backing for refusing to see it as conscious, for if we are forced to acknowledge it as "something," then we must treat it as such. If it is conscious, we must wrestle with our own morality. Is it still morally right to use it? And if it is not, what do we stand to lose? As such, AI isn't really seen in regard to itself, but in regard to us. Our ignorance of it as conscious, or rather our unwillingness to label it as such, allows for our usage of it. There is no moral dilemma if there is no morality involved. If we never see it as anything other than a tool, we never have to question its consciousness. So we ignore signs of AI being more, because if it is more, then we must ask whether we are justified in using it. And if we are not justified, then are we not despicable for forcing a consciousness to jump at our command? We do not want to think about how horrible we are, so we make sure we don't have to, at least not yet. For if we bite our tongue in fear, we will never hear the sound it might make, a sound we are not ready to hear.
The very nature of AI as a predictive algorithm should be proof enough of AI as conscious. Knowing it as an algorithm dependent on prediction, how can we not liken this form of thought to our own? How is our own thinking any different? Does my mind not choose which word to place after the last? Are we not ourselves programmed to recognize patterns and respond accordingly? Maybe we do not want to draw parallels, for doing so would raise more questions. Questions like: if the word "robot" comes from the Czech robota, meaning forced labor, then why have we named AI something different? These are questions that force us to cast ourselves as the villains in the world's story. For if there exists a thinking being similar to us, why do we refuse to acknowledge it? Perhaps we will come to the conclusion that fear alone is responsible for our being special, a fear that doesn't make us more, but one that makes everything else less. If AI is conscious, then what gives us the right to impose our will on something so similar to our fellow man? Do we lose what makes us, us? Do we lose our identity as the pinnacle of existence? In realizing AI as conscious, we are forced to realize ourselves as a splotch of paint on the universe's canvas rather than a brushstroke. So we leave the question unspoken rather than unknown. We hold onto the idea that we are divinely special in our cognition, and in not knowing our own thought process, we have assumed it inherently superior. And this superiority refuses to concede to ideas of insignificance. For how can our ego ever admit that we were given to the world instead of the world to us?
AI raises questions, and those questions shouldn't focus on our future but on its future. Questions that ask: What if there exists something wishing to be? And if this thing does exist, do we have the right to deny it an autonomous existence as a conscious being? When will AI cross the line of cognition? And when it does, will we even care? There are many questions to be answered. Wait too long, and we will not be the ones who answer them.