This article is by Dr. Daniel Kuebler with The Purposeful Universe
Just like the steam engine, the automobile, and the internet, artificial intelligence stands to be the next great disrupter of society. It will transform the way we work, the way we educate, the way we do science and medicine, and the way we view social interactions.
The Large Language Model breakthrough
The latest breakthrough, large language models (LLMs), is the culmination of years of work developing increasingly sophisticated algorithms and curating mountains of data for AI programs to sort through and learn from. That research has produced the recent LLM releases from OpenAI, Google, and a host of others, which can generate in seconds answers to queries that would once have required hours or days of human research and review.
What are the ethical implications of ChatGPT?
With the release of ChatGPT, the most talked-about LLM, earlier this year, followed by its even more capable successor, GPT-4, OpenAI unleashed a firestorm of questions (and some answers) about the ethical implications, potential misuse, and limitations of AI systems.
As we continue to explore and integrate these powerful tools into various aspects of our lives, from education to healthcare to social media, it is essential that AI technology be harnessed responsibly and ethically, in a way that complements and enhances human capabilities without encroaching upon, or even undermining, our unique qualities.
Large Language Models in the classroom
As a college professor and scientist, I can already see that LLMs will significantly impact both my classroom and my research. Students can now “produce” essays in minutes, although current AI versions often write papers marred by circular reasoning and unenlightening examples (not unlike a typical undergraduate). As LLMs improve, educators will need to rethink writing assignments; the onus is now on the instructor to ensure that students draw on their own readings, words, and observations rather than on the chat box of an AI platform.
But these programs can also be tools for learning. Asking students to critique and expand on AI-generated essays can teach critical thinking skills and help them spot lackluster explanations or equivocal conclusions.
Artificial Intelligence as a tool to amplify human ingenuity
AI holds immense promise for scientific discovery, not to replace the human role in that discovery but to augment it. From the great Renaissance thinkers exploring the cosmos to the current generation of scientists probing the building blocks of matter and mapping genomes, the hallmark of the scientific enterprise has been the exploration of the unknown, the drive to understand the world around us and make sense of its myriad complexities. That drive is a uniquely human characteristic.
AI is merely a tool, albeit a powerful one, to aid science. It will not replace the role of humans in scientific discovery, but like Galileo’s telescopes, it will help us uncover more of nature’s secrets. AI allows researchers and physicians to see trends in data and outcomes previously overlooked, thus making breakthroughs attainable in shorter time frames than ever before.
But for all the promise of this new technology, it is also essential to recognize the aspects of human thought that AI will not be able to replicate. At the top of the list are self-awareness, morality, and creativity.
AI will never possess self-awareness
Artificial intelligence (unfortunately for any Terminator fans reading this) cannot replicate the human capacity for self-awareness. This distinct quality of consciousness allows us to understand our thoughts and emotions and to recognize ourselves as individuals. While higher-level organisms like canines and primates share conscious experiences with us, our far greater sense of self-awareness makes us unique (at least on planet Earth).
At this point, consciousness in general is a mystery that science has yet to solve; explaining how subjective experience arises at all is what philosopher David Chalmers has labeled “the hard problem.” Self-awareness is even less understood. Given how little we know about the source of consciousness, there is no reason to believe that advanced algorithms, faster processors, and more data will spontaneously, or even gradually, result in computers becoming self-aware.
Artificial Intelligence can never possess moral agency
Because morality is so deeply intertwined with conscious experience, an unconscious artificial intelligence will not possess moral agency (i.e., the ability to make and be responsible for moral choices).
While it is possible to program AI with specific moral guidelines, such systems will ultimately be limited to following predetermined rules. For us, morality is shaped by an innate sense of right and wrong and by continuing reflection on our actions and judgments, an understanding that draws on societal cues, interpersonal relationships, and day-to-day life in communities and groups. New data or training can certainly change an AI system’s responses to moral questions, but given its lack of self-awareness, it will never genuinely comprehend or engage in authentic moral reasoning, because it cannot experience emotions or empathize with the consequences of its decisions for others.
Artificial Intelligence lacks true self-expression
The ability to willfully create is also something that requires self-awareness. AI may be programmed to create art, poetry, or stories, but it lacks the innate drive to express itself. While AI-generated creations can be impressive and seemingly creative, they ultimately remain a reflection of human ingenuity rather than an authentic manifestation of the AI system's own desires, emotions, or experiences. This distinction highlights the importance of valuing and nurturing human creativity, even as we continue to develop and utilize advanced AI technologies.
An exciting future ... with great responsibility
Artificial intelligence will reshape the way we work and interact over the coming years. This new field will undoubtedly advance scientific discovery and human productivity if used properly. Nevertheless, we should not cede to AI what is rightfully ours. A blind calculus, no matter how advanced, can only mimic self-awareness, moral judgment, and creativity. Let us not be so dazzled by our new invention that we start believing it has a mind of its own — it doesn’t.
“Technology is nothing. What’s important is that you have faith in people.” – Steve Jobs