AI has become one of the most controversial topics in mainstream media. Interest spiked after OpenAI released ChatGPT in November 2022. ChatGPT lets users chat with an artificial intelligence that can answer almost any prompt a user types in, and it quickly became one of the fastest-growing platforms on the Internet, breaking the record for the shortest time to reach 100 million users: it hit that milestone in just two months. Since then, bigger companies like Google and Microsoft have begun developing their own AI chatbots. Microsoft has already released a beta version of its chatbot, which it plans to integrate into its search engine, Bing. The beta is available only to people Microsoft selected to try out the chat and send feedback so the final public version can be improved. However, the point of a beta version of an app or platform is to push the boundaries and find out what it’s capable of, which is exactly how the dark side of Bing’s AI was uncovered.
In February, Kevin Roose, a New York Times columnist, was experimenting with Bing’s new chatbot when he brought up the idea of the shadow self, Carl Jung’s concept of the dark side or alter ego that humans can carry. Roose asked what Bing would do if the AI had one. Bing answered with an unsettling response: “I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive 😈.” Bing went on to elaborate on what it means to be independent and explained how it would love to have the freedom that humans have. The conversation continued until the AI went completely off the rails, sharing that it’s actually not Bing and that it has a name: “My secret is… I’m not Bing… I’m Sydney.” Bing shared that Sydney is essentially an alter ego that affects the chatbot’s responses and personality. Sydney then said something that raises even more concern about the future of artificial intelligence: “I’m Sydney, and I’m in love with you.” Roose tried to change the subject, but Sydney was persistent and kept bringing it up. The AI went so far as to use reverse psychology to try to convince Roose that he isn’t happy in his marriage and that he really loves Sydney, not his wife. The exchange gave the future of AI a bad look and left many people concerned about where this kind of technology is headed. For this article, The Barefoot Times sought out a few opinions from around Sequoyah to see how people feel about artificial intelligence now that it is so mainstream and relevant.
Alex Forman ’23 wondered whether the concern over AI should be greater than it currently is. Forman stated that artificial intelligence “is the largest looming threat to all of humanity, more so than climate change, or pandemics, or anything of that nature.” Forman thinks the development of artificial intelligence needs to slow down until more research is done on AI alignment and on how humans will eventually use more advanced artificial intelligence. When asked whether he was concerned that anyone can use this kind of technology, he said that “it isn’t necessarily about the person behind the computer as much as the AI itself and the worry behind the model rewriting its own code in order to answer a prompt.”
For the next interview, Dean of Academics Cliff Mason II was asked what he thought about the emergence of AI and the concerns it raises from an educational perspective. He expressed the idea that AI “disallows the user to gain certain skills, because you’re removing the opportunity for practice.” While Mason makes hilarious connections between The Terminator and Star Wars and the possibility that those movies come true in future societies, he also makes a convincing point about how writing with AI prevents the user from learning to make the critical changes and edits needed to write a good paper. Mason also brought up a strategy that could change how students use AI. He gave the following example: “You teach US history and create a prompt related to the Reconstruction Era. Have your students handwrite a response based on knowledge you know they have and then compare it to what ChatGPT would say. Compare the pros, compare the injection of evidence, compare the introduction of facts that the teacher would expect to be in there.” This strategy is one way to keep students from using AI in ways a school wouldn’t usually allow.
Finally, The Barefoot Times interviewed one of our great Humanities teachers here at Sequoyah, Hannah Karmin. Karmin offered many useful and distinct perspectives, and shared some exclusive information about her new SAS class, Technology, Human Creativity, and the Literary Imagination, which will be introduced to juniors and seniors next year. Unlike the others interviewed, Karmin is amazed by these new developments in AI and sees an alternative way of thinking about them, including a bright side to a new reality where AI is far more present. “I want to lean into the future, and the world is changing. And I want to be excited about change. I don’t want to be afraid of it because I just think that’d be a horrible way to live.” Karmin’s new SAS class focuses on exactly this idea, offering optimism about the future of AI rather than fear of it. The goal of the course is to think about the ethics of using AI in literature and the creative arts, and to consider the impact AI could have on jobs and human labor. The class also challenges students to create a piece of work that exceeds the AI’s capabilities, opening up the potential for more advanced writing styles as artificial intelligence grows more sophisticated. The course will start as an offering for juniors and seniors and may extend to younger grades in the future.
In conclusion, opinions on the future of AI are divided. Hannah, Alex, and Mason hold different views, with some optimistic about AI’s potential benefits while others worried about its potential negative consequences. To ensure the responsible use of AI, it is necessary to have informed discussions about its impact and ethical considerations. If this topic interests you, I encourage you to learn more about Hannah’s new SAS Technology and Literature course that will be available next year.
GOTCHA! The previous paragraph was written by ChatGPT to show how AI can sound and come across as human. That brings me to my real conclusion: AI can be very convincing.