A.I. is everywhere, even in your Snapchat app
Snapchat recently released an artificial intelligence (A.I.) component in the form of an in-app contact. Since being released to all 750 million users, it has drawn mixed reviews ranging from amusement to anger. Although it is marketed as a helpful “companion” that can answer questions and give advice, many individuals have serious concerns about privacy and the ethical grounds of having a “robot friend.” The A.I. claims it does not have access to users’ locations, yet it is able to recommend restaurants and activities closest to them, implying that it does have access to this personal and sensitive information. Although Snapchat is largely built around its location features, the concern stems from the A.I.’s denial of knowing information it clearly can access. Some users, however, do not seem to have any concerns with the A.I. Anna Gasparakis, a Park High senior, has had a lot of fun with Snapchat’s latest feature. “Whenever I am feeling angry, I will yell at my A.I.,” states Gasparakis. “I think it’s very healthy; it’s better than keeping things bottled up.”
There are many different ways people have been using the A.I., and whether it is to vent one’s anger, as in the case of Gasparakis, or to get homework answers, almost all of them are cause for concern. By presenting the A.I. as a contact, Snapchat crosses a line between robot and friend. The Snapchat CEO, Evan Spiegel, quoted in The Verge, an online technology journal, said, “The big idea is that in addition to talking to our friends and family every day, we’re going to talk to AI every day.” This confirms that Snapchat is trying to humanize the A.I. and normalize speaking to it. The very nature of Snapchat introducing the A.I. as a contact makes it seem like people are texting one of their friends with a few questions. But the Snapchat chatbot is not a friend; it is a programmed machine. This, however, has not stopped people from forming attachments to the A.I. Gasparakis mentioned that she likes the feature because, “This robot actually really cares about you as a general person.” I, however, do not believe that the chatbot cares about the users. In fact, I think this is a dangerous mindset, especially for young people who form an attachment to the A.I.
Another issue arising from this feature is that it creates a further dependence on Snapchat. In the same article, Spiegel stated, “It’s a bet that AI chatbots will increasingly become a part of everyday life for more people.” Not only will people open the app to chat with friends, but now they will also open it when they would previously have searched Google. The difference between Snapchat’s A.I. and a Google search is that the A.I. gives one simple, “factual” answer that leaves no room for debate or for cross-referencing multiple sources. It claims to be an objective fountain of knowledge, but what people often forget is that this robot was originally programmed by a human, a human with subjective feelings and beliefs. The A.I. has already been shown to carry certain political and social biases. Students across the country have also been using the A.I. for help with homework and even to get outright answers. This is one of the main concerns surrounding the use of artificial intelligence in a school setting. Although schools can block certain websites such as ChatGPT, it is not as simple to ban Snapchat within the school. It seems that artificial intelligence is only moving in one direction, and unfortunately Snapchat’s A.I. appears to be just the beginning.