The University of Vermont's Independent Voice Since 1883

The Vermont Cynic


We are approaching an AI dystopia

Emma Cathers

Disclaimer: not written with AI

Artificial intelligence is pushing us to the brink of a preventable dystopia and we can’t continue to ignore its many risks.

Having already covered ChatGPT and plagiarism in a column last year, I have continued to observe the developments and evolutions of AI technology.

ChatGPT is a generative AI software that responds to user prompts with answers and solutions.

Many people, including myself, use GPT software to maximize efficiency on all kinds of scholarly, professional and personal tasks.

From summarizing readings to drafting cover letters, AI does it all. I even use it to create recipes based on the contents of my fridge.

I am not alone: 27% of Americans report using AI multiple times a day, with an additional 28% using it once a day or a few times a week, according to a Dec. 18, 2022 survey by Pew Research Center.

However, AI technology is not all efficient bliss, as many AI advancements can easily be corrupted by people intent on doing harm.

For example, with the development of AI voice cloning software, individuals can now easily replace one voice with another, all from a small sample of data.

This has led to a rise in AI telephone scams, according to a Sept. 23, 2023 Fox Business article.

Not to mention, voice cloning can have disastrous consequences for the stability of political truth.

But it doesn’t stop at voices. AI deepfakes, or altered videos in which a person’s face is superimposed over another’s body, are seemingly improving day by day.

With 2024 being an election year, this will be the first time we see AI deepfake disinformation deployed in a political context, according to a Feb. 15 article by the Wall Street Journal.

AI is being abused for sexually nefarious reasons as well.

Using elements of this deepfake technology, many different "de-clothing" websites have arisen, allowing anyone to upload a photo of an individual and see a "realistically" nude version of them.

It will not be long before fake AI nudes of individuals spread across campuses and schools.

Conversely, on the right-wing extremist online forum 4Chan, users created an AI tool called "DignifAI" that re-clothes and removes tattoos from images of women, especially sex workers and feminists, in order to portray them as "trad," or traditional, women.

It doesn't stop there, as some authors are using AI to create "fast-fiction," producing books at a faster rate and with less effort in order to fulfill market demand, according to a Jul. 20, 2022 article by The Verge.

Recently, I’ve been coming across a new kind of AI content: TikTok videos of world monuments seemingly engulfed by flame and damaged, the source of the destruction unknown.

This will become even more common with the Feb. 15 release of OpenAI's Sora, a generative AI tool that creates videos from text prompts.

Some of the prompt examples from the OpenAI website include historical footage of the gold rush, reflections in the window of a subway car or stop-motion videos of a flower growing.

All of these examples are almost impossible to distinguish from their legitimate equivalents, so OpenAI is including a detection tool among other safety measures to limit abuse of the new software, according to the safety details listed on the OpenAI website.

Of course, proper education in media literacy has always been vital in identifying and countering social media disinformation.

However, with the dangers that AI can pose to the truth, some places have begun to legislatively limit the uses of AI technology.

The European Union AI Act categorizes AI technology by risk, with “unacceptable” risks including tools for biometric identification and cognitive manipulation. This EU act also insists on transparency in AI-generated materials and media, according to a June 8, 2023 article from the European Parliament.

The U.S. has a responsibility to follow suit in order to protect its citizenry and democracy.

UVM also has a responsibility to act, and though some of my professors have suggested oral exams as an AI-proof alternative, there could also be a return to the pre-COVID-19 policies of traditional written exams and in-class essays.

Another option could be a widespread incorporation of AI tools, like ChatGPT, into academia.

Either way, the effects of advancing AI technology will cause significant shifts in universities, careers and human lives.

So, no matter at what level, the time has come to integrate AI where useful, and legislate where harmful.

