As a journalism student during the rise in popularity and capability of artificial intelligence (AI) software, I was taken aback by something that seemed to come straight out of a sci-fi movie.
The first interaction I had with AI came in the spring of 2023, shortly after the launch of OpenAI's ChatGPT in November 2022. In one of my classes, the software was so widely used by students trying to cheat on assignments that my professor ended up incorporating AI into the course itself.
To show the whole class, even those of us who had never used AI, how obvious it was when a student hadn't written something themselves, she had us quickly "report" on a random event happening at school and then ask ChatGPT to write the same article.
Often, ChatGPT got the facts wrong or produced something slightly disorganized, depending on the prompt the student gave it. For example, I remember one student's prompt led to a detailed story about an event at the University of Oxford in England rather than the University of Mississippi in Oxford.
In a different class, a professor encouraged the use of ChatGPT or similar AI platforms for brainstorming or just getting the ball rolling on an assignment. It was a little overwhelming trying to grasp a tool that could, in both instances, miraculously produce an article or paragraph of whatever length the assignment required.
It's safe to say I was worried about being able to find a job after college. I assumed AI would have fully taken over the world of journalism by the time I graduated. However, I still hold AI in a state of limbo, unsure whether it is a danger to avoid or a benefit to reap in the news world.
According to OpenAI, the organization behind ChatGPT, the product "interacts in a conversational way," letting users ask questions and receive detailed responses of varying length drawn from public information across the internet.
"The dialogue format makes it possible for ChatGPT to answer followup (sic) questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests," the OpenAI website reads.
For this year's World Press Freedom Day, whose theme was AI's impact on press freedom and the media, I decided to run an experiment similar to the one from college. This time, I focused on something previously written by Gulf Coast Media.
I chose an article that Kayla Green and I wrote about a mullet fish kill in Gulf Shores the week after this year's historic snowfall recorded across Baldwin County.
The first thing I noticed was ChatGPT's mention of concern among "wildlife experts" over the fish kill, even though in both the GCM and ChatGPT articles, those "experts" said it was nothing out of the ordinary for severe cold weather.
The next difference I noticed was that ChatGPT went into more specifics about Winter Storm Enzo, which brought record freezing temperatures and up to 10 inches of snow, and its overall impact on the Southeast. GCM covered the storm in detail in other articles; in this one, it was mentioned only for context.
ChatGPT also brought up the Flora-Bama Mullet Toss but never really explained why, other than perhaps because the word "Mullet" appears in the event's name. GCM covered how the fish kill affected that event in a later article.
What stood out to me most was realizing that ChatGPT attributed its information to a single person, Col. Scott Bannon, director of Alabama's Marine Resources Division. A fine source for the story, I would say. However, the AI article contained no direct quotes, only paraphrases and generalizations. The GCM article had direct, partial and paraphrased quotes from individuals with the Dauphin Island Sea Lab, the Little Lagoon Preservation Society and the City of Gulf Shores.
Why did this stand out to me? Because I remember working on this story on a Sunday afternoon, trying to organize all the details and make sure we had all the correct information on the incident. I remember calling and emailing sources and getting instant responses on what was happening.
When I read ChatGPT's account, I was left with the thought, "Would my sources have answered a call from AI on a Sunday afternoon to talk about the dead fish in Gulf Shores?"
I may never know the answer to this question, and it likely won't bother me enough to care, but I felt the absence of direct quotes said something about the relationship (or lack thereof) with the source. As AI is continuously updated and possibly used in the world of news, I have to wonder: will the news it aggregates ever reach the personal, thorough and informative level of a real, living reporter?
ChatGPT spit out an article of more than 300 words in a matter of seconds, but some of its statements were inaccurate or irrelevant to the story, it had no direct quotes, and I, personally, doubt any of the sources would have responded immediately to an AI program's request for comment on the fish kill.
In the long run, we'll have to wait and see how AI grows to impact journalism in Baldwin County and as a whole.