The landscape of journalism has been in flux ever since the birth of the internet and social media. Apps have replaced newspapers; live news broadcasts have been swapped out for bite-size video clips. Even if you aren’t actively engaged in current events, all it takes now is a quick scroll through your preferred social media feed to be inundated with highly personalised stories and attention-grabbing headlines. This constant shift in how people consume news has left many outlets scrambling to keep up. Buzzfeed News and Vice News Tonight, once trailblazers in modern-day reporting, are the latest to close their doors, and many others have been forced to rely on subscriptions, advertising, or the deep pockets of their owners to stay in business.
The rapidly evolving enigma of Artificial Intelligence (AI) now presents a new challenge to even the most reputable and steadfast of media organisations, and the fear instilled by scaremongering headlines has led many to question whether journalism will be able to adjust.
The positives and negatives of AI
Whilst AI in its most basic form has been around for some time, the recent shift to chatbots powered by large language models (LLMs) has made it more accessible and consumer-friendly than ever before. Chatbots like ChatGPT and Bard AI can now produce entire articles in a fraction of the time it takes a human, drawing on a multitude of sources and making few, if any, grammatical errors. The speed and depth of their data analysis also allows them to spot trends and irregularities that would take a person months of research and database-scouring to find.
In its current form, however, and for the purposes of journalism, AI is beset by drawbacks. It regularly fabricates information, sometimes citing sources that do not exist. From a reader’s perspective, AI-generated text feels synthetic: lacking in originality, nuance, and contextual analysis. At the risk of stating the obvious, it is also incapable of conducting interviews or following leads, and its fundamental nature as predictive technology prevents it from seeking out unexplored stories and perspectives. There is also the crucial question of ethics, with evidence surfacing that AI has already absorbed some of humanity’s worst biases.
As for AI being a threat to journalism specifically, media groups are growing concerned that their content is being used, without their knowledge, to train generative AI, and questions are beginning to arise about how they will be compensated for this. Google is also developing an AI feature that produces an AI-generated summary above the regular search results, meaning many publishers will lose out on valuable site visits in an era where a company’s success often correlates directly with clicks. The issue all of these problems come down to is credibility and truth: the premise on which journalism is founded, and to which the dissemination of news, in all its forms, ultimately boils down.
AI’s threat to credibility
Most experts appear to agree that, in its current form, AI is unfit for breaking news. Whilst the factual discrepancies in chatbot-generated content aren’t always glaring, that just doesn’t cut it in the newsroom. Media organisations simply can’t rely on AI to distribute news if there’s even the slightest chance it could be incorrect. That should go without saying, but some outlets have come under fire for their undisclosed use of AI, with many citing the “supercharging of disinformation” as a threat to journalists and the wider public alike. Without regulatory frameworks for the use of AI in journalism, there is a huge risk that the technology will breed distrust in media organisations, regardless of who is and isn’t using it.
By the same token, this could bring journalism full circle. If a proportion of news outlets use AI and AI is deemed not credible, we may see a resurgence in the popularity of a handful of established news platforms that continue to rely on human journalists. There is also the very real possibility that organisations won’t want to rely on AI at all: if an AI bot publishes erroneous data, responsibility falls on the entire organisation rather than on a single named journalist, and that is far harder to absolve.
How can we work with AI?
Whilst the rapid recent advancement of AI technology is setting alarm bells ringing industry-wide, it also presents an opportunity to acknowledge journalism’s pitfalls and use AI to counter them. In its current form, AI gives industries an invaluable tool to work faster than ever before, and from a journalism perspective, experts suggest it can be used to build “event detection systems”. Applied XL, a self-described event detection company, offers an example of what can happen when AI is used in conjunction with the human mind: it has had considerable success using AI’s rapid data analysis to anticipate events, sometimes a year before they happen.
Under human supervision, AI software can capture and compute real-time information faster than we mere mortals can even read a headline. There may be a battle between media organisations over who will be quickest to adopt the new technology and use it in a restrained and resourceful way, but what may come out of it is the ability to bring breaking news to people faster and to predict future events more accurately. What we must all ensure, however, is that the correct regulatory frameworks are in place so that AI does not get out of hand and become a cautionary tale of what happens when new technology is not effectively moderated.