We created an AI tool for journalists. Here are our key takeaways.

25 September 2023 in Media Innovation

Artificial intelligence (AI) is the new buzzword in journalism. Media professionals and academics alike are racing to figure out how this latest technological innovation will reshape an already precarious industry. 

Will AI be a resource to newsrooms with declining revenues? Will it take away jobs or free up already-overworked journalists to produce high-quality stories?

What is AI? What is NLP?

First, let's clarify some definitions. AI refers to the capacity of machines to perform tasks that are usually linked to human cognition and intelligence. In the context of journalism, AI typically refers to applications that analyze, understand and generate text without human intervention. 

Natural Language Processing (NLP) is a subset of AI that focuses on the interaction between computers and humans through natural language. It is also worth noting that "natural language" refers to languages spoken by humans, such as English, in contrast to programming languages like Python. 

Much of the discussion around AI in journalism is based on NLP capabilities. It is through NLP that AI helps journalists summarize articles, translate content and corroborate information. Essentially, all AI applications that use our everyday language are made possible by NLP.  

How we developed an AI tool for journalists

In 2021, I was part of an interdisciplinary team working to solve an investigative problem: parsing important information out of millions of pages of unstructured data, that is, text. This was made all the more difficult by the fact that we were working with non-English-language texts. We began experimenting with the GPT-3 API, and that was when we had our technical “aha!” moment.

This was before ChatGPT came onto the scene, when journalists were still deeply skeptical of AI. We got busy creating a proof of concept to show the power of this new innovation from OpenAI.

We began experimenting with NPR articles and developed a summarizer that turns them into quick bullet-point summaries, similar to Axios' article style. We chose this format mainly because we all liked NPR's stories but found they were often on the longer side. The tool we developed summarizes NPR articles as soon as they are published and makes them available on our website, Gist, after a journalist reviews and approves the summary.
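For illustration, here is a minimal sketch of what such a summarization call might have looked like against the GPT-3 completions API of that era. The model name, prompt wording and parameters are assumptions made for this example, not the production values behind Gist.

    import openai  # the completions-era OpenAI client (pip install openai)

    openai.api_key = "YOUR_API_KEY"  # in practice, read from an environment variable

    def summarize(article_text: str) -> str:
        """Ask a GPT-3 completion model for a short bullet-point summary."""
        prompt = (
            "Summarize the following news article as three to five concise "
            "bullet points. Use only facts and quotes that appear in the "
            "article.\n\n"
            f"Article:\n{article_text}\n\nSummary:\n"
        )
        response = openai.Completion.create(
            model="text-davinci-003",  # illustrative GPT-3-era model name
            prompt=prompt,
            max_tokens=256,
            temperature=0.2,  # low temperature keeps output close to the source
        )
        return response["choices"][0]["text"].strip()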

Early on, we realized that our initial model was “hallucinating” whenever its sentences ran longer than a couple of lines. The summaries included quotes that made sense contextually and grammatically, but that did not appear in the source NPR article.

"Hallucinations" in the context of NLP refers to instances where the model generates outputs that are either inaccurate, unanchored in the input data, or plainly nonsensical. In our case, we needed to make sure the quotes in the summary actually existed in the original articles. Hallucinations in a journalistic context could be fatal and can result in misinformation. 

We began making adjustments to the model to prevent these hallucinations. This was an iterative process, as we had to continuously train and test the model. We also added more journalistic guardrails along the way. The experience gave us some key takeaways for future AI applications in journalism.

Our takeaways

Training a model with journalistic standards in mind was not easy, but the long path made four points clear to us:

1) AI-assisted reporting is doable but AI reporting is not. Human supervision cannot be removed from a journalistic process. 

Human judgment and approval are an integral part of any journalistic process and, while we can use technology to take on tedious, repetitive tasks, it cannot entirely replace journalists. For example, in our workflow, every single summary is read and approved by a journalist before it is published.

2) AI models are only as good as their training. To get the best results from any language model, its purpose should be well defined. Only then can the most efficient path to training be identified and the process begin.

Models are also not a one-size-fits-all solution. Newsrooms should consider their own unique challenges and requirements as they explore AI solutions, and journalists need to play an active role in training the algorithm rather than leaving it up to developers alone. A model that is trained only on court documents in Texas, for example, might not produce the best results with court documents in Alaska because the style of writing legal opinions is different.  

3) Interdisciplinary collaboration is essential for AI in journalism. Developers cannot tackle tasks like fact-checking, content summarization and translation without journalists. And journalists cannot go it alone, either.

We need to create collaborative spaces for journalists and developers to work together, and, most importantly, ensure that journalists get a say in how AI is used in their profession. 

4) Current models can achieve great performance using only training data from one's own organization. We found we could match the style and quality we wanted while training only on data we created ourselves.

Quality examples crafted by journalists yield far better performance than similar content scraped from other sources. Nor should this be a one-time step: teams should repeatedly identify their current model's weaknesses and craft additional examples that demonstrate the correct behavior.
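As a concrete illustration, each journalist-approved summary can be appended to a file of training examples. The layout below is the legacy OpenAI prompt/completion fine-tuning format; the file name and example text are hypothetical, not Gist's actual training data.

    import json

    # A hypothetical training pair: the article as the prompt, the
    # journalist-approved bullet summary as the target completion.
    example = {
        "prompt": "Article:\n<full article text here>\n\nSummary:\n",
        "completion": "- First point drawn from the article.\n"
                      "- Second point, quoting only text that appears verbatim.\n",
    }

    # Legacy OpenAI fine-tuning expects one JSON object per line (JSONL).
    with open("training_examples.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(example) + "\n")

Collected this way, the examples grow alongside the editorial workflow, and each round of review doubles as curation of the next round of training data.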


Photo by Conny Schneider on Unsplash.