Q&A: How Thomson Reuters used genAI to enable a citizen developer workforce

Over the past three decades, Thomson Reuters has relied on artificial intelligence (AI) to help its clients — and its own employees — sift through troves of digital documents to discover those most relevant to the issue at hand.

But when generative AI (genAI) burst onto the scene in late 2022, the company was forced to rethink its AI strategy to stay ahead of competitors and address an insatiable client demand for industry-specific information.

Thomson Reuters in November unveiled its genAI strategy and product rollout after its integration with Microsoft Copilot, along with a $650 million acquisition of genAI tech provider Casetext and a pledge to invest $100 million annually in new genAI tools for internal and client use.

The company’s genAI solution is a cloud-based, API-driven platform that leverages the full scale of the company’s content to enable employees and clients to build new AI skills with reusable components. The Toronto-based company also had to reskill all of its employees to understand how to best use the new genAI platform.

Along with a staff of more than 2,500 journalists and 6,500 photojournalists, the global content and technology company provides data and information to professionals across the legal, tax, and accounting industries, among others.

Shawn Malhotra, head of engineering at Thomson Reuters, led the creation of an “AI Skills Factory,” a low-code way to create new apps with minimal engineering support. The platform enables technologists to design, build, and deploy tools quickly, while allowing non-techies to experiment with genAI safely, making innovation and ideation faster and more inclusive. 

Because of the success of its genAI platform, Thomson Reuters has been able to roll out three AI-enabled solutions for attorneys and other clients over the past three months, and plans more in the near future.


Shawn Malhotra, head of engineering for Thomson Reuters.

Computerworld talked with Malhotra about how his company built its AI strategy and how it has helped workers internally and clients perform their jobs.

Tell me about Thomson Reuters and the problem it was facing that genAI addressed? “GenAI was a game changer. We serve legal professionals, tax professionals, risk and compliance professionals, and Reuters News. We have a history of using natural language processing prompts.

“We started by listening to our customers and realizing that many of the things they struggle with and spend a lot of their time on we can accelerate with these tools around large language models (LLMs). In fact, that created a secondary problem for us; there was so much opportunity across all these end markets that the problem we faced was how to address all of it at the pace our customers required. The genAI platform allows us to innovate at the speed our customers need.

“In November, we launched the AI-Assisted Research on Westlaw Precision memo. We have another two [recent] product launches and being able to do that at pace is really enabled by this AI platform, which lets our developers quickly leverage building blocks that they can reuse across multiple products.”

You must have already had an AI team in place. When genAI arrived, who did you add to that team, and who would you suggest other organizations have on their AI teams? “I don’t think it’s much different from other development efforts. You need developers. You need designers. You need product managers. You need your legal team to ensure what you’re doing is what your customers expect it to be. All of the stakeholders you’d expect to have. The difference with AI is that whether it’s legal, developers, or marketing, this is new to us. So it’s the same types of teams, but they have to have a greater depth of understanding of AI, which is why training is so important. And they have to be able to solve the new problems that are emerging.

“We had a great organization already focused on AI and then rapidly grew that team [for genAI]… One of the biggest value adds is even [with] technologists who aren’t AI experts and non-technologists, you need to find a way to get them to be able to add value for customers as well.

“If the solution to the problem is strictly hiring more AI experts to deliver for your customers, you won’t be able to deliver new products fast enough. You have to do it in a scalable way, and for us that’s taking that AI expertise and [using] that to build the building blocks in order to let the non-AI experts deliver AI value to our customers. That’s the only way I see this scaling.”

When genAI became available to the public in late 2022, how was that a game changer? “We’d been experimenting with the previous versions of [OpenAI’s] large language models, as well as other transformer-based models in the past. So, we’ve always had this on our radar.

“What happened in November 2022 was that the size and quality of language models at that time passed an inflection point, where problems that we weren’t capable of solving before, we now were able to. We’d tried with previous models, like using them to help an attorney summarize a whole bunch of cases and get to the salient facts effectively. They couldn’t do that well. All of a sudden they were good enough. So what GPT did was accelerate something that was already in motion, and so we had to quickly react to that.”

Tell me about how you’ve been applying genAI in the workplace through your acquisition of Casetext and AI-assisted research tools such as Westlaw Precision and CoCounsel Core. “I’ll start with Westlaw. What we released in November last year was the AI-assisted research memo. One of the key problems an attorney faces is that at some point virtually all of them have to do legal research. That often means entering search text into a search engine in Westlaw in order to find cases that may be relevant to the legal question you have.

“That requires complete, current, and correct content to make sure you’re searching over the right stuff. And it requires you as an attorney to read through all that content. Hopefully, it only surfaced the relevant content, but you still have to read through it. You’ve got to understand if it was actually relevant to the research you were doing. That’s the first step.

“Previously, we’d used AI to help surface the right information. That solved the search problem. Now, what genAI allowed us to do is not only does Westlaw Precision memo find the right information, it now summarizes all of the cases that might be relevant. It gives you the citations you might need so you know it’s coming from trusted content. And then it actually gives you a readable, understandable summary.

“That saves our customers time. And it helps them provide a higher quality product for their end customers.

“CoCounsel was created by some great work from the Casetext team. They were actually one of the first companies in the world to partner with OpenAI prior to the release of GPT. That gave them a head start in creating things they’d refer to as AI skills — so, things that will help an attorney do things like summarizing documents, asking questions of a database of information, and a variety of other skills.

“Rather than search for your own content, you just tell us what you want to do and we surface the right content back to you. So it’ll help get AI-assisted answers much more quickly.”

What use of genAI surprised you? “I think it’s just the quality of the result. What we weren’t surprised by was if you used generative AI without world-class, trusted content, you do have problems. But the scale of those problems wasn’t as big as it used to be. We used to say if you took those language models and graded them prior to the recent innovations, they might have been a D student or an F student; they were doing quite poorly.

“When you came to newer LLMs and tried them out of the box in the customer space, it might have graduated to a D or C student. So that was a big change. It was a considerable improvement.

“But what we learned is that by applying techniques such as retrieval augmented generation [RAG], which allowed us to take our world-class content and the power of an LLM and combine them, now you can make it an A student. The thing to remember is that for our customer base, it’s not OK to be right some of the time. They deal with very high-stakes situations where they have to know the content is trusted and current and complete.

“That RAG-based approach was a real ‘aha moment,’ where we found some real value for our customers.”
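The RAG approach Malhotra describes — retrieve passages from trusted content first, then have the LLM answer only from those passages, with citations — can be sketched in a few lines. Everything below is a hypothetical illustration: the naive keyword retriever, the sample documents, and the prompt format are stand-ins, not Thomson Reuters’ systems.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# All names and documents here are hypothetical illustrations.

def search_index(query, documents, top_k=2):
    """Naive keyword retrieval: rank documents by query-term overlap."""
    terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Ground the model in retrieved passages instead of its own memory."""
    passages = search_index(query, documents)
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using only the numbered passages below, citing them.\n\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )

docs = [
    "Smith v. Jones (2019) held that email notice satisfies the statute.",
    "Tax code section 179 covers equipment depreciation limits.",
    "Doe v. Roe (2021) addressed notice delivered by certified mail.",
]
prompt = build_rag_prompt("Does email count as valid notice?", docs)
print(prompt)  # this prompt, not the bare question, goes to the LLM
```

The key design point is that the model is constrained to trusted, current content and asked to cite it, which is what moves the “grade” from a C to an A for high-stakes professional use.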

Which of your data content systems did you have to plug into the LLMs, and did vendors have the APIs needed or did you have to create them? “Some of our content is proprietary, so we’re not getting it from customers. So for years, we’ve built APIs that have made it very accessible. This goes back to that genAI platform. One of the components of the Thomson Reuters platform is simple APIs to allow you to access content.

“If you’re a developer who wants to build an application and you want to access legal content, we have that genAI platform [to] give you easy and safe access to that content. So you can just worry about building the business logic for the application rather than build the APIs to get to the content.”
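The separation Malhotra describes — a platform building block handles content access so application developers write only business logic — can be sketched as below. `ContentAPI`, its in-memory store, and the sample documents are all hypothetical stand-ins; a real building block would hide authentication, search infrastructure, and entitlements behind the same kind of narrow interface.

```python
# Hypothetical sketch of a "content access" building block.
# Application code depends only on the fetch() interface, never on
# how or where the content is actually stored or secured.

class ContentAPI:
    """Stand-in for a platform building block exposing legal content."""

    def __init__(self, store):
        self._store = store  # collection name -> list of documents

    def fetch(self, collection, keyword):
        """Return documents in a collection matching a keyword."""
        return [d for d in self._store.get(collection, [])
                if keyword.lower() in d.lower()]

def summarize_notice_cases(api):
    """Business logic only: no storage, auth, or API plumbing here."""
    cases = api.fetch("caselaw", "notice")
    return f"{len(cases)} case(s) mention notice requirements."

api = ContentAPI({"caselaw": [
    "Smith v. Jones: email notice satisfies the statute.",
    "Doe v. Roe: certified-mail notice is sufficient.",
]})
print(summarize_notice_cases(api))
```

Because the application only sees `fetch()`, the platform team can change how content is indexed or secured without touching any application code, which is what makes the building-block approach scale across many products.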

What were some of the challenges you faced? “I think it was just speed. There’s such an appetite to solve problems that genAI is capable of solving. It was ensuring we could deliver at the speed our customers needed, but in a safe, reliable, and secure way. Those two things can sometimes be at odds with each other.”

What is the Thomson Reuters AI Platform? Is it simply your flavor of Microsoft 365 Copilot, or is it a fully proprietary platform? “It is separate [from Copilot]. This is something Thomson Reuters developed. It’s a set of building blocks. Each one aims to make it easier for someone within Thomson Reuters to build a valuable genAI application for our customers. Some of the examples of building blocks…allow you to access content in a safe, secure way. Other building blocks allow you to build a front-end experience that’s consistent across all our products, which means they’re easy to use.

“There are building blocks that are used to build the prompt for you. So the end developer doesn’t have to understand all the nuance of how to build a prompt for any given large language model. There are building blocks in there that allow you to experiment with different LLMs, so you can try them out and see which one will work well for you. Some of those proprietary models we built ourselves, others are third-party models we think produce good results.

“Some of those building blocks allow you to access the language model in what we call a low-code or no-code way. That means someone who isn’t a technologist, but who has an idea, can use them. So, say I’m an attorney editor at Thomson Reuters and I understand the law, and I wonder if an AI model could potentially do a good job summarizing a type of document. Our genAI platform allows me to experiment and answer that question without having to write any code. That’s really powerful, because going back to that fundamental problem with speed, it’s allowing everyone to take part in that innovation and help with ideas.”

AI-augmented code development has often been cited as low-hanging fruit for first-time users of genAI. What did you find? “There are two things. Regardless of what function you’re in, and this doesn’t just apply to Reuters, genAI has the potential to help augment what we do — to make us more effective. So, if I look at my development team, we’re absolutely looking at generative AI tools that help them write better code and do that faster. That actually increased developer satisfaction. Again, that goes back to speed of developing new products for our clients.

“So, we’re definitely using that within our developer environment. Then you have that same kind of acceleration where someone at Thomson Reuters who really wasn’t capable of experimenting with AI, who wasn’t a technologist, by augmenting them with these low-code, no-code aspects of the genAI platform, they can take part in that ideation and experimentation process, too. So we’re helping them by almost removing the need for them to code in order to allow them access to experimentation with AI tools.”

What are some of those things your non-technical business users are experimenting with? “Whether it’s helping understand changes in tax law or the ability to pull out salient facts from case law for research, we have so much expertise as an organization on tax, on risk, fraud and compliance, on the law, on the news. What we’ve done is make those SMEs capable of taking whatever business problem they’re trying to solve, and there are a lot of them, and figuring out whether these genAI models are good at solving it.”

What security and privacy concerns do you have with genAI, especially since your LLMs are being run in the cloud or in a co-location facility? “Privacy and security have been at the forefront for us since the beginning. If you look at the markets we serve — legal professionals, tax professionals, risk-fraud-compliance professionals — they’re data sensitive. They have obligations to their customers that we have to help them respect and uphold. So security and privacy are embedded into every part of the development process.

