
How technology is powering environmental reporting


This is part one of a series of three blogposts about the first year of the new Investigations & Reporting team, a small group of software developers embedded in the Guardian newsroom. Part two will be published on Tuesday 19th January.

In the summer of 2019, the Guardian assembled a team of engineers to work alongside reporters to build tools to help them discover new stories. We were tasked with finding opportunities to collaborate closely with our editorial colleagues.

Such collaborations are not new at the Guardian. Our editorial tools team builds software for writing and publishing journalism, and our data journalism and visuals teams write bespoke code for individual stories. And our team wasn’t starting from scratch – we already had Giant, the Guardian’s in-house tool for securely searching and browsing data relevant to complex investigations. But we believed much more could be done at the news-gathering stage of the journalistic process. We needed to work alongside editorial staff to better understand the technical challenges of the newsroom.

An opportunity quickly presented itself. In October, the Guardian was preparing a landmark series focusing on the fossil fuel industry, and the structures behind it, which are driving the climate emergency. We were put in touch with environment correspondent Sandra Laville, who was looking into supposedly grassroots organisations that were in reality funded by fossil fuel companies. Sandra wanted to know if online advertising by these groups had affected the passage of environmental legislation.

Sandra and fellow Guardian reporter David Pegg were interested not just in which adverts had been run but also in their scale: how many people they had reached, in which areas, and how much the groups had spent.

Facebook provides a basic user interface to its Ad Library.

Facebook’s Ad Library. Illustration: Facebook

But this UI didn’t let us filter or summarise the data in ways that would make the salient bits stand out. For instance, individual ads have information against them such as funding body, impressions, spending, demographic and regional targeting; but back in 2019, shortly after its launch, the UI did not allow you to sort, filter or aggregate by these categories. (Some but not all of this functionality has since been added by Facebook). If we pulled the data directly from the Ad Library’s API, we could fill in this missing functionality with our own tools.

We quickly wrote a scraper that queried the API for a set of search terms and stored the results. Sandra and David needed the data presented in a way that would allow them to scan for leads, so we modified our scraper to convert the JSON data to CSV and manually uploaded it to Google Sheets.
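A minimal sketch of that scraper follows, assuming the Graph API’s ads_archive endpoint as it was documented around that time; the API version, field names and pagination handling here are illustrative rather than a record of the exact script we ran.

```python
import csv
import requests

# Illustrative endpoint and field list for Facebook's Ad Library API.
API_URL = "https://graph.facebook.com/v5.0/ads_archive"
FIELDS = ("page_name,funding_entity,ad_creative_body,"
          "spend,impressions,ad_delivery_start_time")

def fetch_ads(search_term, access_token, country="US"):
    """Page through the Ad Library results for one search term."""
    params = {
        "search_terms": search_term,
        "ad_reached_countries": country,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "fields": FIELDS,
        "limit": 250,
        "access_token": access_token,
    }
    ads, url = [], API_URL
    while url:
        resp = requests.get(url, params=params)
        resp.raise_for_status()
        payload = resp.json()
        ads.extend(payload.get("data", []))
        # The API returns an absolute URL for the next page, if any,
        # which already carries the full query string.
        url = payload.get("paging", {}).get("next")
        params = None
    return ads

def write_csv(ads, path):
    """Flatten the JSON records into rows reporters can scan in a sheet."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS.split(","),
                                extrasaction="ignore")
        writer.writeheader()
        for ad in ads:
            row = dict(ad)
            # Spend and impressions arrive as {lower_bound, upper_bound}
            # ranges rather than exact values.
            for key in ("spend", "impressions"):
                bounds = ad.get(key) or {}
                row[key] = "{}-{}".format(bounds.get("lower_bound", ""),
                                          bounds.get("upper_bound", ""))
            writer.writerow(row)
```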

Turning Facebook ads into a spreadsheet via a script running on an engineer’s laptop. Illustration: Joseph Smith

However, spreadsheets are not always ideal. For this project, we wanted to cast the net as wide as possible by querying Facebook’s API for very generic terms such as “fracking” and “jobs” and then rapidly filtering within this large set of results. Spreadsheets are not the best way to experiment with different filters on huge datasets. Instead, we turned to software very familiar to us as developers – the ELK stack. Elasticsearch sits at the heart of our audience analytics tool, Ophan, and developers at the Guardian use Logstash and Kibana every day to interrogate our application logs.
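Getting the scraped adverts into Elasticsearch could look something like the sketch below, using the official Python client; the index name, field names and the idea of copying the range lower bounds into numeric fields are illustrative assumptions, not our exact setup.

```python
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")

def index_ads(ads, index="fb-ads"):
    """Bulk-load scraped ads into an index, copying each range's lower
    bound into a numeric field so Kibana can sort and aggregate on it."""
    actions = (
        {
            "_index": index,
            "_source": {
                **ad,
                "spend_lower": int(
                    (ad.get("spend") or {}).get("lower_bound", 0)),
                "impressions_lower": int(
                    (ad.get("impressions") or {}).get("lower_bound", 0)),
            },
        }
        for ad in ads
    )
    bulk(es, actions)
```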

With ELK we could quickly try different filters of the Ad Library data, then create visualisations from them. This allowed us to have rapid-fire sessions with reporters to try out ideas and identify leads. We were looking for examples not only of big spending or big impressions but also of adverts that we could confidently tie back to particular issues or organisations.
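A single session might cycle through many variations of a query like the one below (field names as in the loading sketch above; this is a standard Elasticsearch bool query written against the modern Python client, not a record of our exact filters).

```python
# Find high-reach adverts mentioning fracking, biggest first.
hits = es.search(
    index="fb-ads",
    query={
        "bool": {
            "must": [{"match": {"ad_creative_body": "fracking"}}],
            "filter": [{"range": {"impressions_lower": {"gte": 10000}}}],
        }
    },
    sort=[{"impressions_lower": {"order": "desc"}}],
)
for hit in hits["hits"]["hits"]:
    ad = hit["_source"]
    print(ad["funding_entity"], ad["impressions_lower"])
```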

Uploading Facebook ads to Elasticsearch and browsing them with Kibana. Illustration: Joseph Smith

We ended up focusing on two pieces of defeated legislation: Proposition 112, which aimed to set minimum distances between oil and gas projects and other buildings in Colorado, and Measure G-18, which would have banned fracking in San Luis Obispo county, California.

There were nuances to the data. The Guardian’s data projects team helped us avoid some pitfalls, particularly around aggregations of spending and impressions, both of which Facebook provides as ranges rather than exact values. For the final story we combined the lower bounds with the total count of adverts to put specific minimum figures on the adverts placed by organisations such as “Protect Colorado” and “No on Measure G”, whose funding was traced back to large fossil fuel companies.
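In code, that conservative arithmetic amounts to summing the lower bound of each advert’s range, giving a floor one can defend in print. The sketch below assumes records in the shape returned by the API; `protect_colorado_ads` is a hypothetical list of scraped adverts, named only for illustration.

```python
def minimum_spend(ads):
    """Sum the lower bound of each ad's spend range: a defensible floor."""
    total = sum(int((ad.get("spend") or {}).get("lower_bound", 0))
                for ad in ads)
    return total, len(ads)

# protect_colorado_ads is a hypothetical list of scraped records.
low, count = minimum_spend(protect_colorado_ads)
print(f"At least ${low:,} spent across {count} adverts")
```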

It was exhilarating to build software, however simple, at the pace of the newsroom and to meet tight publishing deadlines. We had proved that we could help on a live project. As it turned out, we soon took what we had learned and applied it to one of the Guardian’s biggest editorial projects, the UK general election, which we will cover in our next blogpost.


