Scaling Research with Generative AI
Even with an artificial mind at your fingertips, there’s no shortcut to quality. Making generative AI work for you takes hands-on experience. Tools such as Gemini and ChatGPT don’t automate so much as amplify: used correctly, they help writers and their teams supercharge their creativity in a streamlined process.
Consider a recent case study. A client tasked our team with writing material to pitch their products to more than 20 different audiences. For each audience, we needed to adapt descriptions of ten different products into a bespoke pitch. The challenge: our team had to produce nearly 200 distinct ~150-word paragraphs, spread across 20 audience documents. And that required a lot of research.
We figured that Gemini, Google’s generative AI tool, could help us scale that research. But we needed to use it methodically to keep everything procedurally above board.
The Constraints
This much was clear from the get-go: we could not use generative AI to write the blurbs for us. All ~30,000 words needed to be original, carefully crafted content composed by our very human team members.
The reasons are worth spelling out. Most importantly, our clients deserve a transparently produced, quality product: one made by the team they paid to produce it. Our company policy on generative AI makes it clear that our material is produced by human writers.
There are other reasons to be cautious about using generative AI. First, there’s accuracy: large language models notoriously “hallucinate,” generating inaccurate information with an air of confidence. Second, even when large language models do generate accurate text, that text has dubious copyright status, which could prevent the client from holding rights to content produced in their name.
We couldn’t use Gemini to compose the final written product, and we couldn’t rely on it as a source. So what good could it do? Quite a lot, in fact!
Getting The Ball Rolling
What made AI helpful in this case is that the task called for breadth of common-sense insight, not depth of specialized knowledge. Our client had already supplied us with very detailed information about their products; what remained was a need for a general sense of the priorities of a wide range of sectors.
To start, we fed Gemini the following prompt: “What are common pain points in [such-and-such sector]?” Asked about the utilities sector, for example, Gemini produced the following list of pain points:
Aging infrastructure: Many utilities have aging infrastructure that is in need of repair or replacement, leading to increased maintenance costs and potential disruptions.
Rising costs: The cost of materials, labor, and energy has been on the rise, putting pressure on utility companies to maintain profitability.
Increasing regulatory burdens: Utilities are subject to a complex regulatory environment, which can be time-consuming and costly to comply with.
Climate change: Climate change is leading to more extreme weather events, such as hurricanes, wildfires, and floods, which can disrupt utility services and increase costs.
Cybersecurity threats: Utilities are increasingly vulnerable to cyberattacks, which can disrupt operations and compromise customer data.
Customer expectations: Customers are becoming more demanding, expecting reliable, affordable, and sustainable services.
Transition to renewable energy: The shift towards renewable energy sources presents challenges for utilities, such as integrating intermittent sources into the grid and managing the transition from traditional power generation.
What all these points have in common is this: they’re fairly obvious. You’d expect utilities to be highly regulated. And it stands to reason they’d be among the most concerned with renewable energy.
However, this is precisely why generative AI proved useful here. Finding the pain points of any one sector is easy; finding the pain points of twenty different audiences is time-consuming. Think of it as brainstorming at scale.
It’s also important to note that the pain points listed above are all common knowledge. This means that sourcing isn’t an issue: AI has given a list of points that any reasonably intelligent person could have come up with themselves, given time.
From there, we repeated the pain-point prompt for each target audience, giving additional context in the prompt when necessary. For each type of product, we also asked Gemini to list some reasons why the audience might be interested in such a product.
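In practice we worked through these prompts by hand, but the same fan-out of prompts across sectors and products could be scripted. Here's a minimal sketch; the sector and product names are illustrative placeholders, not the client's real data, and sending the prompts to Gemini (via its web interface or an API client) is left as a separate step.

```python
# Hypothetical example: generating one research prompt per sector,
# plus one per (sector, product) pairing. Names are placeholders.

SECTORS = ["utilities", "healthcare", "retail"]   # 20+ in practice
PRODUCTS = ["Product A", "Product B"]             # 10 in practice

def build_prompts(sectors, products):
    """Return a dict mapping (sector, product) to a research prompt.

    A product of None marks the general pain-point prompt for that
    sector; every other entry asks why the sector might want the
    product in question.
    """
    prompts = {}
    for sector in sectors:
        prompts[(sector, None)] = (
            f"What are common pain points in the {sector} sector?"
        )
        for product in products:
            prompts[(sector, product)] = (
                f"Why might organizations in the {sector} sector "
                f"be interested in a product like {product}?"
            )
    return prompts

prompts = build_prompts(SECTORS, PRODUCTS)
# Each prompt would then be sent to Gemini and the response pasted
# into the shared draft document for review.
print(len(prompts))  # 3 sectors x (1 general + 2 products) = 9
```

The point of scripting this step isn't automation of the writing itself; it's simply making sure no audience/product pairing gets skipped when the grid grows to 200 cells.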
Sorting Through Responses
The points generated by Gemini were sometimes irrelevant, and occasionally outright false. After copying the responses into a Word doc, we removed any points that weren’t helpful in positioning our client’s products to the target audience in question.
For the remaining points, we did a quick spot check for veracity. For example, a quick Google search for “utilities companies aging infrastructure” turned up dozens of articles, giving us confidence that the AI wasn’t hallucinating. These articles didn’t become additional sources: the common-knowledge claim in question was original to none of them, and more specific facts and figures weren’t necessary for the purpose at hand.
After sorting, we copied all of the AI material into the relevant portions of the draft document shared with our team. For transparency, all material generated by AI was marked with red text. This indicated what material would need to be replaced or rewritten, in line with our company policy.
Reword, Replace, Research
We used the points raised by Gemini as a springboard for our first draft. Our team replaced the wooden machine prose with some choice turns of phrase, reordering material and adding supplementary points as necessary to meet client specifications.
At times, the AI responses were too watered down or generic to be useful. In those cases, we did additional research on the sectors in question. While due diligence means some research is unavoidable, building on the layer of automated common sense still saved us hours.
Over time, the red text dwindled. Think of it as a literary ship of Theseus: the ancient Athenian ship whose planks were repaired and replaced until none of the original wood remained.
After multiple rounds of comments, intensive editing, and revision, each piece was done.
A Ladder to Content
This project taught us an interesting lesson: AI-generated text doesn’t need to be used in order to be useful. By generating an initial layer of common-sense insights into various sectors, we were able to build upwards. If the ship of Theseus isn’t your style, think of AI text as a ladder to be tossed aside once you’ve attained the heights. Scaling research is like scaling walls: what matters is the vantage point attained, not the means used to attain it.