22 ways ChatGPT could be used in economics research

Large language models can help researchers do small tasks not worth giving to a human.
Econ research assistant.
Photo: Florence Lo (Reuters)

Large language models (LLMs)—like ChatGPT—can help economists do better research, if they know how to use them correctly.

This is part of the premise of a recent working paper from the National Bureau of Economic Research by Anton Korinek, an economics professor at the University of Virginia.

To illustrate what LLMs could do in economics research, Korinek used the most powerful system available, which is currently GPT-3. This is “slightly more powerful” than ChatGPT but has a similar output, Korinek added. Like OpenAI’s ChatGPT, the GPT-3 system was trained on public data up until 2021 and cannot access the internet.

Korinek sees research potential in LLMs because they can generate content and digest large amounts of text far faster than humans. But because LLMs can produce text that sounds authoritative yet is inaccurate, he says, humans, who remain better at examining and evaluating the accuracy of content, should use them carefully.

All of Korinek’s suggestions are related to “micro tasks”: small jobs that researchers do every day, but that are too minor to assign to other human assistants.

“I have found that many of my students are already quite well-versed with ChatGPT and have used it for lots of different tasks,” Korinek said. “In fact, some of the examples in my paper were inspired by my students. And I also learned from my students that they use language models not only as assistants but also as tutors.”

Korinek’s fellow faculty are more split on ChatGPT. Some use it as he does; others dismiss it because of its faults; still others haven’t tried it at all.

“My goal in writing the article has been twofold: to expose the regular users of language models to a variety of different use cases and to try to win over some of the skeptics,” Korinek said. “I believe that we as a society have so much to gain if we use these tools responsibly to enhance our productivity and accelerate scientific progress.”

Here are 22 of Korinek’s ideas for how economists can use the technology:

Ideas for new areas of research

  1. Brainstorming. Economists can ask ChatGPT open-ended questions about a broad research area. Korinek asked GPT-3 to brainstorm economic channels through which advances in AI would increase inequality, and it responded with 10 examples, including increased surveillance of workers and increased use of AI-driven algorithms to optimize pricing, which would lead to higher inflation.
  2. Evaluating ideas. ChatGPT and GPT-3 can also comment on the usefulness of a research direction.
  3. Providing counterarguments. Since LLMs do not care which side of an argument they land on, they’re just as good at arguing against a point as for it. That makes them a useful check on the confirmation bias that can blind humans.
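
A both-sides request like this is easy to templatize. The sketch below builds such a prompt in Python; the wording and function name are illustrative assumptions, not taken from Korinek's paper.

```python
def counterargument_prompt(claim: str) -> str:
    """Build a prompt asking an LLM to argue both sides of a claim.

    Illustrative only: the wording is an assumption, not from
    Korinek's paper.
    """
    return (
        "Consider the following claim:\n"
        f'"{claim}"\n'
        "First give the three strongest arguments in favor, "
        "then the three strongest arguments against. "
        "Do not indicate which side you find more convincing."
    )

prompt = counterargument_prompt(
    "Advances in AI will increase income inequality."
)
```

The symmetric phrasing matters: asking only "why is this wrong?" invites the same one-sidedness the technique is meant to avoid.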

Writing economics research

  1. Synthesizing text. LLMs can turn rough bullet points into polished prose. They can write in an “academic style” when instructed to, and they can produce output in LaTeX format, a software system for document preparation.
  2. Editing text. LLMs can revise text and explain their revisions, helping both native and non-native speakers get a better grasp of writing well in a language.
  3. Evaluating text. LLMs can evaluate a text’s style or clarity.
  4. Generating catchy titles and headlines. Economists can give their paper’s abstract to an LLM and ask it to generate the paper’s title.
  5. Generating tweets. LLMs can also review a paper’s abstract and offer an economist a list of tweetable chunks to promote the work on #EconTwitter.

Background research

  1. Summarizing text. LLMs are good at condensing large chunks of text into easily digestible summaries.
  2. Literature research. LLMs can surface references that are frequently cited in a field, but they also frequently make up papers that do not exist, so any literature suggestions should be verified before use.
  3. Formatting references. LLMs can convert legitimate references into whatever citation style your reference list requires. For example, they can take a batch of references in APA style and convert them to Chicago style.
  4. Translating text. LLMs can compete with commercial translation products on “high-resource European languages,” but they perform worse on languages that have less digitized text and fewer digitized translations.
  5. Explaining concepts. LLMs can explain research at a level that both students and researchers trying to learn something new can understand. (Sometimes, though, they confuse fundamental theorems with one another.)

Coding

  1. Writing code. LLMs are very good at standard programming tasks, data manipulation, repetitive tasks, and plotting graphs.
  2. Explaining code. LLMs can look at code and explain what the code does in plain language.
  3. Translating code. LLMs can translate code from one coding language to another.
  4. Debugging code. LLMs can catch typos or violations of basic syntax in coding.
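
Korinek doesn’t prescribe a particular toolchain; as one illustration, a debugging request could be wrapped in a prompt and sent through the OpenAI Python client. This is a minimal sketch in which the model name, prompt wording, and buggy snippet are all assumptions for demonstration; the actual API call needs an API key and is never executed here.

```python
# Sketch: asking an LLM to debug a short code snippet.
# Everything here is illustrative; nothing is from Korinek's paper.

BUGGY_SNIPPET = '''
def mean(xs):
    total = 0
    for x in xs:
        total += x
    return total / len(x)  # bug: should be len(xs)
'''

def build_debug_prompt(code: str) -> str:
    """Wrap a code snippet in a plain-language debugging request."""
    return (
        "Find and fix any bugs in the following Python code. "
        "Explain each fix in one sentence.\n\n" + code
    )

def ask_llm(prompt: str) -> str:
    """Send the prompt to a model. Requires the `openai` package and
    an OPENAI_API_KEY environment variable; not called below."""
    from openai import OpenAI  # imported lazily so the sketch runs without it
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

prompt = build_debug_prompt(BUGGY_SNIPPET)
```

The same two-step shape (build a prompt, send it) covers the explaining, translating, and debugging cases above; only the instruction sentence changes.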

Data analysis

  1. Extracting data from text. An LLM can pull stock prices from news articles or dosage information from drug databases and put them into whatever format an economist needs.
  2. Reformatting data. LLMs can reformat data so that economists can use it or present it in different ways.
  3. Classifying and scoring text. An LLM can look at a task list from the US Department of Labor’s Occupational Information Network database, for example, and determine how easy or hard it would be to automate.
  4. Extracting sentiment. An LLM can take statements from the Federal Open Market Committee and assess whether they’re hawkish or dovish.
  5. Simulating human subjects. Because LLMs are trained on vast amounts of information about humanity, they may also be able to predict what kinds of policies people of given demographics would respond to positively or negatively. There’s a risk, however, that the LLM produces results based on false stereotypes.
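
For the extraction tasks above, a common pattern is to ask the model to reply in JSON and then parse that reply into rows. The sketch below uses a hand-written, canned reply in place of a real model response; the prompt wording and field names are assumptions for illustration.

```python
import json

# Sketch: turning an LLM extraction reply into structured data.
# The prompt template and the canned reply are illustrative; in
# practice the reply would come from an API call.

EXTRACTION_PROMPT = (
    "From the article below, extract every company name and the stock "
    "price mentioned for it. Reply with a JSON list of objects with "
    "keys 'company' and 'price'.\n\nArticle: {article}"
)

# A plausible, hand-written model reply used for demonstration:
canned_reply = """Here is the extracted data:
[
  {"company": "Acme Corp", "price": 12.50},
  {"company": "Globex", "price": 98.01}
]"""

def parse_extraction(reply: str) -> list[dict]:
    """Parse the model's JSON reply, tolerating surrounding prose."""
    start, end = reply.find("["), reply.rfind("]") + 1
    if start == -1 or end == 0:
        raise ValueError("no JSON list found in reply")
    return json.loads(reply[start:end])

rows = parse_extraction(canned_reply)
```

Asking for JSON up front, and tolerating stray prose around it, makes the model's output machine-readable enough to feed straight into a dataframe or regression.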

Mathematical derivations

Korinek also tested whether LLMs could set up mathematical models of the economic relationships that economists study. They can’t yet reason abstractly enough to produce a theoretical result from a mathematical model, Korinek added, but he expects LLMs to make more progress in this area over the medium term.