Cognitive Automation and LLMs in Economic Research: 25 Use-Cases for LLMs Accelerating Research Across 6 Domains

Large language models (LLMs), such as ChatGPT, could revolutionize research in economics and other fields. Researchers from the National Bureau of Economic Research describe 25 use cases across six areas (ideation, writing, background research, data analysis, coding, and mathematical derivations) in which LLMs are beginning to prove valuable. The research group rates the LLMs' capabilities on a scale from experimental to highly useful and provides both general guidelines and specific examples for each. The authors predict that LLM performance will continue to improve in all of these areas and that economists who use LLMs to automate microtasks will become significantly more productive. They also speculate on the longer-term effects of cognitive automation via LLMs on economic research. Let's quickly review the study.

LLMs are a subset of foundation models, which represent the new paradigm in twenty-first-century artificial intelligence. Foundation models are large deep learning models, often with billions of parameters, that receive extensive pre-training on vast amounts of data to establish a base that can later be fine-tuned for various applications. Pre-training consumes massive amounts of computing power and data and uses a technique known as self-supervised learning, which teaches the model the structure of the training data by repeatedly asking it to predict portions that have been held out.
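To make the self-supervised objective concrete, here is a minimal sketch, for illustration only (not code from the paper), of how a token sequence can be turned into next-token prediction examples: each window of preceding tokens serves as context, and the model is trained to predict the token that follows.

```python
def next_token_pairs(tokens, context_size=3):
    """Slide a window over the sequence; the tokens before each position
    form the context, and the token at that position is the target the
    model must learn to predict."""
    pairs = []
    for i in range(context_size, len(tokens)):
        context = tokens[i - context_size:i]
        target = tokens[i]
        pairs.append((context, target))
    return pairs
```

In real pre-training the same idea is applied at enormous scale, with the model's loss measured on how well it predicts each held-out token.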

What sets foundation models, and consequently LLMs, apart from earlier generations of deep learning models is their scale. The deep learning models of most of the 2010s showed potent abilities in narrow applications, such as image recognition, but a significant gap remained between wide-ranging human capabilities and these focused AI systems. With the most recent generation of LLMs exhibiting an expanding range of capabilities, that gap is beginning to narrow.

The following are some of the key uses for LLMs, according to the researchers:

  • Ideation, or the act of generating, choosing, and developing ideas, is one aspect of research for which LLMs are becoming more and more valuable. They can serve as tutors as well as assistants. This demonstrates how LLMs differ from earlier deep learning applications in economics, since they exhibit a kind of inventiveness previously found only in people. Modern LLMs have strong ideation skills, but they also have significant limitations.
  • LLMs are very helpful for brainstorming (or perhaps more appropriately, net-storming) ideas and examples connected to a specific issue because they are trained on massive datasets representing a cross-section of all human knowledge.
  • LLMs are equally adept at presenting arguments in support of a particular point as they are at raising counterarguments, regardless of which side of an argument they are asked to take. This can help combat the confirmation bias present in all human brains.
  • Text generation is an LLM's primary skill. This makes them extremely capable and helpful for various writing-related tasks, such as drafting content from bullet points, changing a text's style, editing it, evaluating it, and creating titles, headlines, and tweets.
  • The ability to transform sloppy bullet points into well-structured, understandable sentences may be one of LLMs’ most useful skills.
  • Editing is another useful ability. LLMs can edit text for grammar, spelling, style, simplicity, or clarity. Non-native speakers who want to improve their writing may find this set of skills particularly helpful.
  • LLMs can assess a text’s style, clarity, or other factors.
  • LLMs are of only limited use for literature searches. Although they typically know the standard references commonly cited in the literature, double-checking any references they offer is still a good idea: when asked for citations, they may hallucinate papers that appear authoritative but do not exist.
  • LLMs can serve as tutors and explain various basic economic topics in a way that is both helpful to students trying to master new material and even to more seasoned academics venturing outside of their field of study.
  • LLMs can help with both small coding tasks and code tutoring. They can create, modify, translate, or debug code snippets from plain-English instructions (or instructions in other natural languages). For data analysis, LLMs can format data, extract data from plain text, classify and score text, extract sentiment, and even simulate human test subjects. In addition, they can act as tutors for unfamiliar libraries, functions, or even programming languages by quickly generating output that shows which libraries and functions are needed for a specific type of operation or which syntactic structures to use in a given language. Perhaps most helpfully, these features can be accessed both through an API and through a web interface. LLMs are also showing early signs of being able to carry out mathematical derivations.
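As an illustration of the API-based workflow described above, here is a minimal sketch of how one might prepare a sentiment-scoring request for an OpenAI-style chat API and parse the reply. The prompt wording and the message format shown are illustrative assumptions, not code from the paper; the actual network call would use whatever client library the chosen provider offers.

```python
def build_sentiment_messages(text):
    """Build a chat-style message list asking an LLM to label sentiment.
    The prompt wording here is illustrative, not from the paper."""
    return [
        {"role": "system",
         "content": "Classify the sentiment of the user's text as "
                    "exactly one word: positive, negative, or neutral."},
        {"role": "user", "content": text},
    ]

def parse_sentiment(reply):
    """Normalize a model's reply into one of three labels,
    falling back to 'unknown' if the reply does not comply."""
    label = reply.strip().lower().rstrip(".")
    return label if label in {"positive", "negative", "neutral"} else "unknown"
```

The same pattern (construct a structured prompt, send it via the API, normalize the free-text reply) underlies most of the data-analysis uses listed above, from classification and scoring to sentiment extraction.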

LLMs have developed into helpful research tools for various tasks, including brainstorming, writing, background research, data analysis, coding, and mathematical derivations. Soon, cognitive automation made possible by LLMs will significantly increase research productivity. As the technology develops, LLMs can be expected to keep getting better at what they do and to need less and less human input, editing, and feedback. We might end up merely endorsing the work of LLMs as they become more sophisticated.


Check out the Paper. All credit for this research goes to the researchers on this project. Also, don't forget to join our 14k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.


Dhanshree Shenwai is a Computer Science Engineer with good experience in FinTech companies covering the Financial, Cards & Payments, and Banking domains, and a keen interest in applications of AI. She is enthusiastic about exploring new technologies and advancements in today's evolving world, making everyone's life easy.
