This AI Research Proposes a Framework to Model User Interactions with LLMs using Norman’s Seven Stages of Action

Intelligent writing assistants have been investigated extensively across many writing objectives and activities. Recent advances have centered on Large Language Models (LLMs), which let individuals produce material by stating their intent in a prompt. Important developments in LLMs such as ChatGPT, and their use in mainstream products, highlight their potential as writing assistants. However, the human-computer interface of these assistants reveals significant usability issues, including the coherence and fluency of model output, trustworthiness, ownership of the created material, and predictability of model behavior.

While earlier publications have studied some of the interactional components of writing assistants, there has not yet been a focused attempt to support end-to-end writing goals and approach these interactions from a usability perspective. These problems frequently leave users struggling to use the tools effectively to accomplish their writing goals, and occasionally cause them to give up entirely. Researchers from McGill University and Université de Montréal examine the interface design of LLM-supported intelligent writing assistants, emphasizing human activities and drawing on prior research and design literature. They propose Norman's seven stages of action as a design paradigm for LLM-supported writing assistants and analyze its usability implications.

Figure 1: Norman's Seven Stages of Action as explored in relation to interactions with the LLM.

Norman's seven stages of action is a cyclical cognitive model frequently used to understand users' thought processes and the physical actions that accompany them; it is primarily used to inform system interface design. The seven stages are (a) goal formation, (b) plan, (c) specify, (d) perform, (e) perceive, (f) interpret, and (g) compare, as shown in Figure 1. The plan, specify, and perform stages make up the execution phase of the interaction, while the perceive, interpret, and compare stages make up the evaluation phase. The user's interactions rest on a mental model of the system, built up from prior assumptions and experience. The researchers assert that this paradigm enables the creation and assessment of interfaces that facilitate fine-grained interactions with LLMs at each stage.
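As a rough illustration (this sketch is ours, not from the paper), the seven stages and their grouping into execution and evaluation phases can be modeled in a few lines of Python:

```python
from enum import Enum

class Stage(Enum):
    """Norman's seven stages of action, applied to an LLM interaction."""
    FORM_GOAL = "form the goal"
    PLAN = "plan the action"
    SPECIFY = "specify the action sequence"
    PERFORM = "perform the action"
    PERCEIVE = "perceive the system state"
    INTERPRET = "interpret the perception"
    COMPARE = "compare the outcome with the goal"

# Plan/specify/perform form the execution phase;
# perceive/interpret/compare form the evaluation phase.
EXECUTION = [Stage.PLAN, Stage.SPECIFY, Stage.PERFORM]
EVALUATION = [Stage.PERCEIVE, Stage.INTERPRET, Stage.COMPARE]

def one_cycle(goal: str) -> list[str]:
    """Walk one full cycle of the action model for a given goal."""
    steps = [f"goal: {goal}"]
    for stage in EXECUTION + EVALUATION:
        steps.append(stage.value)
    return steps
```

Because the model is cyclical, `one_cycle` would be called repeatedly: the compare stage feeds back into a revised goal for the next iteration.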

They suggest that effective LLM-based writing assistance must answer the questions relevant to each stage to inform the design and give the user the essential capabilities. To clarify the point, they provide an example heavily influenced by their initial effort to use OpenAI's Codex to write software tutorials. In a typical interaction, the user begins by deciding on a primary objective, such as creating a tutorial on how to plot data points with matplotlib. They then break the goal down into manageable components that help them determine how to approach the writing assistant.

The main objective, for instance, may be broken down into four subgoals:

  • Authoring tutorial sections
  • Providing suitable instructions for library installation in various contexts
  • Producing and explaining code snippets
  • Increasing the tutorial’s readability

Even though each step has a narrower scope and may come after several cycles of the action framework, it can also be treated as a sub-goal in its own right. When users ask the writing assistant for help, they specify and then perform their request through the interface, for example: “Write a code snippet to plot a scatter plot using matplotlib given the data points in a Python list and provide an explanation of the code.”
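As an illustration of what such an assistant might return for that prompt, here is a minimal, hand-written sketch (not actual Codex output from the paper) that plots a scatter plot from a Python list of points:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

# Data points supplied as a Python list of (x, y) pairs.
points = [(1, 2), (2, 4), (3, 1), (4, 5)]
xs, ys = zip(*points)  # split into x- and y-coordinates

fig, ax = plt.subplots()
ax.scatter(xs, ys)  # one marker per (x, y) pair
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("Scatter plot of the data points")
fig.savefig("scatter.png")  # write the figure to disk
```

Per the prompt, the assistant would also be expected to produce a prose explanation of the snippet alongside the code, which the user then evaluates against their goal.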

The perform stage can include various interface capabilities for changing and updating prompts, while the specify stage may offer mechanisms that recommend alternative prompts to send to the model. The execution phase is shaped by the users' prior conceptual models and by their task and domain expertise. When the writing assistant produces an output, the user reads and interprets it, adjusting their preexisting mental model according to their knowledge and skill. For instance, a user with substantial matplotlib experience is better placed to spot unexpected material or mistakes in the generated code. It may also be necessary to run existing unit tests, or to execute the produced code snippet in an IDE, to compare the results against expectations.
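The evaluation step described above can be sketched mechanically. The following is a minimal, illustrative Python check (our own example, with a hypothetical `run_and_check` helper, not a tool from the paper) that first verifies a generated snippet parses and then executes it to compare selected results against expectations, standing in for running real unit tests:

```python
import ast

def looks_valid(generated: str) -> bool:
    """Cheap first check on an LLM-generated snippet: does it parse?"""
    try:
        ast.parse(generated)
        return True
    except SyntaxError:
        return False

def run_and_check(generated: str, expectations: dict) -> bool:
    """Execute the snippet in a fresh namespace and compare selected
    variables against expected values (a stand-in for unit tests).
    Caution: exec() should only be used on trusted or sandboxed code."""
    namespace: dict = {}
    exec(generated, namespace)
    return all(namespace.get(k) == v for k, v in expectations.items())

snippet = "total = sum([1, 2, 3])"
assert looks_valid(snippet)
assert run_and_check(snippet, {"total": 6})
```

Checks like these map onto the perceive and compare stages: the user observes what the generated code actually does and compares the outcome with the original goal.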

They contend that applying Norman's seven stages of action as a paradigm for investigating user behavior with LLM-based writing assistants offers a useful foundation for understanding and designing fine-grained interactions across the goal-formation, execution, and evaluation phases. By posing questions pertinent to each stage, designers can pinpoint the important interactions and direct the design of a writing assistant for a task such as tutorial authoring. Analyzing tools and their features along the interaction design dimensions the framework outlines can address specific usability issues in LLM-based writing tools. More ambitiously, the authors point to understudied areas of human-LLM interaction, such as aligning with user preferences, designing effective prompts, and the explainability and interpretability of model outputs.

Check out the Paper.


Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence from the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is image processing, and he is passionate about building solutions around it. He loves to connect with people and collaborate on interesting projects.
