Can AI Outperform Humans at Creative Thinking Tasks? This Study Provides Insights into the Relationship Between Human and Machine Creativity

While AI has made tremendous progress and has become a valuable tool in many domains, it is not a replacement for humans’ unique qualities and capabilities. The most effective approach, in many cases, involves humans working alongside AI, leveraging each other’s strengths to achieve the best outcomes. There are fundamental differences between human and artificial intelligence, and there are tasks and domains where human intelligence remains superior.

Humans can think creatively, imagine new concepts, and innovate. AI systems are limited by the data and patterns they’ve been trained on and often struggle with truly novel and creative tasks. The question, however, is whether an average human can outperform current AI models.

Researchers compared the creativity of humans (n = 256) with that of three current AI chatbots (ChatGPT-3.5, ChatGPT-4, and Copy.AI) using the alternate uses task (AUT), a divergent thinking task. The AUT is a cognitive method used in psychology and creativity research to assess an individual’s ability to generate creative and novel ideas in response to a specific stimulus. It measures divergent thinking: the capacity to think broadly and generate multiple solutions or ideas from a single problem.

Participants were asked to generate uncommon and creative uses for everyday objects. The AUT consisted of four object prompts: rope, box, pencil, and candle. Human participants were instructed to focus on the quality of their ideas rather than the quantity. Each chatbot was tested in 11 separate sessions, with each of the four object prompts presented once per session.

To evaluate the results, the researchers collected subjective creativity (originality) ratings from six professionally trained human raters. The order in which responses within each object category were presented was randomized separately for each rater. Each rater’s scores were averaged across all the responses that a participant, or a chatbot in a given session, gave for an object, and the final subjective score for each object was obtained by averaging across the six raters.
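The two-step aggregation described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the study; the data layout (a list of score lists, one per rater) is an assumption made for the example.

```python
def final_object_score(ratings_per_rater):
    """Aggregate originality ratings for one object.

    ratings_per_rater: one list of scores per rater, where each list holds
    that rater's score for every response the participant (or chatbot
    session) gave for the object.
    """
    # Step 1: average each rater's scores across all responses to the object.
    per_rater_means = [sum(scores) / len(scores) for scores in ratings_per_rater]
    # Step 2: average across raters to get the final subjective score.
    return sum(per_rater_means) / len(per_rater_means)


# Example with two raters (the study used six) and three responses to "rope".
print(final_object_score([[2.0, 3.0, 4.0], [3.0, 4.0, 5.0]]))  # 3.5
```

Averaging within each rater first ensures that a rater who scored more responses does not carry extra weight in the final score.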

On average, the AI chatbots outperformed the human participants. While human responses included poor-quality ideas, the chatbots generally produced more creative responses. However, the best human ideas still matched or exceeded those of the chatbots. While this study highlights the potential of AI as a tool to enhance creativity, it also underscores the unique and complex nature of human creativity, which may be difficult for AI technology to fully replicate or surpass.

However, AI technology is developing rapidly, and the results may look different in half a year. Based on the present study, the clearest weakness in human performance lies in the relatively high proportion of poor-quality ideas, which were absent from the chatbot responses. This weakness may stem from normal variation in human performance, including failures in associative and executive processes as well as motivational factors.


Check out the Paper. All credit for this research goes to the researchers on this project.

Arshad is an intern at MarktechPost. He is currently pursuing his Integrated MSc in Physics at the Indian Institute of Technology Kharagpur. He believes that understanding things at a fundamental level leads to new discoveries, which in turn drive advancements in technology. He is passionate about understanding nature fundamentally with the help of tools like mathematical models, ML models, and AI.

šŸ Join the Fastest Growing AI Research Newsletter Read by Researchers from Google + NVIDIA + Meta + Stanford + MIT + Microsoft and many others...