This AI Research Presents a Large Language Model that can Answer Philosophical Questions in the Voice of a Specific Philosopher

Since artificial intelligence has exploded in recent years, models can now outperform humans in a wide range of tasks, from playing games like chess and Go to practical feats like predicting protein structures and speeding up matrix multiplication. Large language models have benefited enormously from these advances, which have enabled sophisticated information and dialogue systems. ChatGPT is one of the clearest examples of how well language models can now compose documents, converse with humans, and answer questions.

Another recent research question that has intrigued academics is whether AI can likewise write innovative and clever philosophical prose. Expert-level professional philosophy is generally thought to require a form of competence and knowledge that existing AI models still lack. It would therefore be fascinating to discover whether large language models could be taught to write philosophical texts that are virtually indistinguishable from those written by actual philosophers. To address this question, researchers from the University of California, Riverside, the École Normale Supérieure (ENS) in Paris, and Ludwig-Maximilians-Universität München built a large language model that can respond to philosophical queries in a manner very similar to that of a particular philosopher. The group fine-tuned OpenAI’s GPT-3 language model on the work of philosopher Daniel C. Dennett and concluded that the model could produce responses that closely mirror the human philosopher’s answers.

GPT-3, the third-generation Generative Pre-Trained Transformer, is an autoregressive language model that uses deep learning to generate text. At its core, the model applies statistical patterns learned from a massive corpus of text to predict the next word in a sentence given its preceding context. The researchers fine-tuned GPT-3 on Dennett’s earlier writings so that, when predicting the next word, the model gives the philosopher’s characteristic word-usage patterns more weight than other patterns.
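The core idea of next-word prediction from context can be illustrated with a deliberately tiny statistical model. The sketch below is not GPT-3 (which uses a deep neural network over billions of tokens); it is a hypothetical bigram counter over a made-up mini-corpus, showing how "predict the next word from its previous context" works at the simplest level. Fine-tuning on a specific author is, loosely, like re-counting these statistics with that author's texts weighted more heavily.

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus for illustration only; a real model trains on
# billions of tokens and uses a neural network, not raw bigram counts.
corpus = (
    "consciousness is an illusion . "
    "consciousness is a process . "
    "the mind is a process ."
).split()

# Count bigram frequencies: how often each word follows a given context word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word after `word`."""
    return bigrams[word].most_common(1)[0][0]

# "a" follows "is" twice in the corpus, "an" only once, so "a" wins.
print(predict_next("is"))  # -> a
```

A real autoregressive model conditions on the entire preceding context rather than a single word, and represents the prediction as a probability distribution over the whole vocabulary, but the objective is the same: pick the likeliest continuation.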

The researchers then evaluated the fine-tuned model by asking it questions and checking whether its answers were something the actual philosopher could have given. They posed ten philosophical questions to Dennett, posed the same questions to their language model, and collected four machine responses per question without cherry-picking, that is, without selecting only the best outputs. They then asked 425 human participants to distinguish Dennett’s responses from those generated by the machine. Strikingly, expert philosophers and readers of philosophy blogs correctly identified Dennett’s responses only about 50% of the time, while average participants with little to no philosophical background did so only about 20% of the time. These findings imply that a fine-tuned GPT-3 model can come surprisingly close to speaking in the voice of a particular philosopher.
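The 20% figure for non-expert participants is worth unpacking: with one genuine Dennett answer shown alongside four machine-generated ones per question, a participant guessing uniformly at random would pick the real answer one time in five. The sketch below is a hypothetical reconstruction of that scoring setup (the function names and counts are assumptions, not the authors' code), showing how identification accuracy compares against the chance baseline.

```python
# Hypothetical reconstruction of the guessing task's scoring, assuming
# each question shows 1 human answer alongside 4 machine answers.
N_OPTIONS = 5      # 1 Dennett answer + 4 GPT-3 answers per question
N_QUESTIONS = 10   # ten philosophical questions in the study

def accuracy(guesses, correct_answers):
    """Fraction of questions where the participant picked Dennett's answer."""
    hits = sum(g == a for g, a in zip(guesses, correct_answers))
    return hits / len(correct_answers)

# Chance baseline: picking uniformly among the five options per question.
chance = 1 / N_OPTIONS
print(chance)  # -> 0.2

# Example: a participant who picks Dennett's answer on 5 of 10 questions
# scores 0.5, roughly the expert performance reported in the study.
correct_answers = [0] * N_QUESTIONS
guesses = [0, 0, 0, 0, 0, 1, 2, 3, 4, 1]
print(accuracy(guesses, correct_answers))  # -> 0.5
```

Under this framing, the non-experts' roughly 20% performance is indistinguishable from random guessing, while even experts landed only halfway between chance and perfect identification.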

Even though the language model delivered impressive results, there is still room for improvement. The team intends to develop the model further and apply it to more real-world scenarios, and they are investigating the potential of turning it into a tool that would be useful to philosophers and historians.

Check out the Paper and Reference Article. All credit for this research goes to the researchers on this project.
