Microsoft recently announced the launch of a new service that makes AI models from OpenAI available on Microsoft's Azure platform. The company specifically mentions GPT-3, OpenAI's groundbreaking language model, which, under the right circumstances, can produce text with humanlike accuracy and fluency.
GPT-3 is the best-known example of a new generation of AI language models. These systems essentially work as autocomplete: feed them text, whether an email or a poem, and they will try to finish what you started. Beyond autocomplete, their capacity for parsing language lets them summarize papers, assess the sentiment of a text, and generate project ideas, tasks Microsoft says its new Azure OpenAI Service will help customers accomplish more easily. By offering the OpenAI API through Azure, Microsoft is making it possible for companies of all kinds to deploy GPT-3 with legal and security assurances.
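To make the "deploy GPT-3 through Azure" idea concrete, here is a minimal sketch of what assembling a text-completion request might look like. The parameter names (prompt, max_tokens, temperature) follow OpenAI's public completion API; the resource name, deployment name, and endpoint shape are hypothetical placeholders, and the request is assembled but never sent so the sketch stays self-contained.

```python
import json

# Hypothetical values: this resource name and deployment are placeholders,
# not a real endpoint.
AZURE_RESOURCE = "my-resource"
DEPLOYMENT = "my-gpt3-deployment"

def build_completion_call(prompt, api_key, max_tokens=64, temperature=0.7):
    """Assemble the URL, headers, and JSON body for a completion request.

    Nothing is sent over the network; this only shows the shape of the call.
    """
    url = (f"https://{AZURE_RESOURCE}.openai.azure.com/openai/deployments/"
           f"{DEPLOYMENT}/completions")
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    body = {
        "prompt": prompt,            # the text the model should continue
        "max_tokens": max_tokens,    # cap on how much text is generated
        "temperature": temperature,  # higher values give more varied output
    }
    return url, headers, json.dumps(body)

url, headers, body = build_completion_call(
    "Summarize: The meeting covered Q3 revenue and hiring plans.",
    api_key="YOUR-KEY")
```

The same payload shape covers the autocomplete, summarization, and sentiment use cases described above; only the prompt changes.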
GPT-3 is already being used for this type of work through OpenAI's own API (which still has a waitlist). Startups like Copy.ai build products on top of it, while more exotic applications include using GPT-3 to power a choose-your-own-adventure text game and chatbots pretending to be fictional TikTok influencers. OpenAI will continue to sell its own GPT-3 API directly, but Microsoft's repackaging of the system will target larger enterprises that require more significant support and security. That means "access management, private networking, data handling safeguards, and expanding capacity" will be included in the service.
It's unclear how much this will cut into OpenAI's business, although the two firms already work closely together. In 2019, Microsoft made a $1 billion investment in OpenAI and became the company's exclusive cloud provider (a critical relationship in the compute-intensive realm of AI research). Then, in September 2020, Microsoft purchased an exclusive license to incorporate GPT-3 directly into its own products. So far, those efforts have focused on GPT-3's code-generation capabilities, with Microsoft building autocomplete features into its PowerApps suite and Visual Studio Code editor.
Given the huge issues associated with massive AI language models like GPT-3, these restricted applications make sense. To begin with, much of what these systems produce is garbage, and human curation and control are needed to separate the good output from the bad. Second, these models have been shown, time and time again, to absorb biases found in their training data, from sexism to Islamophobia: they are more prone to link Muslims with violence, for example, and to adhere to antiquated gender norms. To put it another way, if anyone mucks about with these models without filtering them, they'll shortly say something terrible.
Microsoft is fully aware of the dangers of releasing such systems to the broader public. As a result, it is attempting to prevent these issues with GPT-3 by implementing additional protections: granting access to the tool by invitation only, vetting customers' use cases, and providing "filtering and monitoring tools to help prevent inappropriate outputs or unintended uses of the service." However, it is unclear whether these constraints will be sufficient. When The Verge inquired how the company's screening mechanisms function and whether there was any proof they minimize incorrect GPT-3 outputs, the company skirted the topic.
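Microsoft has not disclosed how its filtering and monitoring tools actually work, so the following is purely illustrative: a naive blocklist filter showing the general shape of post-hoc output screening, where flagged completions are routed to human review instead of being shown to users. Real moderation systems typically rely on trained classifiers rather than keyword lists, and the blocklisted terms here are stand-in placeholders.

```python
# Hypothetical stand-ins for terms a real system would flag.
BLOCKLIST = {"slur_example", "threat_example"}

def filter_output(generated_text):
    """Return (text, flagged) for a model completion.

    A flagged completion would be held back for human review rather than
    shown to the end user. This naive word-match approach is only a sketch
    of the idea, not how production moderation works.
    """
    words = {w.strip(".,!?").lower() for w in generated_text.split()}
    flagged = bool(words & BLOCKLIST)
    return generated_text, flagged

text, flagged = filter_output("This is a harmless completion.")
```

A keyword blocklist like this is easy to evade and misses context entirely, which is exactly why the effectiveness of any real filtering layer is hard to judge without evidence.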
Because Microsoft's GPT-3 filters are untested, it is hard to say how well they will work. Having humans double-check the output of large language models makes the systems safer to deploy, but that oversight also negates some of the anticipated gains in production time and cost-effectiveness. For now, access to the Azure OpenAI Service will be invitation-only, limited to customers Microsoft judges to be planning responsible use of the technology.