If you’re a developer interested in artificial intelligence (A.I.), you’ve probably heard of GPT-3, the natural-language model that attempts to replicate human writing. Trained on a huge corpus of human-produced text, GPT-3 can write in a way that’s remarkably human-like, at least in short snippets. Now more companies are trying to incorporate the technology into their developer-centric products.
For example, Microsoft used its most recent BUILD developer conference to announce the incorporation of several A.I. models built by OpenAI, including GPT-3, into its Azure cloud platform. The package also features Codex, which does its best to translate natural language into code—imagine a future where even non-techie employees can write out a series of commands and have the system turn those into an app or service.
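To make that concrete, here’s a minimal sketch of what asking a Codex-style model to translate plain English into code might look like. It assumes the pre-1.0 `openai` Python package pointed at an Azure OpenAI resource; the endpoint, key, API version, and deployment name are placeholders rather than values from Microsoft’s announcement.

```python
# Minimal sketch: asking a Codex-style model to turn plain English into code.
# Assumes the (pre-1.0) openai Python package configured for Azure OpenAI;
# the endpoint, key, and deployment name below are placeholders.
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"  # placeholder
openai.api_version = "2022-12-01"
openai.api_key = "YOUR-API-KEY"  # placeholder

# A natural-language request, written as comments the model completes.
prompt = (
    "# Python 3\n"
    "# Write a function that takes a list of order totals and returns\n"
    "# the average, ignoring any negative values.\n"
)

response = openai.Completion.create(
    engine="my-codex-deployment",  # placeholder deployment name
    prompt=prompt,
    max_tokens=150,
    temperature=0,  # keep code generation as deterministic as possible
)

print(response["choices"][0]["text"])
```

In practice, the quality of what comes back depends heavily on how precisely the request is phrased, which is exactly why the “non-techie writes an app” scenario remains aspirational for now.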
Microsoft has made it clear that it wants customers to use these new A.I. tools for processes such as summarizing customer sentiment. “Azure OpenAI Service is enabling customers across industries from health care to financial services to manufacturing to quickly perform an array of tasks. Innovations include generating unique content for customers, summarizing and classifying customer feedback, and extracting text from medical records to streamline billing,” read a note on the company’s corporate blog. “The most common uses have been writing assistance, translating natural language to code and gaining data insights through search, entity extraction, sentiment and classification.”
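Along the lines of those use cases, here’s a hedged sketch of summarizing and classifying customer feedback with a GPT-3-style completion model. The reviews, prompt wording, and deployment name are invented for illustration; the client setup mirrors the previous sketch.

```python
# Minimal sketch: summarizing customer sentiment with a GPT-3-style model,
# along the lines of the use cases Microsoft describes. Placeholders again.
import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"  # placeholder
openai.api_version = "2022-12-01"
openai.api_key = "YOUR-API-KEY"  # placeholder

reviews = [
    "The checkout process was confusing and I almost gave up.",
    "Fast shipping, great packaging, would order again.",
    "Support took three days to answer a simple question.",
]

# One prompt that asks for both a summary and a classification of complaints.
prompt = (
    "Summarize the overall customer sentiment in one sentence, then list "
    "the main complaints:\n\n" + "\n".join(f"- {r}" for r in reviews)
)

response = openai.Completion.create(
    engine="my-gpt3-deployment",  # placeholder deployment name
    prompt=prompt,
    max_tokens=120,
    temperature=0.2,  # low temperature for consistent, factual-sounding output
)

print(response["choices"][0]["text"].strip())
```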
Microsoft also seems to recognize the dangers inherent in letting an A.I. take over a process without any human oversight. “Microsoft has made significant investments to help guard against abuse and unintended harm, which includes requiring applicants to show well-defined use cases and incorporate Microsoft’s principles for responsible AI use,” the blog added. “One important way CarMax [a client] and other customers meet the criteria is by having humans in the loop to make sure model outputs are accurate and up to content standards before they’re published.”
This is a necessary feature, because as anyone who’s ever moderated a platform’s commenting or feedback system can tell you, humans are very, very good at getting around the systems designed to block trolls and abusive comments. In theory, having humans in the loop will allow companies to tweak their models and workflows to make their A.I. deployment more efficient.
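What does “humans in the loop” look like in code? Here’s a minimal sketch of a review gate where nothing the model writes is published until a person signs off. The `Draft`, `review`, and `publish` names are hypothetical stand-ins for whatever a real content workflow uses.

```python
# Minimal sketch of a "human in the loop" gate: model output is never
# published until a reviewer approves it. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Draft:
    text: str
    approved: bool = False
    reviewer_notes: list = field(default_factory=list)

def review(draft: Draft, reviewer_ok: bool, note: str = "") -> Draft:
    """A human reviewer signs off (or doesn't) on a model-generated draft."""
    draft.approved = reviewer_ok
    if note:
        draft.reviewer_notes.append(note)
    return draft

def publish(draft: Draft) -> None:
    # The gate: refuse to ship anything a human hasn't checked.
    if not draft.approved:
        raise ValueError("Refusing to publish unreviewed model output.")
    print(f"Published: {draft.text}")

# Model output enters the queue; a human decides whether it ships.
draft = Draft(text="AI-generated product description ...")
draft = review(draft, reviewer_ok=True, note="Checked claims against spec.")
publish(draft)
```

The design choice worth noting: the check lives in `publish`, not in the reviewer’s goodwill, so an unreviewed draft can’t slip through by accident.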
Although GPT-3 is the latest generation of a system that involves massive datasets and sophisticated algorithms, it still has a long way to go. It’s very capable of writing text that sounds human, but the system doesn’t understand underlying context (or even how the world works); for the foreseeable future, humans will need to review GPT-3 content to make sure it’s not spitting out eccentric Mad Libs.
It’s a similar situation with any A.I. platform that processes text in some way. Many of these systems rely heavily on analyzing the frequency and placement of certain keywords while missing many of the nuances and quirks of human speech. Google’s new, automated job-interview tool, for instance, is very good at telling you whether your answers to hypothetical job-interview questions contain the right words, but it can’t tell you whether you’re providing the right level of detail and context for a human hiring manager.
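To see why keyword counting falls short, here’s a toy scorer in the spirit of such tools (the keyword list and scoring scheme are invented for illustration, not Google’s actual method). A hollow answer stuffed with the right words scores as well as, or better than, a substantive one.

```python
# Toy sketch of keyword-frequency scoring, and why it misses nuance:
# the hollow answer below scores higher despite saying nothing.
# The keyword list and scoring scheme are invented for illustration.
from collections import Counter
import re

KEYWORDS = {"stakeholders", "deadline", "prioritized", "communicated"}

def keyword_score(answer: str) -> float:
    # Count word occurrences, then reward every keyword hit.
    words = Counter(re.findall(r"[a-z']+", answer.lower()))
    return sum(words[k] for k in KEYWORDS) / len(KEYWORDS)

detailed = ("I prioritized tasks by deadline, communicated the tradeoffs to "
            "stakeholders, and shipped the critical fix first.")
hollow = ("Stakeholders, deadline, prioritized, communicated. Stakeholders "
          "again, just to be safe.")

print(keyword_score(detailed))  # 1.0 -- scores well, and deserves to
print(keyword_score(hollow))    # 1.25 -- scores even higher, says nothing
```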
But A.I. is evolving rapidly, and chances are good that all these tools will only become more sophisticated in coming years. While there isn’t a huge number of A.I.-related jobs at the moment (at least in comparison to, say, the number of software developer jobs), you can expect that number to radically increase by the end of the decade. Even if you’re not that interested in A.I., you’ll probably end up using A.I.-infused tools sooner rather than later.