The battle over generative A.I. is heating up, with Amazon pledging to invest $4 billion in Anthropic, an OpenAI competitor. That investment could have massive effects on tech professionals who rely on Amazon’s ecosystem to get things done, including (but certainly not limited to) Amazon Web Services (AWS).
Anthropic will use AWS’s customized Trainium and Inferentia chips to train its models, and AWS will serve as the primary cloud provider for the A.I. company’s model development. “Anthropic makes a long-term commitment to provide AWS customers around the world with access to future generations of its foundation models via Amazon Bedrock, AWS’s fully managed service that provides secure access to the industry’s top foundation models,” read Amazon’s press release on the matter. “In addition, Anthropic will provide AWS customers with early access to unique features for model customization and fine-tuning capabilities.”
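For tech pros wondering what that Bedrock access looks like in practice, here's a minimal sketch in Python. It only builds the JSON request body in the shape Anthropic's Claude text-completion models expect; the model ID and the actual `boto3` call are shown as a commented-out illustration, since field names and model IDs are AWS's and Anthropic's to change.

```python
import json

def build_claude_request(user_prompt: str, max_tokens: int = 256) -> str:
    """Build a request body for an Anthropic Claude model on Amazon Bedrock.

    The prompt framing and max_tokens_to_sample field follow Claude's
    text-completion schema; newer model versions may use a different shape.
    """
    payload = {
        "prompt": f"\n\nHuman: {user_prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    }
    return json.dumps(payload)

body = build_claude_request("Summarize our Q3 incident reports.")

# With boto3 installed and AWS credentials configured, the invocation would
# look roughly like this (commented out so the sketch runs without an account;
# the model ID is illustrative):
#
# import boto3
# bedrock = boto3.client("bedrock-runtime")
# response = bedrock.invoke_model(
#     modelId="anthropic.claude-v2",
#     body=body,
#     contentType="application/json",
#     accept="application/json",
# )
# print(json.loads(response["body"].read())["completion"])
```

The point isn't the specific API shape, which will evolve, but that Bedrock exposes these models behind an ordinary AWS SDK call, so they slot into existing AWS-based workflows.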
Right now, tech pros can use generative A.I. for a range of tasks, from writing and debugging code to composing documentation. Investments like Amazon’s in Anthropic could broaden access to highly customized models, opening up new arenas for A.I.-generated workflows and products. For example, a company could pair a “bespoke model” with its own massive datasets to build a chatbot that doesn’t violate user privacy or misuse proprietary data (two widespread fears about the current generation of generative A.I. products).
If you’re a team leader, project manager, or executive, you should be thinking about what customized, powerful A.I. can do for your projects and broader business—both good and bad. The first step is to set a solid A.I. policy, which many companies haven’t done; according to a new survey by The Conference Board (based on feedback from 1,100 U.S. employees), only 26 percent of organizations have a general policy related to the use of generative A.I., while 23 percent have a policy under development.
But what actually goes into a solid A.I. policy? Fortunately, the beginning steps are pretty clear:
- Establish how employees (and contractors) can use A.I.
- Set the parameters for effective, ethical A.I. use
- Decide on data privacy and security in the context of A.I.
- Create a system of audits and reviews to ensure everything’s working
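The last two steps above can even be sketched in code. Here's a minimal, hypothetical illustration (the names `redact`, `audited_prompt`, and `audit_log` are made up for this example, not from any real product): a wrapper that strips obvious sensitive data from prompts before they reach a model and records every call for later review.

```python
import re
from datetime import datetime, timezone

# Hypothetical sketch: redact obvious sensitive data (email addresses here,
# as one example pattern) before a prompt leaves the company, and record
# every call so auditors can review A.I. usage later.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

audit_log = []  # in practice, durable and access-controlled storage

def redact(text: str) -> str:
    """Replace email addresses with a placeholder token."""
    return EMAIL_RE.sub("[REDACTED]", text)

def audited_prompt(user: str, prompt: str) -> str:
    """Redact a prompt and log who sent what, and when, for later audits."""
    clean = redact(prompt)
    audit_log.append({
        "user": user,
        "prompt": clean,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return clean  # this is what would actually be sent to the model

safe = audited_prompt("jdoe", "Reply to alice@example.com about the outage.")
```

A real policy would cover far more than email addresses, but the shape is the same: enforce the rules in the tooling, not just in a document, so the audit trail exists by default.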
An effective A.I. policy could supercharge your A.I.-powered workflows while keeping processes safe and privacy-centric; it could also free up your tech pros to pursue interesting and profitable work. That’s far better than having no policy at all, which could lead to chaos, especially as A.I. tools become ever more powerful.