April 30, 2026
Amazon’s OpenAI gambit signals a new phase in the cloud wars — one where exclusivity no longer applies

Amazon Web Services on Tuesday launched one of the most consequential enterprise AI plays in the company's 20-year history, simultaneously bringing OpenAI's most powerful models to its Bedrock platform, unveiling a new agentic developer framework, releasing a desktop AI productivity tool called Amazon Quick, and expanding its Amazon Connect service from a single contact-center product into a family of four agentic AI solutions targeting supply chains, hiring, healthcare, and customer experience.

The announcements, made at a live event in San Francisco titled "What's Next with AWS," landed just 24 hours after OpenAI and Microsoft publicly restructured their exclusive cloud partnership — a move that, for the first time, freed OpenAI to distribute all of its products across rival cloud providers. AWS CEO Matt Garman called it "a huge partnership" and said customers have been asking for OpenAI models inside AWS "from the very early days."

The timing was no accident. Amazon CEO Andy Jassy had flagged the Microsoft-OpenAI restructuring as "very interesting" in a post on X the day prior, promising more details on Tuesday. What followed was a sweeping set of launches that together represent AWS's bid to become the definitive infrastructure layer for the agentic AI era — one where intelligent software agents don't just answer questions but take autonomous action inside enterprise workflows.

OpenAI's most capable models arrive on Amazon Bedrock for the first time, reshaping the cloud AI marketplace

The centerpiece announcement: OpenAI's latest models are now available through Amazon Bedrock in limited preview, with general availability expected within weeks. AWS confirmed that GPT-5.4 is available immediately in limited preview, with GPT-5.5 arriving shortly thereafter. In an exclusive interview with VentureBeat at the event, Anthony Liguori, Vice President and Distinguished Engineer at AWS, described the significance of the moment.
"We announced a partnership about eight weeks ago centered around this idea of the stateful runtime environment, the SRE APIs," Liguori said. "However, today we announced the availability of all of OpenAI's frontier models in Amazon Bedrock available via both the stateless APIs — these are the APIs that are commonly used, like chat completions and responses."

Liguori characterized the stateless API availability as particularly critical because it removes migration friction. "Customers can take their existing workloads today and just start using AWS right off the bat," he said. "They don't have to write any new software, develop any new things. I think that's one of the most exciting announcements that came out today."

The integration means AWS customers can now evaluate and deploy OpenAI models alongside offerings from Anthropic, Meta, Mistral, Cohere, and Amazon's own models — all through Bedrock's unified security, governance, and cost controls. For enterprise procurement teams, this collapses what had been a fragmented multi-vendor landscape into a single pane of glass.

How a $50 billion Amazon investment and a messy Microsoft breakup cleared the way for Tuesday's deal

The path to Tuesday's announcement was anything but smooth. As TechCrunch reported, OpenAI's earlier $50 billion deal with Amazon, announced in February, had created a legal tangle with Microsoft. Under the original Microsoft-OpenAI agreement, Microsoft retained exclusive rights to OpenAI products accessed through APIs, which appeared to conflict directly with OpenAI's promise to give AWS exclusive hosting rights for its new Frontier agent-building tool. Microsoft had publicly pushed back at the time, stating that "Azure remains the exclusive cloud provider of stateless OpenAI APIs." The Financial Times reported that Microsoft even contemplated legal action.
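Liguori's claim that existing workloads migrate without new software rests on Bedrock presenting every hosted model behind one request shape. The sketch below illustrates that idea using the structure of Bedrock's Converse API; the model identifiers are placeholders, since AWS had not published Bedrock IDs for the OpenAI models at the event, and in practice the request dict would be handed to a `boto3` `bedrock-runtime` client rather than printed.

```python
import json

# Placeholder model IDs -- the real Bedrock identifiers for the OpenAI
# models were not published at the event, so these are hypothetical.
OPENAI_MODEL_ID = "openai.gpt-5-4"
ANTHROPIC_MODEL_ID = "anthropic.claude-sonnet"

def converse_request(model_id: str, user_text: str) -> dict:
    """Build a request in the shape of Bedrock's unified Converse API.

    The same message structure works for any Bedrock-hosted model, which
    is what collapses a multi-vendor landscape into one interface.
    """
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": user_text}]},
        ],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

# Swapping providers is a one-line change of model ID; the calling code,
# IAM policies, and cost controls stay the same. In practice:
#   boto3.client("bedrock-runtime").converse(**request)
for model in (OPENAI_MODEL_ID, ANTHROPIC_MODEL_ID):
    request = converse_request(model, "Summarize open supply-chain exceptions.")
    print(json.dumps(request)[:60], "...")
```

The single-pane-of-glass argument is visible here: evaluation across vendors becomes a loop over model IDs rather than an integration project per provider.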
Monday's restructured deal — which replaced Microsoft's open-ended exclusivity with a nonexclusive license running through 2032 — swept those legal obstacles aside. For AWS, the resolution means its multi-billion-dollar investment in OpenAI can now fully bear fruit. As CNBC reported, OpenAI's revenue chief Denise Dresser had told employees in a memo that the Microsoft relationship "has also limited our ability to meet enterprises where they are — for many that's Bedrock."

At the San Francisco event, Dresser framed the moment as a turning point. "They're no longer in the mindset of experimentation and pilots," she said of enterprise customers. "They really want to go full enterprise wide, and they understand that to do that, they need to have powerful models. But even more importantly, they want those models in a trusted environment."

OpenAI CEO Sam Altman, who was unable to attend in person due to his ongoing court case against Elon Musk across the Bay Bridge in Oakland, sent a recorded video message. "We are co-developing an agent platform from the ground up, deeply integrated with AWS services and powered by OpenAI's most advanced models and tools," Altman said, "so that customers can build and run powerful agents in their own environment without worrying about the underlying plumbing."

Inside Bedrock Managed Agents, the reinforcement learning-trained 'harness' that AWS says will define the agentic era

Beyond raw model access, AWS launched Amazon Bedrock Managed Agents powered by OpenAI — a system that combines OpenAI's frontier models with its proprietary "harness," the agentic execution framework that powers products like Codex. This is where Liguori's technical analysis was most revealing. He explained that the harness concept represents a shift in how models are trained and deployed for agentic work. "When you think about an agentic platform, there's really two components," Liguori told VentureBeat.
"One is the harness — the actual logic that will execute tool calls for the model, determine when to compact the context, all of those sorts of things — and then the model itself."

Critically, Liguori argued, the best agentic performance comes when models are trained specifically against their harness through reinforcement learning — not merely prompted to use tools at inference time. "You can give a model a whole lot of instructions and a set of tools, and it will be able to use it most of the time," he said. "But when you really train the model on a specific set of tools, a specific style of operations, it's just like drilling plays over and over again — the model builds muscle memory for using that harness."
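The two-part split Liguori describes can be sketched in miniature: a harness that owns tool execution and context compaction, and a model that only decides the next action. This is an illustrative toy, not OpenAI's proprietary harness — the stub model scripts its decisions, and compaction simply truncates the transcript where a real harness would summarize it.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Harness:
    """Toy agent harness: executes tool calls, manages the transcript."""
    tools: dict[str, Callable[[str], str]]
    max_context: int = 6  # compact once the transcript grows past this
    transcript: list[str] = field(default_factory=list)

    def compact(self) -> None:
        # A real harness would summarize; this sketch keeps recent turns.
        if len(self.transcript) > self.max_context:
            dropped = len(self.transcript) - self.max_context
            self.transcript = ([f"[{dropped} earlier turns compacted]"]
                               + self.transcript[-self.max_context:])

    def run(self, model, task: str, max_steps: int = 10) -> str:
        self.transcript.append(f"task: {task}")
        for _ in range(max_steps):
            action = model(self.transcript)    # model chooses the next step
            if action["type"] == "final":
                return action["text"]
            tool = self.tools[action["tool"]]  # harness, not model, executes
            self.transcript.append(f"{action['tool']} -> {tool(action['arg'])}")
            self.compact()
        return "step budget exhausted"

# Stub "model": look something up once, then answer. A production model
# would be trained via RL against this exact loop, per Liguori's argument.
def stub_model(transcript: list[str]) -> dict:
    if not any(t.startswith("lookup") for t in transcript):
        return {"type": "tool", "tool": "lookup", "arg": "region=us-east-1"}
    return {"type": "final", "text": "3 open incidents in us-east-1"}

harness = Harness(tools={"lookup": lambda arg: f"3 open incidents ({arg})"})
print(harness.run(stub_model, "check incident status"))
```

The separation matters for Liguori's reinforcement-learning point: because the harness fixes the tool-call format and compaction behavior, a model can be trained against that exact loop rather than prompted into it at inference time.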
