
Anthropic on Tuesday unveiled a suite of updates to its Claude Managed Agents platform at its second annual Code with Claude developer conference in San Francisco, introducing a new capability called "dreaming" that lets AI agents learn from their own past sessions and improve over time, a step toward the kind of self-correcting, self-improving AI systems that enterprises have demanded before trusting agents with production workloads.

The company also moved two previously experimental features, outcomes and multi-agent orchestration, from research preview into public beta, making them broadly available to developers building on the Claude platform. Together, the three features address what Anthropic says are the hardest problems in running AI agents at scale: keeping them accurate, helping them learn, and preventing them from becoming bottlenecks on complex, multi-step work.

Early adopters are already reporting significant results. Legal AI company Harvey saw task completion rates increase roughly 6x after implementing dreaming. Medical document review company Wisedocs cut its document review time by 50% using outcomes. And Netflix is now processing logs from hundreds of builds simultaneously using multi-agent orchestration.

The announcements come at a moment of extraordinary momentum for Anthropic. CEO Dario Amodei disclosed during a fireside chat at the conference that the company's growth has outpaced even its own aggressive internal projections. In the first quarter of 2026, Anthropic saw what Amodei described as 80x annualized growth in revenue and usage, far exceeding the 10x annual growth the company had planned for. API volume on the Claude platform is up nearly 70x year over year, and the average developer using Claude Code now spends 20 hours per week working with the tool.

"We tried to plan very well for a world of 10x growth per year," Amodei said. "And yet we saw 80x. And so that is the reason we have had difficulties with compute."
How Anthropic's dreaming feature teaches AI agents to learn from their own history

Dreaming is the most novel of the three features and the one Anthropic is most eager to distinguish from conventional memory systems. While the company launched agent memory earlier this year, allowing Claude to retain preferences and context within and across individual sessions, dreaming works at a higher level of abstraction. It is a scheduled process that reviews an agent's past sessions and memory stores, extracts patterns across them, and curates those memories so agents improve over time. It surfaces insights that no single agent session could see on its own: recurring mistakes, workflows that multiple agents converge on independently, and preferences shared across a team of agents.

Alex Albert, who leads research product management at Anthropic, explained the concept in an interview at the conference. He described dreaming as analogous to how people within organizations create skills after working through a task. "They might do a workflow with Claude, and at the end of that workflow, after they've iterated and zigzagged a little bit, they want to record that path from A to B," Albert said. "A very similar thing is happening with dreaming — instead of you manually creating the skill from your experience working with Claude, the model is doing it, so it has that same context for a future session."

Crucially, dreaming does not modify the underlying model weights. "We're not changing the model itself through dreaming — it's not doing updates to the weights or anything like that," Albert said. Instead, the agent writes learnings as plain-text notes and structured "playbooks" that future sessions can reference, making the entire process observable and auditable by humans.
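Anthropic has not published an API for dreaming, but the mechanism described above — consolidating plain-text notes from many sessions into a curated playbook, with no weight updates — can be illustrated with a small sketch. The `dream` helper below is entirely hypothetical; it stands in for the pattern extraction a model would actually perform, promoting only lessons that recur across independent sessions:

```python
from collections import Counter

def dream(sessions: list[list[str]], min_sessions: int = 2) -> str:
    """Review past session notes and curate cross-session lessons.

    Each inner list holds the plain-text notes one agent session left
    behind. A lesson is promoted into the playbook only when it recurs
    in at least `min_sessions` independent sessions -- a toy stand-in
    for the model-driven pattern extraction in the real feature.
    """
    # Count each lesson once per session, so repetition within a single
    # session does not masquerade as a cross-session pattern.
    counts = Counter()
    for notes in sessions:
        counts.update(set(notes))

    recurring = [lesson for lesson, n in counts.items() if n >= min_sessions]

    # Emit a human-readable playbook: everything stays inspectable text,
    # which is what makes the process observable and auditable.
    lines = ["# Descent playbook (auto-curated)"]
    lines += [f"- {lesson}" for lesson in sorted(recurring)]
    return "\n".join(lines)
```

A lesson noted in three overnight simulation runs would survive curation; a one-off observation would not, which is the sense in which the output reflects patterns no single session could see.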
When asked about the trust implications of agents consolidating their own knowledge, Albert acknowledged that "there is a level of trust that you need to place" but noted that all memories are inspectable and that smarter models are getting progressively better at managing the process. "They're learning to write better notes for their future self," he said.

A live demo showed AI agents improving overnight without human guidance

During the keynote, the Anthropic team demonstrated all three features live on stage using a fictional aerospace startup called "Lumara" that needed to autonomously land drones on the moon for resource mining. The team configured a multi-agent system with three specialists: a commander agent responsible for overall mission success, a detector agent that identified high-quality landing sites, and a navigator agent that handled safe drone flight and landing. It then defined a success rubric requiring soft landings, clear ground, and enough fuel reserves for a return trip to Earth.

An initial simulation across six hypothetical landing sites produced strong but imperfect results. To improve, the presenters triggered a dreaming session directly from the Claude Developer Console. Overnight, the dreaming agent reviewed all past simulation sessions and wrote a detailed descent playbook, a comprehensive set of heuristics drawn from patterns across multiple mission runs. When the team ran a new simulation the following morning with the dreaming-derived playbook in memory, the results improved meaningfully on the sites that had previously underperformed.

"All we had to do was just have Caitlin press a button," said Angela Jiang, Head of Product for the Claude Platform, referring to her colleague on stage. "All dreaming."

The demo illustrated how the three features compose together in practice. Multi-agent orchestration split the complex task across specialists with independent context windows.
Outcomes provided the rubric against which a separate grader agent evaluated each run. And dreaming extracted lessons across those runs to improve future performance, forming what Anthropic describes as a continuous improvement loop that requires no human intervention between iterations.

Why Anthropic built a separate 'grader' agent to check Claude's own work

The outcomes feature, now in public beta, gives developers a way to define what success looks like using a rubric (a structural framework, a presentation standard, a brand voice, or any other set of criteria) and then lets the agent iterate toward that standard autonomously. What makes outcomes architecturally distinctive is its separation of concerns. When an
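The worker/grader separation at the heart of outcomes can be sketched in a few lines. Everything below is a toy illustration under assumed semantics, not Anthropic's published API: the function names, the rubric, and the worker are invented, and a real grader would be a second model call rather than a dictionary of checks.

```python
from typing import Callable

# A rubric maps each criterion name to a pass/fail check over a run result.
Rubric = dict[str, Callable[[dict], bool]]

def grade(result: dict, rubric: Rubric) -> dict[str, bool]:
    """Separate grader: score one run against every rubric criterion."""
    return {name: check(result) for name, check in rubric.items()}

def iterate_to_outcome(attempt: Callable[[list[str]], dict],
                       rubric: Rubric, max_rounds: int = 5):
    """Worker/grader loop: run, grade, feed failed criteria back, repeat."""
    feedback: list[str] = []
    result: dict = {}
    scores: dict[str, bool] = {}
    for _ in range(max_rounds):
        result = attempt(feedback)       # worker agent produces a run
        scores = grade(result, rubric)   # grader evaluates it independently
        if all(scores.values()):
            break                        # rubric fully satisfied
        feedback = [name for name, ok in scores.items() if not ok]
    return result, scores

# Toy rubric echoing the demo's criteria (soft landing, fuel reserve).
rubric: Rubric = {
    "soft_landing": lambda r: r["vspeed"] <= 2.0,
    "fuel_reserve": lambda r: r["fuel"] >= 0.15,
}

def attempt(feedback: list[str]) -> dict:
    """Toy worker that fixes whatever the grader flagged last round."""
    return {
        "vspeed": 1.5 if "soft_landing" in feedback else 3.0,
        "fuel": 0.20 if "fuel_reserve" in feedback else 0.10,
    }
```

Running `iterate_to_outcome(attempt, rubric)` fails both criteria on the first round, feeds the failed names back to the worker, and satisfies the full rubric on the second, which is the sense in which the grader's independence lets the agent iterate toward a standard without a human in the loop.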

© 2025 All rights reserved to Handelsblatt