
The baton of open-source AI models has been passed between several companies in the years since ChatGPT debuted in late 2022, from Meta with its Llama family to Chinese labs like Qwen and z.ai. But lately, Chinese companies have started pivoting back toward proprietary models, even as some U.S. labs like Cursor and Nvidia release their own variants of the Chinese models, leaving a question mark over who will originate this branch of technology going forward.

One answer: Arcee, a San Francisco-based lab, which this week released Trinity-Large-Thinking, a 399-billion-parameter, text-only reasoning model published under the uncompromisingly open Apache 2.0 license, allowing full customization and commercial use by anyone from indie developers to large enterprises.

The release represents more than just a new set of weights on the AI code-sharing community Hugging Face; it is a strategic bet that "American open weights" can provide a sovereign alternative to the increasingly closed or restricted frontier models of 2025. The move arrives precisely as enterprises express growing discomfort with relying on Chinese-built architectures for critical infrastructure, creating demand for a domestic champion, a role Arcee intends to fill.

As Clément Delangue, co-founder and CEO of Hugging Face, told VentureBeat in a direct message on X: "The strength of the US has always been its startups so maybe they're the ones we should count on to lead in open-source AI. Arcee shows that it's possible!"

Genesis of a 30-person frontier lab

To understand the weight of the Trinity release, one must understand the lab that built it. Based in San Francisco, Arcee AI is a lean team of only 30 people. While competitors like OpenAI and Google operate with thousands of engineers and multibillion-dollar compute budgets, Arcee has defined itself through what CTO Lucas Atkins calls "engineering through constraint."
The company first made waves in 2024 after securing a $24 million Series A led by Emergence Capital, bringing its total capital to just under $50 million. In early 2026, the team took a massive risk: it committed $20 million, nearly half its total funding, to a single 33-day training run for Trinity Large. Utilizing a cluster of 2,048 NVIDIA B300 Blackwell GPUs, which provided twice the speed of the previous Hopper generation, Arcee bet the company's future on the belief that developers needed a frontier model they could truly own. This "bet the company" wager was a masterclass in capital efficiency, proving that a small, focused team could stand up a full training pipeline and stabilize a run of this scale without endless reserves.

Engineering through extreme architectural constraint

Trinity-Large-Thinking is noteworthy for the extreme sparsity of its expert routing. While the model houses roughly 400 billion total parameters, its Mixture-of-Experts (MoE) architecture means that only about 13 billion parameters, roughly 3% of the total, are active for any given token. This allows the model to possess the deep knowledge of a massive system while maintaining the inference speed and operational cost of a much smaller one, performing roughly two to three times faster than its peers on the same hardware.

Training such a sparse model presented significant stability challenges. To prevent a few experts from becoming "winners" while others remained untrained "dead weight," Arcee developed SMEBU, or Soft-clamped Momentum Expert Bias Updates. This mechanism ensures that experts specialize while tokens are routed evenly across a general web corpus. The architecture also takes a hybrid approach, alternating local sliding-window and global attention layers in a 3:1 ratio to maintain performance in long-context scenarios.

The data curriculum and synthetic reasoning

Arcee's partnership with fellow startup DatologyAI provided a curriculum of over 10 trillion curated tokens.
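Arcee has not published SMEBU's exact formulation, so the following is only a generic sketch of the idea behind bias-based MoE load balancing: a per-expert routing bias, smoothed by a momentum (EMA) estimate of each expert's load and soft-clamped (here with `tanh`), nudges under-used experts back into the top-k selection. All names, constants, and the clamping function are illustrative choices, not Arcee's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS, TOP_K, D = 16, 2, 64            # toy sizes; Trinity's real config is far larger
W_router = rng.normal(size=(D, N_EXPERTS)) * 0.02
bias = np.zeros(N_EXPERTS)                 # routing bias: steers selection only
load_ema = np.full(N_EXPERTS, 1.0 / N_EXPERTS)  # momentum estimate of per-expert load

def route(tokens, momentum=0.9, lr=1e-2, clamp=0.5):
    """Top-k routing with soft-clamped, momentum-smoothed expert-bias updates."""
    global bias, load_ema
    logits = tokens @ W_router                              # (batch, n_experts)
    chosen = np.argsort(-(logits + bias), axis=1)[:, :TOP_K]
    # realized fraction of assignments each expert received on this batch
    load = np.bincount(chosen.ravel(), minlength=N_EXPERTS) / chosen.size
    load_ema = momentum * load_ema + (1 - momentum) * load
    # raise the bias of under-loaded experts, lower over-loaded ones;
    # tanh soft-clamps the step so no single update can swing routing wildly
    target = 1.0 / N_EXPERTS
    bias += lr * clamp * np.tanh((target - load_ema) / target)
    return chosen

tokens = rng.normal(size=(512, D))
for _ in range(200):                       # repeated batches let the bias settle
    chosen = route(tokens)
final_load = np.bincount(chosen.ravel(), minlength=N_EXPERTS) / chosen.size
```

Because the bias affects only which experts are selected, not how their outputs are weighted, the balancing pressure does not distort gradients flowing through the chosen experts, which is the usual rationale for bias-based schemes over auxiliary load-balancing losses.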
However, the training corpus for the full-scale model was expanded to 20 trillion tokens, split evenly between curated web data and high-quality synthetic data. Unlike typical imitation-based synthetic data, where a smaller model simply learns to mimic a larger one, DatologyAI used techniques to synthetically rewrite raw web text, such as Wikipedia articles or blogs, to condense the information. This process helped the model learn to reason over concepts and information rather than merely memorize exact token strings.

To ensure regulatory compliance, tremendous effort went into excluding copyrighted books and materials with unclear licensing, an approach that attracts enterprise customers wary of the intellectual-property risks associated with mainstream LLMs. This data-first approach allowed the model to scale cleanly while significantly improving performance on complex tasks like mathematics and multi-step agentic tool use.

The pivot from yappy chatbots to reasoning agents

The defining feature of this official release is the transition from a standard "instruct" model to a "reasoning" model. By implementing a "thinking" phase prior to generating a response, similar to the internal loops found in the earlier Trinity-Mini, Arcee has addressed the primary criticism of its January "Preview" release. Early users of the Preview model had noted that it sometimes struggled with multi-step instructions in complex environments and could be "underwhelming" for agentic tasks.

The "Thinking" update bridges this gap, enabling what Arcee calls "long-horizon agents" that can maintain coherence across multi-turn tool calls without getting "sloppy." This reasoning process yields better context coherence and cleaner instruction following under constraint. It has direct implications for Maestro Reasoning, a 32B-parameter derivative of Trinity already used in audit-focused industries to provide transparent "thought-to-answer" traces.
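Reasoning models generally expose their "thinking" phase with explicit markers so that downstream tools can separate the trace from the final answer, which is what makes audit-style "thought-to-answer" traces possible. A minimal sketch, assuming the common `<think>...</think>` delimiter convention (the tag names and the `split_reasoning` helper are illustrative, not Arcee's or Trinity's actual API):

```python
import re

def split_reasoning(output: str,
                    open_tag: str = "<think>",
                    close_tag: str = "</think>") -> tuple[str, str]:
    """Separate a reasoning model's thinking trace from its final answer.

    Returns (thought, answer); thought is empty if no trace is present.
    """
    pattern = re.escape(open_tag) + r"(.*?)" + re.escape(close_tag)
    match = re.search(pattern, output, re.DOTALL)
    if not match:
        return "", output.strip()
    thought = match.group(1).strip()
    # the answer is everything outside the thinking span
    answer = (output[:match.start()] + output[match.end():]).strip()
    return thought, answer

raw = ("<think>13B active of ~400B total, so roughly 3% sparsity.</think>"
       "About 3% of parameters are active per token.")
thought, answer = split_reasoning(raw)
```

An audit pipeline can then log `thought` and `answer` to separate channels, preserving the transparent trace without leaking internal reasoning into user-facing output.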
The goal was to move beyond "yappy," inefficient chatbots toward reliable, cheap, high-quality agents that stay stable across long-running loops.

Geopolitics and the case for American open weights

The significance of Arcee's Apache 2.0 commitment is amplified by the retreat of its primary competitors from the open-weight frontier. Throughout 2025, Chinese research labs like Alibaba's Qwen and z.ai (formerly known as Zhipu AI) set the pace for high-efficiency MoE architectures. Entering 2026, however, those labs have begun to shift toward proprietary enterprise platforms and specialized subscriptions, signaling a move away from pure community growth. The fragmentation of these once-prolific teams, such as the departure of key technical leads from Alibaba's Qwen lab, has left a void at the high end of the open-weight market.

In the United States, the movement has faced its own crisis. Meta's
