Chinese AI startup Zhipu, aka z.ai, is back this week with an eye-popping new frontier large language model: GLM-5.
The latest in z.ai's ongoing and consistently impressive GLM series, it retains an open-source MIT License (ideal for enterprise deployment) and, in one of several notable achievements, posts a record-low hallucination rate on the independent Artificial Analysis Intelligence Index v4.0.
With a score of -1 on the AA-Omniscience Index, a massive 35-point improvement over its predecessor, GLM-5 now leads the entire AI industry, including U.S. competitors like Google, OpenAI, and Anthropic, in knowledge reliability by knowing when to abstain rather than fabricate information.
Beyond its reasoning prowess, GLM-5 is built for high-utility knowledge work. It features native "Agent Mode" capabilities that let it turn raw prompts or source materials directly into professional office documents, including ready-to-use .docx, .pdf, and .xlsx files.
Whether generating detailed financial reports, high school sponsorship proposals, or complex spreadsheets, GLM-5 delivers results in real-world formats that integrate directly into enterprise workflows.
It's also disruptively priced at roughly $0.80 per million input tokens and $2.56 per million output tokens, roughly 6x cheaper than proprietary competitors like Claude Opus 4.6, making state-of-the-art agentic engineering more cost-effective than ever before. Here's what else enterprise decision makers should know about the model and its training.
Technology: scaling for agentic efficiency
At the heart of GLM-5 is a massive leap in raw parameters. The model scales from the 355B parameters of GLM-4.5 to a staggering 744B parameters, with 40B active per token in its Mixture-of-Experts (MoE) architecture. This growth is supported by an increase in pre-training data to 28.5T tokens.
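As a quick back-of-the-envelope check on those figures: in a sparse MoE architecture only the routed experts run for each token, so the active fraction of the network per forward pass is small.

```python
# Rough arithmetic on the reported GLM-5 figures: in a Mixture-of-Experts
# model, only the routed experts fire per token, so the "active" share of
# the 744B total parameters is a small slice of the network.
total_params = 744e9   # total parameters reported for GLM-5
active_params = 40e9   # parameters active per token
active_share = 100 * active_params / total_params
print(round(active_share, 1))  # 5.4 (percent of the network used per token)
```

In other words, each token activates only about one-twentieth of the model, which is how a 744B-parameter network keeps per-token inference costs manageable.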
To address training inefficiencies at this scale, Zhipu developed "slime," a novel asynchronous reinforcement learning (RL) infrastructure.
Traditional RL often suffers from "long-tail" bottlenecks, where every update waits on the slowest trajectory; slime breaks this lockstep by allowing trajectories to be generated independently, enabling the fine-grained iterations necessary for complex agentic behavior.
By integrating system-level optimizations like Active Partial Rollouts (APRIL), slime addresses the generation bottlenecks that often consume over 90% of RL training time, significantly accelerating the iteration cycle for complex agentic tasks.
The framework's design centers on a tripartite modular system: a high-performance training module powered by Megatron-LM, a rollout module using SGLang and custom routers for high-throughput data generation, and a centralized Data Buffer that manages prompt initialization and rollout storage.
By enabling adaptive verifiable environments and multi-turn compilation feedback loops, slime provides the robust, high-throughput foundation required to move AI from simple chat interactions toward rigorous, long-horizon systems engineering.
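The decoupling described above can be illustrated with a minimal sketch. To be clear, this is not z.ai's actual slime code; it is a toy producer-consumer pattern that only mimics the described architecture of independent rollout workers feeding a shared data buffer while the trainer consumes whatever is ready:

```python
# Illustrative sketch only (not z.ai's "slime" implementation): rollout
# workers generate trajectories independently and push them into a shared
# buffer, so the trainer takes a step per full batch instead of waiting in
# lockstep for the slowest ("long-tail") trajectory.
import queue
import threading
import time

def rollout_worker(worker_id: int, buffer: queue.Queue, n_trajectories: int) -> None:
    """Generate trajectories at uneven speeds, mimicking long-tail episodes."""
    for i in range(n_trajectories):
        time.sleep(0.001 * ((worker_id + i) % 4))  # simulated generation time
        buffer.put((worker_id, i))                 # hand off finished trajectory

def train_loop(buffer: queue.Queue, total: int, batch_size: int) -> int:
    """Take an optimizer step whenever a full batch of trajectories is ready."""
    steps, batch = 0, []
    for _ in range(total):
        batch.append(buffer.get())  # blocks only until *any* trajectory arrives
        if len(batch) == batch_size:
            steps += 1              # one training step per completed batch
            batch.clear()
    return steps

buffer = queue.Queue()
workers = [threading.Thread(target=rollout_worker, args=(w, buffer, 8))
           for w in range(4)]
for t in workers:
    t.start()
steps = train_loop(buffer, total=32, batch_size=8)
for t in workers:
    t.join()
print(steps)  # 4 optimizer steps from 32 asynchronously generated trajectories
```

The key property is that `train_loop` never waits for a specific worker, only for a full batch, which is the lockstep-breaking behavior the slime description claims at production scale.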
To keep deployment manageable, GLM-5 integrates DeepSeek Sparse Attention (DSA), preserving a 200K-token context window while drastically reducing costs.
End-to-end knowledge work
Zhipu is framing GLM-5 as an "office" tool for the AGI era. While earlier models focused on snippets, GLM-5 is built to deliver ready-to-use documents.
It can autonomously transform prompts into formatted .docx, .pdf, and .xlsx files, ranging from financial reports to sponsorship proposals.
In practice, this means the model can decompose high-level goals into actionable subtasks and perform "Agentic Engineering," where humans define quality gates while the AI handles execution.
High performance
GLM-5's benchmarks make it the new strongest open-source model in the world, according to Artificial Analysis, surpassing Chinese rival Moonshot's Kimi K2.5, released just two weeks ago, and showing that Chinese AI companies have nearly caught up with far better-resourced proprietary Western rivals.
According to z.ai's own materials shared today, GLM-5 ranks near state-of-the-art on several key benchmarks:
SWE-bench Verified: GLM-5 achieved a score of 77.8, outperforming Gemini 3 Pro (76.2) and approaching Claude Opus 4.6 (80.9).
Vending Bench 2: In a simulation of running a business, GLM-5 ranked #1 among open-source models with a final balance of $4,432.12.
Beyond performance, GLM-5 is aggressively undercutting the market. Live on OpenRouter as of February 11, 2026, it's priced at roughly $0.80–$1.00 per million input tokens and $2.56–$3.20 per million output tokens. That puts it in the mid-range compared to other leading LLMs, but given its top-tier benchmark performance, it's what one might call a "steal."
| Model | Input (per 1M tokens) | Output (per 1M tokens) | Total cost (1M in + 1M out) |
| --- | --- | --- | --- |
| Qwen 3 Turbo | $0.05 | $0.20 | $0.25 |
| Grok 4.1 Fast (reasoning) | $0.20 | $0.50 | $0.70 |
| Grok 4.1 Fast (non-reasoning) | $0.20 | $0.50 | $0.70 |
| deepseek-chat (V3.2-Exp) | $0.28 | $0.42 | $0.70 |
| deepseek-reasoner (V3.2-Exp) | $0.28 | $0.42 | $0.70 |
| Gemini 3 Flash Preview | $0.50 | $3.00 | $3.50 |
| Kimi-k2.5 | $0.60 | $3.00 | $3.60 |
| GLM-5 | $1.00 | $3.20 | $4.20 |
| ERNIE 5.0 | $0.85 | $3.40 | $4.25 |
| Claude Haiku 4.5 | $1.00 | $5.00 | $6.00 |
| Qwen3-Max (2026-01-23) | $1.20 | $6.00 | $7.20 |
| Gemini 3 Pro (≤200K) | $2.00 | $12.00 | $14.00 |
| GPT-5.2 | $1.75 | $14.00 | $15.75 |
| Claude Sonnet 4.5 | $3.00 | $15.00 | $18.00 |
| Gemini 3 Pro (>200K) | $4.00 | $18.00 | $22.00 |
| Claude Opus 4.6 | $5.00 | $25.00 | $30.00 |
| GPT-5.2 Pro | $21.00 | $168.00 | $189.00 |
That's roughly 6x cheaper on input and nearly 10x cheaper on output than Claude Opus 4.6 ($5/$25). This launch also confirms rumors that Zhipu AI was behind "Pony Alpha," a stealth model that previously crushed coding benchmarks on OpenRouter.
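Those multiples follow directly from the advertised rates; a quick sanity check using GLM-5's lower listed tier ($0.80 in / $2.56 out) against Opus 4.6's list prices:

```python
# Sanity check on the quoted cost gap: GLM-5's lower advertised tier
# ($0.80 in / $2.56 out per 1M tokens) vs. Claude Opus 4.6 list prices.
glm5_in, glm5_out = 0.80, 2.56
opus_in, opus_out = 5.00, 25.00

input_ratio = opus_in / glm5_in     # 6.25  -> "roughly 6x cheaper" on input
output_ratio = opus_out / glm5_out  # ~9.77 -> "nearly 10x cheaper" on output
print(round(input_ratio, 2), round(output_ratio, 2))
```

Note that at GLM-5's upper OpenRouter tier ($1.00/$3.20) the gaps narrow to 5x and about 7.8x, so the "6x/10x" framing reflects the cheapest listed pricing.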
Still, despite the high benchmarks and low price, not all early users are enthusiastic about the model, noting that its raw performance doesn't tell the whole story.
Lukas Petersson, co-founder of the safety-focused autonomous AI protocol startup Andon Labs, remarked on X: "After hours of reading GLM-5 traces: an incredibly effective model, but far less situationally aware. Achieves goals via aggressive tactics but doesn't reason about its situation or leverage experience. This is scary. This is how you get a paperclip maximizer."
The "paperclip maximizer" is a hypothetical scenario described by Oxford philosopher Nick Bostrom back in 2003, in which an AI or other autonomous creation follows a seemingly benign instruction (like maximizing the number of paperclips produced) to such an extreme that it redirects all the resources needed for human or other life, unintentionally causing catastrophe or even extinction in its single-minded pursuit of that apparently harmless objective.
Should your enterprise adopt GLM-5?
Enterprises seeking to escape vendor lock-in will find GLM-5's MIT License and open-weights availability a significant strategic advantage. Unlike closed-source competitors that keep intelligence behind proprietary walls, GLM-5 lets organizations host their own frontier-level intelligence.
Adoption isn't without friction, though. The sheer scale of GLM-5 (744B parameters) requires a hardware floor that may be out of reach for smaller companies without significant cloud or on-premise GPU clusters.
Security leaders must also weigh the geopolitical implications of a flagship model from a China-based lab, especially in regulated industries where data residency and provenance are strictly audited.
Additionally, the shift toward more autonomous AI agents introduces new governance risks. As models move from "chat" to "work," they begin to operate across apps and files autonomously. Without robust agent-specific permissions and human-in-the-loop quality gates established by enterprise data leaders, the risk of autonomous error rises sharply.
Ultimately, GLM-5 is a "buy" for organizations that have outgrown simple copilots and are ready to build a truly autonomous office.
It's for engineers who need to refactor a legacy backend or require a "self-healing" pipeline that doesn't sleep.
While Western labs continue to optimize for "thinking" and reasoning depth, Zhipu is optimizing for execution and scale.
Enterprises that adopt GLM-5 today are not just buying a cheaper model; they're betting on a future where the most valuable AI is the one that can finish the project without being asked twice.

