TikTok is making headlines again today after the White House joined the popular social media app, but its parent company ByteDance, a Chinese web giant, also had a surprise announcement up its sleeve.
The company's Seed Team of AI researchers today released Seed-OSS-36B on the AI code-sharing site Hugging Face.
Seed-OSS-36B is a new line of open-source large language models (LLMs) designed for advanced reasoning and developer-focused usability, with a longer token context (that is, how much information the models can accept as input and then output in a single exchange) than many competing LLMs from U.S. tech companies, including leaders such as OpenAI and Anthropic.
The collection introduces three main variants:
- Seed-OSS-36B-Base with synthetic data
- Seed-OSS-36B-Base without synthetic data
- Seed-OSS-36B-Instruct
In releasing both synthetic and non-synthetic versions of the Seed-OSS-36B-Base model, the Seed Team sought to balance practical performance with research flexibility.
The synthetic-data variant, trained with additional instruction data, consistently delivers stronger scores on standard benchmarks and is intended as the higher-performing general-purpose option.
The non-synthetic model, by contrast, omits these augmentations, creating a cleaner foundation that avoids potential bias or distortion introduced by synthetic instruction data.
By offering both, the team gives applied users access to improved results while ensuring researchers retain a neutral baseline for studying post-training methods.
Meanwhile, the Seed-OSS-36B-Instruct model differs in that it is post-trained with instruction data to prioritize task execution and instruction following, rather than serving purely as a foundation model.
All three models are released under the Apache-2.0 license, permitting free use, modification, and redistribution by researchers and developers working for enterprises.
That means they can be used to power commercial applications, whether internal to a company or external and customer-facing, without paying ByteDance any licensing fees or application programming interface (API) usage charges.
This continues the summer 2025 trend of Chinese companies shipping powerful open-source models, with OpenAI attempting to catch up via its own open-source gpt-oss duo released earlier this month.
The Seed Team positions Seed-OSS for international applications, emphasizing versatility across reasoning, agent-like task execution, and multilingual settings.
The Seed Team, formed in 2023, has concentrated on building foundation models that can serve both research and applied use cases.
Design and core features
The architecture behind Seed-OSS-36B combines familiar design choices such as causal language modeling, grouped query attention (GQA), SwiGLU activation, RMSNorm, and RoPE positional encoding.
Each model carries 36 billion parameters across 64 layers and supports a vocabulary of 155,000 tokens.
One of the defining features is its native long-context capability, with a maximum length of 512,000 tokens, designed to process lengthy documents and reasoning chains without performance loss.
That is twice the length of OpenAI's new GPT-5 model family and roughly equivalent to about 1,600 pages of text, the length of a Christian Bible.
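To make those headline specs concrete, here is an illustrative summary as a Python dict; the field names follow common Hugging Face configuration conventions and are a sketch of the description above, not ByteDance's official config file:

```python
# Illustrative summary of the published specs in config style.
# This is a sketch of the described architecture, not the official config.
seed_oss_36b_specs = {
    "model_type": "causal decoder-only transformer",
    "num_parameters": 36_000_000_000,    # 36B parameters
    "num_hidden_layers": 64,             # 64 transformer layers
    "vocab_size": 155_000,               # 155K-token vocabulary
    "max_position_embeddings": 512_000,  # native 512K-token context window
    "attention": "grouped query attention (GQA)",
    "activation": "SwiGLU",
    "normalization": "RMSNorm",
    "positional_encoding": "RoPE",
}
```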
Another distinguishing element is the introduction of a thinking budget, which lets developers specify how much reasoning the model should perform before delivering an answer.
It is something we have seen from other recent open-source models as well, including Nvidia's new Nemotron-Nano-9B-v2, also available on Hugging Face.
In practice, this means teams can tune performance depending on the complexity of the task and the efficiency requirements of deployment.
Budgets are recommended in multiples of 512 tokens, with 0 providing a direct-response mode, as the sketch below illustrates.
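Here is a minimal sketch of setting that budget at generation time. It assumes the chat template exposes a `thinking_budget` keyword, which is our reading of the release notes; the official model card should be checked for the exact interface.

```python
# Sketch: steering the thinking budget via the chat template.
# The `thinking_budget` keyword is an assumption based on the release notes;
# check the model card on Hugging Face for the official interface.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ByteDance-Seed/Seed-OSS-36B-Instruct"  # repo name on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many primes are less than 50?"}]

# Budgets are recommended in multiples of 512 tokens; 0 asks for a direct
# answer with no intermediate reasoning.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    thinking_budget=512,  # assumed keyword; 0 = direct-response mode
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```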
Competitive performance on third-party benchmarks
Benchmarks published with the release place Seed-OSS-36B among the stronger large open-source models. The Instruct variant, in particular, posts state-of-the-art results in several areas.
- Math and reasoning: Seed-OSS-36B-Instruct achieves 91.7% on AIME24 and 65 on BeyondAIME, both representing open-source "state of the art" (SOTA).
- Coding: On LiveCodeBench v6, the Instruct model records 67.4, another SOTA score.
- Long-context handling: On RULER at 128K context length, it reaches 94.6, marking the highest open-source result reported.
- Base model performance: The synthetic-data Base variant delivers 65.1 on MMLU-Pro and 81.7 on MATH, both state-of-the-art results in their categories.
The non-synthetic Base version, while slightly behind on many measures, proves competitive in its own right.
It outperforms its synthetic counterpart on GPQA-D, providing researchers with a cleaner, instruction-free baseline for experimentation.
For enterprises evaluating open options, these results suggest Seed-OSS offers strong potential across math-heavy, coding, and long-context workloads while still providing flexibility for research use cases.
Access and deployment
Beyond performance, the Seed Team highlights accessibility for developers and practitioners. The models can be deployed using Hugging Face Transformers, with quantization support in both 4-bit and 8-bit formats to reduce memory requirements.
They also integrate with vLLM for scalable serving, including configuration examples and API server instructions.
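As a rough illustration of the quantized path, the sketch below loads the Instruct variant in 4-bit using Transformers with bitsandbytes. The repository id reflects the Hugging Face release; the quantization settings are common bitsandbytes defaults rather than an official recommendation from the team:

```python
# Sketch: loading the Instruct variant in 4-bit to cut memory requirements.
# Quantization settings here are common bitsandbytes defaults, not an
# official recommendation from the Seed Team.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ByteDance-Seed/Seed-OSS-36B-Instruct"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                  # 8-bit is also supported via load_in_8bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available GPUs
)
```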
To lower barriers further, the team includes scripts for inference, prompt customization, and tool integration.
For technical leaders managing small teams or working under budget constraints, these provisions are positioned to make experimentation with 36-billion-parameter models more approachable.
Licensing and considerations for enterprise decision-makers
With the models offered under Apache-2.0, organizations can adopt them without restrictive licensing terms, an important factor for teams balancing legal and operational concerns.
For decision-makers evaluating the open-source landscape, the release brings three takeaways:
- State-of-the-art benchmarks across math, coding, and long-context reasoning.
- A balance between higher-performing synthetic-trained models and clean research baselines.
- Accessibility features that lower operational overhead for lean engineering teams.
By placing strong performance and flexible deployment under an open license, ByteDance's Seed Team has added new options for enterprises, researchers, and developers alike.