Infrastructures
The Seed Infrastructures team oversees distributed training, reinforcement learning frameworks, high-performance inference, and heterogeneous-hardware compilation technologies for AI foundation models.
Research topics
Ultra-large-scale training clusters
Study methods to improve the stability and model FLOPs utilization (MFU) of large-scale training clusters, including cross-cluster, low-precision, fault-tolerant, and elastic training techniques (a rough MFU estimate is sketched below).
Large-scale
Stability
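
MFU here is a throughput ratio rather than a framework feature. As a rough, illustrative sketch (not the team's accounting method), a dense-transformer MFU estimate can be computed with the common ~6 FLOPs-per-parameter-per-token approximation; the parameter count, token rate, GPU count, and peak-FLOPs figure below are all assumed numbers.

def estimate_mfu(num_params: float, tokens_per_second: float,
                 num_gpus: int, peak_flops_per_gpu: float) -> float:
    """Rough MFU estimate for dense-transformer training.

    Uses the common ~6 * N FLOPs-per-token approximation for a combined
    forward and backward pass over a model with N parameters; attention
    FLOPs are ignored, so this slightly understates the numerator.
    """
    achieved_flops = 6.0 * num_params * tokens_per_second
    peak_flops = num_gpus * peak_flops_per_gpu
    return achieved_flops / peak_flops

# Example: a 70B-parameter model training at 4M tokens/s on 8,192 GPUs,
# assuming 1e15 FLOP/s peak per device (a purely illustrative figure).
print(estimate_mfu(70e9, 4.0e6, 8192, 1e15))  # ~0.205
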
Reinforcement learning systems
Research on end-to-end reinforcement learning systems for large models, designing next-generation RL systems that handle dynamic loads, complex agent/environment interactions, heterogeneous resources, and multimodal scenarios (a toy actor/learner sketch follows below).
Reinforcement learning
Agent
Optimization
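
As a purely illustrative sketch (not this team's RL framework), many large-model RL systems decouple rollout generation from policy updates with a producer/consumer queue so that uneven, dynamic rollout loads do not stall the learner; every name and the fake environment below are assumptions.

import queue
import random
import threading
import time

# Hypothetical minimal actor/learner split: rollout workers push trajectories
# into a bounded queue; a learner consumes batches asynchronously.
rollouts = queue.Queue(maxsize=64)

def rollout_worker(worker_id: int, num_episodes: int) -> None:
    for _ in range(num_episodes):
        # Stand-in for prompt -> model generation -> environment feedback.
        trajectory = {"worker": worker_id, "reward": random.random()}
        time.sleep(random.uniform(0.0, 0.01))  # uneven generation latency
        rollouts.put(trajectory)

def learner(total: int, batch_size: int = 8) -> None:
    seen = 0
    while seen < total:
        batch = [rollouts.get() for _ in range(min(batch_size, total - seen))]
        seen += len(batch)
        # Stand-in for a policy update on the batch.
        mean_reward = sum(t["reward"] for t in batch) / len(batch)
        print(f"update on {len(batch)} trajectories, mean reward {mean_reward:.3f}")

workers = [threading.Thread(target=rollout_worker, args=(i, 16)) for i in range(4)]
for w in workers:
    w.start()
learner(total=64)
for w in workers:
    w.join()
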
Inference parallelization solutions
Research on overcoming compute and memory-access bottlenecks during inference, including multi-node inference and parallel inference strategies on heterogeneous hardware (a toy tensor-parallel sketch follows below).
Inference
Parallel
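
As a toy illustration of one parallel inference strategy, the sketch below performs column-wise tensor parallelism for a single linear layer, with weight shards standing in for devices; real multi-node inference adds collectives, KV-cache management, and scheduling that this omits. Shapes and shard counts are arbitrary choices.

import numpy as np

# Toy column-wise tensor parallelism for one linear layer, with "devices"
# simulated as weight shards.
rng = np.random.default_rng(0)
hidden, out_dim, num_shards = 512, 1024, 4

x = rng.standard_normal((1, hidden))
W = rng.standard_normal((hidden, out_dim))

# Each shard holds a slice of the output dimension and computes its part.
shards = np.split(W, num_shards, axis=1)
partial_outputs = [x @ shard for shard in shards]     # would run on separate GPUs
y_parallel = np.concatenate(partial_outputs, axis=1)  # the all-gather step

# The sharded result matches the single-device matmul.
y_reference = x @ W
print(np.allclose(y_parallel, y_reference))  # True
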
Next-generation model and hardware co-optimization
Research on advanced model architectures and training and inference paradigms through co-design of next-generation hardware systems and next-generation generative and understanding model architectures.
Systems-algorithm co-design
Model architecture
Compiler optimization for heterogeneous hardware
Research on high-performance operator compilation and joint optimization of computation and communication for emerging hardware architectures (a toy compute/communication overlap sketch follows below).
Heterogeneous systems
Compiler
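
Joint optimization of computation and communication frequently reduces to overlapping the two. The sketch below mimics that idea with a thread standing in for an asynchronous collective launched by a compiler or runtime; the sleep timings are placeholders, and this is an analogy rather than real device behavior.

import threading
import time

def fake_all_reduce(result: dict) -> None:
    time.sleep(0.2)            # pretend network transfer
    result["done"] = True

def independent_compute() -> int:
    time.sleep(0.2)            # pretend a kernel that needs no remote data
    return 42

# Sequential: communication, then compute -> roughly 0.4s total.
start = time.time()
fake_all_reduce({})
independent_compute()
print(f"sequential: {time.time() - start:.2f}s")

# Overlapped: launch communication, compute meanwhile, then join -> roughly 0.2s.
start = time.time()
result = {}
comm = threading.Thread(target=fake_all_reduce, args=(result,))
comm.start()
independent_compute()
comm.join()
print(f"overlapped: {time.time() - start:.2f}s")
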

Selected Papers

Aug 04, 2025
Seed Diffusion: A Large-Scale Diffusion Language Model with High-Speed Inference
We present Seed Diffusion Preview, a large-scale language model based on discrete-state diffusion, offering remarkably fast inference speed. Thanks to non-sequential, parallel generation, discrete diffusion models provide a notable speedup to mitigate the inherent latency of token-by-token decoding, as demonstrated recently (e.g., Mercury Coder, Gemini Diffusion). Seed Diffusion Preview achieves an inference speed of 2,146 tokens/s over H20 GPUs while maintaining competitive performance across a sweep of standard code evaluation benchmarks, significantly faster than contemporary Mercury and Gemini Diffusion, establishing a new state of the art on the speed-quality Pareto frontier for code models.
Yuxuan Song, Zheng Zhang, Cheng Luo, Pengyang Gao, Fan Xia, Hao Luo, Zheng Li, Yuehang Yang, Hongli Yu, Xingwei Qu, Yuwei Fu, Jing Su, Ge Zhang, Wenhao Huang, Mingxuan Wang, Lin Yan, Xiaoying Jia, Jingjing Liu, Wei-Ying Ma, Ya-Qin Zhang, Yonghui Wu, Hao Zhou
Computation and Language
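
The speed argument in the abstract above rests on updating many token positions per model call instead of one. The toy arithmetic below only contrasts the call counts of autoregressive and diffusion-style decoding; the sequence length and step count are assumed numbers, and this is not the Seed Diffusion algorithm itself.

# Toy step-count comparison: an autoregressive decoder needs one model call
# per token, while a discrete-diffusion-style decoder refines all positions
# over a fixed number of denoising passes. The figures are illustrative only.
seq_len = 512
diffusion_steps = 16                 # assumed number of parallel refinement passes

autoregressive_calls = seq_len       # one forward pass per generated token
diffusion_calls = diffusion_steps    # each pass updates every position at once

print(f"autoregressive model calls: {autoregressive_calls}")
print(f"diffusion-style model calls: {diffusion_calls}")
print(f"call-count reduction: {autoregressive_calls / diffusion_calls:.0f}x")
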
May 17, 2025
Model Merging in Pre-training of Large Language Models
Model merging has emerged as a promising technique for enhancing large language models, though its application in large-scale pre-training remains relatively unexplored. In this paper, we present a comprehensive investigation of model merging techniques during the pre-training process. Through extensive experiments with both dense and Mixture-of-Experts (MoE) architectures ranging from millions to over 100 billion parameters, we demonstrate that merging checkpoints trained with constant learning rates not only achieves significant performance improvements but also enables accurate prediction of annealing behavior. These improvements lead to both more efficient model development and significantly lower training costs. Our detailed ablation studies on merging strategies and hyperparameters provide new insights into the underlying mechanisms while uncovering novel applications. Through comprehensive experimental analysis, we offer the open-source community practical pre-training guidelines for effective model merging.
Yunshui Li, Yiyuan Ma, Shen Yan, Chaoyi Zhang, Jing Liu, Jianqiao Lu, Ziwen Xu, Mengzhao Chen, Minrui Wang, Shiyi Zhan, Jin Ma, Xunhao Lai, Deyi Liu, Yao Luo, Xingyan Bin, Hongbin Ren, Mingji Han, Wenhao Hao, Bairen Yi, LingJun Liu, Bole Ma, Xiaoying Jia, Xun Zhou, Siyuan Qiao, Liang Xiang, Yonghui Wu
LLM
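
The core operation studied above, merging checkpoints from one pre-training run, reduces in its simplest form to a weighted average of parameters. The sketch below shows that form on plain dictionaries; it does not reproduce the paper's merging schedules or weightings.

# Minimal sketch of checkpoint merging as a weighted parameter average.
# Checkpoints are modeled as name -> list-of-floats dicts; real systems
# operate on framework tensors.
def merge_checkpoints(checkpoints, weights=None):
    if weights is None:
        weights = [1.0 / len(checkpoints)] * len(checkpoints)
    assert abs(sum(weights) - 1.0) < 1e-8
    merged = {}
    for name in checkpoints[0]:
        merged[name] = [
            sum(w * ckpt[name][i] for w, ckpt in zip(weights, checkpoints))
            for i in range(len(checkpoints[0][name]))
        ]
    return merged

ckpt_a = {"layer.weight": [0.25, 0.5], "layer.bias": [0.0, 1.0]}
ckpt_b = {"layer.weight": [0.75, 0.0], "layer.bias": [0.5, 0.5]}
print(merge_checkpoints([ckpt_a, ckpt_b]))
# -> {'layer.weight': [0.5, 0.25], 'layer.bias': [0.25, 0.75]}
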
Apr 02, 2025
Exploring Data Scaling Trends and Effects in Reinforcement Learning from Human Feedback
Reinforcement Learning from Human Feedback (RLHF) is crucial for aligning large language models with human preferences. While recent research has focused on algorithmic improvements, the importance of prompt-data construction has been overlooked. This paper addresses this gap by exploring data-driven bottlenecks in RLHF performance scaling, particularly reward hacking and decreasing response diversity. We introduce a hybrid reward system combining reasoning task verifiers (RTV) and a generative reward model (GenRM) to mitigate reward hacking. We also propose a novel prompt-selection method, Pre-PPO, to maintain response diversity and enhance learning effectiveness. Additionally, we find that prioritizing mathematical and coding tasks early in RLHF training significantly improves performance. Experiments across two model sizes validate our methods' effectiveness and scalability. Results show that RTV is most resistant to reward hacking, followed by GenRM with ground truth, and then GenRM with SFT Best-of-N responses. Our strategies enable rapid capture of subtle task-specific distinctions, leading to substantial improvements in overall RLHF performance. This work highlights the importance of careful data construction and provides practical methods to overcome performance barriers in RLHF.
Wei Shen, Guanlin Liu, Zheng Wu, Ruofei Zhu, Qingping Yang, Chao Xin, Yu Yue, Lin Yan
Machine Learning
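
The hybrid reward described above routes verifiable tasks to a rule-based check and the rest to a learned reward model. The dispatcher below only gestures at that split; the function names and scoring rules are assumptions, not the paper's RTV or GenRM implementations.

def verifier_reward(response: str, reference: str) -> float:
    # Placeholder "reasoning task verifier": a simple exact-match check.
    return 1.0 if response.strip() == reference.strip() else 0.0

def generative_reward_model(prompt: str, response: str) -> float:
    # Placeholder "generative reward model": a real system would score
    # the response with a trained model rather than its length.
    return min(1.0, len(response) / 100.0)

def hybrid_reward(sample: dict) -> float:
    # Verifiable tasks (those with a reference answer) get the verifier;
    # open-ended tasks fall back to the reward model.
    if sample.get("reference") is not None:
        return verifier_reward(sample["response"], sample["reference"])
    return generative_reward_model(sample["prompt"], sample["response"])

print(hybrid_reward({"prompt": "2+2?", "response": "4", "reference": "4"}))
print(hybrid_reward({"prompt": "Write a haiku.",
                     "response": "Autumn moonlight...",
                     "reference": None}))
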

Featured Jobs

Research Scientist in ML Systems
Seattle / San Jose
Experienced Hiring
Software Engineer, ML System Architecture
Seattle / San Jose
Experienced Hiring
Research Scientist, Applied Machine Learning
Seattle / San Jose
Campus Recruitment
Software Engineer in Machine Learning Systems
Seattle / San Jose
Campus Recruitment
Software Engineer Intern (Seed - Machine Learning System)
Seattle / San Jose
Internship
Research Scientist Intern (Seed - Machine Learning System)
Seattle / San Jose
Internship