
Top Seed

Talent Program

The Top Seed Talent Program 2026 is now open to PhD candidates graduating between September 2025 and August 2026. Join us and push the boundaries of AI.
Research Focus
Infrastructures
The Seed Infrastructures team oversees distributed training, reinforcement learning frameworks, high-performance inference, and heterogeneous hardware compilation technologies for AI foundation models.
Vision
The Seed Vision team focuses on foundational models for visual generation, developing multimodal generative models, and carrying out leading research and application development to solve fundamental computer vision challenges in GenAI.
Speech
The Seed Speech team works at the forefront of research and product development in speech, audio, music, natural language understanding, and multimodal deep learning.
LLM
The Seed Large Language Model (LLM) team is dedicated to aggressively advancing the next generation of LLMs, tackling fundamental challenges in LLM development head-on. Our areas of focus include model pretraining, post-training, inference, memory capabilities, learning, interpretability, and other related directions.
Multimodal Interaction & World Model
The Seed Multimodal Interaction and World Model team is dedicated to developing models that boast human-level multimodal understanding and interaction capabilities. The team also aspires to advance the exploration and development of multimodal assistant products.
Research
Computer Vision
Seedream 3.0 Technical Report
Seed Vision Team

2025-04-15

LLM

Seed-Thinking-v1.5: Advancing Superb Reasoning Models with Reinforcement Learning

Jiaze Chen, Tiantian Fan, Xin Liu, Lingjun Liu, Zhiqi Lin, Mingxuan Wang, Chengyi Wang, Xiangpeng Wei, Wenyuan Xu, Yufeng Yuan, Yu Yue, Lin Yan, Qiying Yu, Xiaochen Zuo, Chi Zhang

2025-04-10

LLM

Recitation over Reasoning: How Cutting-Edge Language Models Can Fail on Elementary School-Level Reasoning Problems?

Kai Yan, Yufei Xu, Zhengyin Du, Xuesong Yao, Zheyu Wang, Xiaowen Guo, Jiecao Chen

2025-04-01

Reinforcement Learning

DAPO: An Open-Source LLM Reinforcement Learning System at Scale

Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, Haibin Lin, Zhiqi Lin, Bole Ma, Guangming Sheng, Yuxuan Tong, Chi Zhang, Mofan Zhang, Wang Zhang, Hang Zhu, Jinhua Zhu, Jiaze Chen, Jiangjie Chen, Chengyi Wang, Hongli Yu, Weinan Dai, Yuxuan Song, Xiangpeng Wei, Hao Zhou, Jingjing Liu, Wei-Ying Ma, Ya-Qin Zhang, Lin Yan, Mu Qiao, Yonghui Wu, Mingxuan Wang

2025-03-18

Core Machine Learning

FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference

Xunhao Lai, Jianqiao Lu, Yao Luo, Yiyuan Ma, Xun Zhou

2025-02-28

Computer Vision

Shot2Story: A New Benchmark for Comprehensive Understanding of Multi-shot Videos

Mingfei Han, Linjie Yang, Xiaojun Chang, Lina Yao, Heng Wang

2025-02-05

Computer Vision

MaskBit: Embedding-free Image Generation via Bit Tokens

Mark Weber, Lijun Yu, Qihang Yu, Xueqing Deng, Xiaohui Shen, Daniel Cremers, Liang-Chieh Chen

2024-12-08

Doubao Models
Doubao Pro
128k context length with fine-tuning support. Stronger and more comprehensive capabilities
Doubao Lite
Lite version with lower token cost and improved latency
Doubao Character
Individualized character creation, greater context awareness, and stronger plot-driving ability
Applications
Lark · Volc · Doubao · Butterfly · Hualu · Dreamina