fireworks/models/deepseek-v3p1
Display Name: DeepSeek V3.1
DeepSeek-V3.1 is post-trained on top of DeepSeek-V3.1-Base, which is built upon the original V3 base checkpoint through a two-phase long-context extension approach, following the methodology outlined in the original DeepSeek-V3 report. The dataset was expanded by collecting additional long documents, and both training phases were substantially extended: the 32K extension phase was increased 10-fold to 630B tokens, and the 128K extension phase was extended 3.3x to 209B tokens. Additionally, DeepSeek-V3.1 is trained using the UE8M0 FP8 scale data format to ensure compatibility with microscaling data formats.
Specifications
Context: 160,000 tokens
Input: text
Output: text, json
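Since the model accepts text input and can emit either plain text or JSON, a request against an OpenAI-compatible chat-completions endpoint might look like the sketch below. The full model id format (`accounts/fireworks/models/...`) and the `response_format` field are assumptions based on common OpenAI-compatible conventions, not details taken from this page.

```python
import json

# Assumed full model id; this page only lists "fireworks/models/deepseek-v3p1".
MODEL_ID = "accounts/fireworks/models/deepseek-v3p1"

def build_request(prompt: str, json_output: bool = False) -> dict:
    """Build a chat-completion payload.

    json_output reflects the page's "Output: text, json" capability via the
    (assumed) OpenAI-style response_format field.
    """
    payload = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
    }
    if json_output:
        payload["response_format"] = {"type": "json_object"}
    return payload

req = build_request("List three prime numbers as a JSON array.", json_output=True)
print(json.dumps(req, indent=2))
```

This only constructs the payload; sending it would require an HTTP client and an API key for whatever endpoint actually serves the model.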
Pricing
Input: $0.56 × 1.1 per 1M tokens
Output: $1.68 × 1.1 per 1M tokens
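The per-million-token rates above translate into a per-request cost as sketched below. The ×1.1 factor is reproduced exactly as listed on the page; treating it as a simple multiplier on the base rate is an assumption.

```python
# Rates as listed: base price times the 1.1 factor, per 1M tokens.
INPUT_RATE = 0.56 * 1.1   # $ per 1M input tokens
OUTPUT_RATE = 1.68 * 1.1  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed rates."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Example: 10k input tokens + 2k output tokens.
print(f"${request_cost(10_000, 2_000):.6f}")
```

At these rates, 10,000 input tokens plus 2,000 output tokens costs roughly a penny.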