fireworks/models/deepseek-r1-0528

Common Name: Deepseek R1 05/28

Fireworks
Released on Oct 16
Supported: Tool Invocation

The 05/28 updated checkpoint of Deepseek R1. Its overall performance now approaches that of leading models such as o3 and Gemini 2.5 Pro. Compared to the previous version, the upgraded checkpoint shows significant improvements on complex reasoning tasks, a reduced hallucination rate, enhanced support for function calling, and a better vibe-coding experience.
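Since tool invocation is listed as supported, a request to this checkpoint can carry function definitions. A minimal sketch of building such a request body, assuming the common OpenAI-compatible chat-completions schema (the tool name and all field names beyond the model id shown on this page are illustrative assumptions, not taken from this page):

```python
# Build a chat-completions payload for the DeepSeek R1 05/28 checkpoint.
# Only the model id comes from this page; the rest of the schema is the
# widely used OpenAI-compatible format and is an assumption here.

def build_request(prompt: str) -> dict:
    return {
        "model": "fireworks/models/deepseek-r1-0528",  # model id from this page
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
        # One hypothetical tool definition, illustrating function calling:
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical example tool
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }

payload = build_request("What's the weather in Paris?")
```

The payload would then be POSTed to the provider's chat-completions endpoint with an API key; the model may respond with a `tool_calls` entry naming the function and its arguments.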

Specifications

Context: 160K
Input: text
Output: text

Performance (7-day Average)


Pricing

Input: $1.49 / M tokens
Output: $5.94 / M tokens
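At these rates, per-request cost is a simple linear function of token counts. A small sketch using the prices listed above (the helper name is ours, and the example token counts are arbitrary):

```python
# Dollar rates from the pricing table on this page.
INPUT_RATE = 1.49 / 1_000_000   # $1.49 per million input tokens
OUTPUT_RATE = 5.94 / 1_000_000  # $5.94 per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed per-token rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 10K-token prompt with a 2K-token completion:
cost = request_cost(10_000, 2_000)  # 0.0149 + 0.01188 ≈ $0.0268
```

Reasoning models tend to produce long outputs, so at a 4x output premium the completion side usually dominates the bill.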

Availability Trend (24h)

Performance Metrics (24h)

Similar Models

Pricing: $0.99 (in) / $0.99 (out) per M tokens · Context: 125K

Qwen2.5-VL is a multimodal large language model series developed by the Qwen team at Alibaba Cloud, available in 3B, 7B, 32B, and 72B sizes.

Pricing: $0.99 (in) / $0.99 (out) per M tokens · Context: 160K

A strong Mixture-of-Experts (MoE) language model from Deepseek with 671B total parameters, 37B of which are activated for each token. Updated checkpoint.

Pricing: $0.99 (in) / $0.99 (out) per M tokens · Context: 128K

Llama 3.3 70B Instruct is the December update of Llama 3.1 70B. It improves upon Llama 3.1 70B (released July 2024) with advances in tool calling, multilingual text support, math, and coding. The model achieves industry-leading results in reasoning, math, and instruction following, and provides performance similar to Llama 3.1 405B with significant speed and cost improvements.

Pricing: $0.99 (in) / $0.99 (out) per M tokens · Context: 128K

The Meta Llama 3.1 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes. The Llama 3.1 instruction-tuned, text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many available open-source and closed chat models on common industry benchmarks.