TA/mistralai/Mixtral-8x22B-Instruct-v0.1
Common Name: Mixtral 8x22B Instruct v0.1
TogetherAI
Released on Feb 17, 12:00 AM. Mistral AI's larger, instruction-tuned Mixtral 8x22B MoE model, hosted on TogetherAI.
Specifications
Context: 128K
Input: text
Output: text
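For reference, a minimal sketch of querying this model through TogetherAI's OpenAI-compatible chat completions endpoint. The endpoint URL, response shape, and the TOGETHER_API_KEY environment variable are assumptions; confirm against the provider's documentation before use.

```python
import os
import requests

# Assumed OpenAI-compatible chat completions endpoint on TogetherAI.
API_URL = "https://api.together.xyz/v1/chat/completions"

payload = {
    "model": "mistralai/Mixtral-8x22B-Instruct-v0.1",
    "messages": [
        {"role": "user", "content": "Summarize mixture-of-experts in one sentence."}
    ],
    "max_tokens": 128,
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()

# Assumes the standard OpenAI-style response layout.
print(resp.json()["choices"][0]["message"]["content"])
```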
Performance (7-day Average)
Collecting…
Pricing
Input: $1.32 / M tokens
Output: $1.32 / M tokens
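As a quick sanity check on the listed rates, a minimal cost calculation; the token counts below are illustrative placeholders, not measured values.

```python
# Listed rates: $1.32 per million tokens for both input and output.
INPUT_PRICE_PER_M = 1.32   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 1.32  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of a single request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.4f}")  # ≈ $0.0033
```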
Availability Trend (24h)
Performance Metrics (24h)
Similar Models
DeepSeek V3.1 hybrid model combining V3 and R1 capabilities with 128K context, hosted on TogetherAI.
Input $0.66 / Output $1.87 per M tokens · ctx 128K · max — · avail — · tps — · In/Out/Cap

DeepSeek V3 MoE model with 671B total parameters and 37B active, hosted on TogetherAI.
Input $1.38 / Output $1.38 per M tokens · ctx 64K · max 8K · avail — · tps — · In/Out

DeepSeek R1 reasoning model distilled to Llama 70B architecture, hosted on TogetherAI.
Input $2.20 / Output $2.20 per M tokens · ctx 128K · max — · avail — · tps — · In/Out

Meta's Llama 3.1 70B optimized for fast inference on TogetherAI.
Input $0.97 / Output $0.97 per M tokens · ctx 128K · max — · avail — · tps — · In/Out