Workflow Eval Detail

Caption Translation

Converts captions into multiple languages, helping you reach global audiences without manual translation work.

Latest run: completed
muxinc/ai
main @ b7cce22 · @mux/ai v0.13.1
Cases: 18
Avg Score: 0.96
Avg Latency: 10.61s
Avg Cost: $0.0058
Avg Cost / Min: $0.0102/min
Avg Tokens: 1,995
TL;DR

High-quality, low-cost caption translation across providers with good latency, but model comparisons are based on small per-model samples and should be treated as directional.

Best Quality: anthropic · claude-sonnet-4-5
Fastest: google · gemini-3.1-flash-lite-preview
Most Economical: google · gemini-3.1-flash-lite-preview

What we measure

Each eval run captures efficacy, efficiency, and expense. We use this data to compare providers and track regressions over time.

Efficacy: quality + correctness
Efficiency: latency + token usage
Expense: cost per request

Workflow snapshot

Suite status: success
Suite average score: 0.96
Suite duration: 3 minutes 56 seconds
Last suite run: Apr 3, 07:56 PM

Evaluation criteria

From eval tests

We validate VTT structure, translation faithfulness, and language code integrity, plus performance and budget targets.

Example translation: "Together we can reach more" — EN / ENG, confidence 100% (contextual localization)
Efficacy checks
  • Translated VTT starts with WEBVTT and keeps timestamps.
  • Cue count matches the original and translation differs.
  • Faithfulness scoring against the original transcript.
  • Language codes match ISO 639-1/3 and are consistent.
  • Response preserves asset ID and language fields.
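The structural checks above can be sketched in TypeScript. This is an illustrative sketch, not the actual eval harness: the cue-splitting helper and function names are assumptions, and it only covers the WEBVTT header, cue-count, and translation-differs checks.

```typescript
// Split a VTT document into cue blocks. A cue block is a blank-line-separated
// section containing a "-->" timestamp line. (Hypothetical helper.)
function parseCues(vtt: string): string[] {
  return vtt
    .split(/\n\s*\n/)
    .filter((block) => block.includes("-->"));
}

// Structural efficacy check: translated VTT must start with WEBVTT, keep the
// same number of cues as the original, and actually differ from the original.
function checkVttStructure(original: string, translated: string): boolean {
  if (!translated.trimStart().startsWith("WEBVTT")) return false;
  const originalCues = parseCues(original);
  const translatedCues = parseCues(translated);
  return originalCues.length === translatedCues.length && original !== translated;
}
```

Faithfulness scoring and language-code validation sit on top of checks like these; they need model-graded comparison against the transcript rather than simple string checks.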
Efficiency targets
  • Latency: scores are normalized between 0 and 1. Under 8s earns 1.0; past 15s trends toward 0.
  • Token usage: scores are normalized between 0 and 1. Under 2,500 tokens earns 1.0; higher usage reduces the score.
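The normalization above can be sketched as scoring functions. The thresholds (8s/15s, 2,500 tokens) come from the targets listed; the linear decay shapes are assumptions, since the suite only states that scores trend toward 0 past the threshold.

```typescript
// Latency score: 1.0 at or under 8s, decaying (assumed linearly) to 0 at 15s.
function latencyScore(seconds: number): number {
  const full = 8;  // at or under this, full score
  const zero = 15; // at or past this, zero score
  if (seconds <= full) return 1;
  if (seconds >= zero) return 0;
  return (zero - seconds) / (zero - full);
}

// Token score: 1.0 at or under 2,500 tokens; assumed linear decay that
// reaches 0 at double the budget.
function tokenScore(tokens: number): number {
  const budget = 2500;
  if (tokens <= budget) return 1;
  return Math.max(0, (2 * budget - tokens) / budget);
}
```

Under this sketch, gemini-3.1-flash-lite-preview (2.75s, 1,194 tokens) would earn 1.0 on both axes, while gpt-5-mini's 27.26s average latency would score 0.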
Expense guardrails
  • Estimated cost under $0.012 per request for full score.
  • Usage data must include total tokens for cost analysis.
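A sketch of the guardrail logic, assuming a hard cutoff at $0.012 and a `totalTokens` field name (both assumptions; the suite may use a softer curve and different field names):

```typescript
interface Usage {
  totalTokens?: number;
}

// Expense guardrail: full score requires cost under $0.012 per request,
// and usage data must report total tokens so cost analysis is possible.
function costScore(costUsd: number, usage: Usage): number {
  if (usage.totalTokens === undefined) return 0; // no tokens, no cost analysis
  return costUsd < 0.012 ? 1 : 0; // assumed hard cutoff
}
```

This cutoff explains the table below: gemini-3-flash-preview's $0.0124 average sits just over the $0.012 guardrail, consistent with its lowest average score (0.88).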

Provider breakdown

Run b7cce22
Efficacy score: higher is better
Latency: lower is better
Token usage: lower is better
Cost: lower is better
Provider  | Model                         | Cases | Avg Score | Avg Latency | Avg Tokens | Avg Cost | Avg Cost / Min
anthropic | claude-sonnet-4-5             | 3     | 1         | 7.47s       | 1,189      | $0.0099  | $0.0172/min
google    | gemini-2.5-flash              | 3     | 1         | 4.73s       | 1,653      | $0.0028  | $0.0049/min
google    | gemini-3-flash-preview        | 3     | 0.88      | 16.56s      | 4,633      | $0.0124  | $0.0216/min
google    | gemini-3.1-flash-lite-preview | 3     | 0.99      | 2.75s       | 1,194      | $0.001   | $0.0018/min
openai    | gpt-5-mini                    | 3     | 0.92      | 27.26s      | 2,389      | $0.004   | $0.0069/min
openai    | gpt-5.1                       | 3     | 1         | 4.9s        | 912        | $0.005   | $0.0087/min

Recent cases

Latest 6
Provider  | Model             | Time            | Asset   | Score | Latency | Cost
anthropic | claude-sonnet-4-5 | Apr 3, 07:58 PM | 88Lb01q | 1     | 6.68s   | $0.0098
anthropic | claude-sonnet-4-5 | Apr 3, 07:58 PM | 88Lb01q | 1     | 8.37s   | $0.0101
google    | gemini-2.5-flash  | Apr 3, 07:58 PM | 88Lb01q | 1     | 4.57s   | $0.0027
google    | gemini-2.5-flash  | Apr 3, 07:58 PM | 88Lb01q | 1     | 6.18s   | $0.0038
google    | gemini-2.5-flash  | Apr 3, 07:58 PM | 88Lb01q | 1     | 3.45s   | $0.002
anthropic | claude-sonnet-4-5 | Apr 3, 07:58 PM | 88Lb01q | 1     | 7.36s   | $0.0097