Workflow Eval Detail

Summarization

Generates concise summaries and smart tags from your content—perfect for search, discovery, and quick recaps.

Latest Run: completed
muxinc/ai
main · b7cce22 · @mux/ai v0.13.1
Cases: 23
Avg Score: 0.98
Avg Latency: 7.59s
Avg Cost: $0.0035
Avg Cost / Min: $0.0062/min
Avg Tokens: 2,737
TL;DR

High-quality summarization across providers, with `claude-sonnet-4-5` best on quality and `gemini-3.1-flash-lite-preview` best on latency and cost. Results are directional only, given the small sample of 3–4 cases per model.

Best Quality: anthropic · claude-sonnet-4-5
Fastest: google · gemini-3.1-flash-lite-preview
Most Economical: google · gemini-3.1-flash-lite-preview

What we measure

Each eval run captures efficacy, efficiency, and expense. We use this data to compare providers and track regressions over time.

Efficacy: quality + correctness
Efficiency: latency + token usage
Expense: cost per request
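The provider breakdown later in this page aggregates these three dimensions per model. A minimal sketch of how a per-case record and a field average could be modeled (the `CaseResult` shape and `average` helper are illustrative assumptions, not the actual @mux/ai schema):

```typescript
// Illustrative shape for a single eval case; these field names are
// assumptions, not the actual @mux/ai schema.
interface CaseResult {
  provider: string;
  model: string;
  score: number;     // efficacy: 0-1, higher is better
  latencyMs: number; // efficiency
  tokens: number;    // efficiency
  costUsd: number;   // expense
}

// Average one numeric field across cases, as in the provider breakdown table.
function average(
  cases: CaseResult[],
  field: "score" | "latencyMs" | "tokens" | "costUsd"
): number {
  return cases.reduce((sum, c) => sum + c[field], 0) / cases.length;
}
```

Grouping cases by `provider` and `model`, then applying `average` to each numeric field, would reproduce one row of the provider breakdown.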

Workflow snapshot

Suite status: success
Suite average score: 0.95
Suite duration: 52.87s
Last suite run: Apr 3, 07:56 PM

Evaluation criteria

From eval tests

We score summary quality, tag relevance, and semantic similarity while tracking latency, token usage, and cost.
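The semantic-similarity part of that scoring can be sketched as cosine similarity between embedding vectors of the generated title, description, and tags versus their references. The embedding step is elided here (any embedding model would do), and `cosineSimilarity` is an illustrative helper, not part of @mux/ai:

```typescript
// Compare two embedding vectors; 1.0 means identical direction,
// 0.0 means orthogonal (unrelated) text.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("vector lengths must match");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```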

Efficacy checks
  • Title is non-empty, <=100 chars, and avoids filler starters.
  • Description is non-empty, <=1000 chars, and avoids meta phrases.
  • Tags are non-empty strings, unique, and <=10 items.
  • Title, description, and tags are semantically similar to references.
  • Response includes asset ID and HTTPS storyboard URL.
Efficiency targets
  • Latency: scores are normalized between 0 and 1. Under 8s earns 1.0; past 20s trends toward 0.
  • Token usage: scores are normalized between 0 and 1. Under 4,000 tokens earns 1.0; higher usage reduces the score.
  • Usage data must include input and output tokens > 0.
Expense guardrails
  • Estimated cost under $0.015 per request for full score.
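The targets above can be sketched as scoring functions. The published numbers pin down only the endpoints; the linear falloff between latency thresholds and the inverse decay past the token and cost budgets are assumptions, not the actual implementation:

```typescript
// Latency: under 8s earns 1.0; past 20s trends toward 0.
// The linear ramp between the two thresholds is an assumption.
function latencyScore(seconds: number): number {
  if (seconds <= 8) return 1;
  if (seconds >= 20) return 0;
  return 1 - (seconds - 8) / 12;
}

// Tokens: under 4,000 earns 1.0; higher usage reduces the score.
// The inverse decay past the budget is an assumption.
function tokenScore(tokens: number, budget = 4000): number {
  return tokens <= budget ? 1 : budget / tokens;
}

// Expense: full score under $0.015 per request.
// The decay past the cap is an assumption.
function costScore(costUsd: number, cap = 0.015): number {
  return costUsd < cap ? 1 : cap / costUsd;
}
```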

Provider breakdown

Run b7cce22
Higher is better for efficacy score; lower is better for latency, token usage, and cost.
| Provider | Model | Cases | Avg Score | Avg Latency | Avg Tokens | Avg Cost | Avg Cost / Min |
| --- | --- | --- | --- | --- | --- | --- | --- |
| anthropic | claude-sonnet-4-5 | 4 | 0.99 | 5.79s | 3,108 | $0.011 | $0.0189/min |
| google | gemini-2.5-flash | 4 | 0.98 | 7.23s | 2,386 | $0.0029 | $0.0047/min |
| google | gemini-3-flash-preview | 4 | 0.99 | 6.49s | 2,889 | $0.0026 | $0.0054/min |
| google | gemini-3.1-flash-lite-preview | 4 | 0.99 | 3.57s | 2,319 | $0.0007 | $0.0013/min |
| openai | gpt-5-mini | 4 | 0.93 | 17.27s | 3,633 | $0.002 | $0.0043/min |
| openai | gpt-5.1 | 3 | 0.99 | 4.36s | 1,870 | $0.0016 | $0.0026/min |

Recent cases

Latest 6
| Provider | Model | Time | Asset | Score | Latency | Cost |
| --- | --- | --- | --- | --- | --- | --- |
| anthropic | claude-sonnet-4-5 | Apr 3, 07:58 PM | 88Lb01q | 1 | 5.26s | $0.0109 |
| anthropic | claude-sonnet-4-5 | Apr 3, 07:58 PM | 88Lb01q | 1 | 5.07s | $0.0108 |
| anthropic | claude-sonnet-4-5 | Apr 3, 07:58 PM | 88Lb01q | 1 | 6.26s | $0.0115 |
| google | gemini-2.5-flash | Apr 3, 07:58 PM | 88Lb01q | 0.95 | 6.76s | $0.0027 |
| google | gemini-2.5-flash | Apr 3, 07:58 PM | 88Lb01q | 1 | 4.13s | $0.0017 |
| anthropic | claude-sonnet-4-5 | Apr 3, 07:58 PM | 88Lb01q | 0.98 | 6.59s | $0.0108 |