Source of truth

How we score every betting tool, in public.

This is the rubric every Sharkbetting comparison page is built on. Six weighted criteria, 21 tracked dimensions, anchored to live data from our 10-second refresh pipeline. You can adjust the weights below and watch the ranking move.

Almost every other comparison site you read picks a winner first and works backwards. That is what lets a tool with the biggest affiliate payout end up at number 1. Sharkbetting publishes the inputs, the formulas, and a public changelog of every methodology version we have shipped, so the math is auditable end to end.

Last updated April 29, 2026. Methodology v4.2.

Reviewed by Erik Andersson, Content & Marketing Specialist.
6 weighted criteria
21 dimensions tracked
2,400+ price snapshots tested
Methodology version v4.2 (Apr 2026)
Open methodology

Built in the EU. GDPR compliant. Permanent free tier.

Section 1 of 6

Adjust the weights yourself.

Most comparison sites publish a ranking and bury the weighting. That is how a tool that pays the biggest commission ends up at the top. We do the opposite. Below is the same six-axis rubric used on every Sharkbetting comparison page, with the default weights we believe match a full-time European bettor. Slide them. The ranking re-sorts in real time, and the bar widths re-calculate. There is no hidden multiplier.

Each axis below the widget explains what we measure on that criterion, why the default weight sits where it does, and where the underlying data comes from. Scores per tool come from a 12-hour benchmarking session run quarterly, and price snapshots are pulled hourly between rubric cycles to keep pricing scores honest.

The weights default to 25-25-20-15-10-5 across pricing, methodology, coverage, speed, UX, and trust. Those numbers are not random. They are calibrated against weighted feedback from active full-time bettors in the Sharkbetting community and tested against three alternative weightings to make sure the rubric was not silently locked to a single result. The current weighting shipped as v4.2 in the changelog.

How we score every tool

Six axes, normalized to 100. Slide weights to match what you care about.

Ranking with these weights

  1. Sharkbetting: 8.5
  2. Trademate: 7.0
  3. OddsMonkey: 6.9
  4. RebelBetting: 6.8
  5. OddsJam: 6.5
  6. BetBurger: 6.5

Pricing

Default 25%
What we measure
Monthly cost at the entry tier and the highest tier, the size of the free or trial tier, and the per-feature unlock pattern (which axes are gated behind which plan).
Why this weight
Pricing is the first decision a serious bettor makes. A 109 EUR per month tool needs to clear a measurable extra hurdle versus a 1 EUR alternative, so pricing carries the joint-largest weight at 25 percent.
Data source
Self-serve checkout pages and active promo pricing on each vendor site, captured monthly. Currency converted to EUR using the same day mid-market rate.

Methodology

Default 25%
What we measure
Whether a tool publishes a baseline (Pinnacle, exchange, consensus), whether the math is documented, whether commission is applied, and whether the rating maps to a known statistical concept like closing line value.
Why this weight
A polished UI cannot rescue a fuzzy rating. Methodology gets the same 25 percent weight as pricing because it determines whether the alerts are real edges or noise.
Data source
Vendor documentation, public help articles, and 30-day-back-tested ratings versus closing prices on a 600-bet sample (NBA + EPL).

Coverage

Default 20%
What we measure
Bookmakers indexed, leagues and sports tracked, market types supported (1x2, totals, spread, props), and exchange or prediction-market integrations (Betfair, Smarkets, Polymarket).
Why this weight
Coverage gates the universe of available bets. We weight it 20 percent rather than higher because depth on the 20 books most bettors actually use beats a 70-book grid that only updates every three minutes.
Data source
Direct counts from each tool's filter UI, plus a manual audit of three sample matches per league to confirm the books that should appear are actually priced.

Speed

Default 15%
What we measure
End-to-end refresh latency from bookmaker odds change to user-visible alert, on the highest paid plan. Tracked on the same 12-hour bench across all tools.
Why this weight
Speed only matters once methodology and coverage are in place. We weight it 15 percent because anything under 15 seconds is functionally identical for value betting, and only arbitrage workflows reward sub-five-second refresh.
Data source
Two-week instrumented session per tool, comparing on-screen update timestamps with the underlying bookmaker page. Median, p90, and worst-case logged.

UX

Default 10%
What we measure
Time-to-first-bet from a cold signup, filter granularity, mobile parity, dark mode, keyboard accessibility, and how quickly an experienced bettor can clone a typical full-day workflow on the platform.
Why this weight
UX is decisive for new users and easily overweighted in glossy reviews. We cap it at 10 percent so a slick onboarding cannot mask weak pricing or methodology.
Data source
Two reviewers, each running a 30-minute structured task list (signup, filter setup, alert config, calculator use). Times averaged, friction noted.

Trust

Default 5%
What we measure
Years operating, public team, registered legal entity, GDPR posture, refund policy, and Trustpilot signal once we discount obvious review-farm spikes.
Why this weight
Trust is essentially a floor: a tool either clears the bar or it gets excluded entirely. We weight it 5 percent because among shipped tools the variance is small, and a hard exclusion (rather than a low score) handles the failure case.
Data source
Companies House and equivalent registries, the tool's About and Terms pages, Trustpilot and Reddit signal screened for inorganic patterns.
Section 2 of 6

How the tests run.

Every score on this page is anchored to a documented test protocol. The protocol was last revised on April 29, 2026 as part of methodology v4.2. The same protocol is used for every tool, every quarter, with no per-vendor adjustments.

Data sources

Methodology scoring is anchored to live data from our own 10-second refresh pipeline, which continuously samples bookmaker prices against Betfair and Polymarket exchange prices across NBA, NFL, EPL, La Liga, Bundesliga, and Champions League. Competitor scoring is based on documented features, published pricing, and trial-account observation — not automated competitor scraping. We do not pretend to have controlled access to other tools' internal alert pipelines.

Pricing-snapshot freshness is enforced by the live pipeline that powers Match View. Comparison verdicts are revisited each rubric cycle so a vendor cannot lock a single ranking by changing prices mid-quarter.

Markets covered

1x2 (match winner including draw), totals (over and under), and spread (handicap, point spread). Player props, corners, and cards are excluded from the comparison sample because pricing across competitor tools is too inconsistent to compare like-for-like.

Each market type is sampled at the same hour-of-day weighting so an evening-heavy tool cannot lap a tool with stronger morning coverage.
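The equal hour-of-day weighting can be sketched in a few lines. This is a minimal illustration, not the production sampler: the function name and the (hour, rating) input shape are our own, and the real pipeline weights hours before scores are compared rather than after.

```python
from collections import defaultdict
from statistics import mean

def hour_balanced_mean(snapshots):
    """Average a tool's snapshot ratings with equal weight per
    hour-of-day, so a tool sampled heavily in the evening cannot
    outweigh one sampled evenly across the day.

    snapshots: iterable of (hour_of_day, rating) pairs.
    """
    by_hour = defaultdict(list)
    for hour, rating in snapshots:
        by_hour[hour].append(rating)
    # Mean of hourly means: each observed hour contributes once,
    # regardless of how many snapshots landed in it.
    return mean(mean(vals) for vals in by_hour.values())

# Evening-heavy tool: three 20:00 snapshots, one 09:00 snapshot.
evening_heavy = [(20, 104), (20, 106), (20, 105), (9, 95)]
print(hour_balanced_mean(evening_heavy))  # (105 + 95) / 2 = 100
```

A naive mean over the same four snapshots would be 102.5, rewarding the tool for when it was sampled rather than how it priced.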

Tools compared

Every published Sharkbetting comparison covers the same 6 tools: Sharkbetting, OddsJam, OddsMonkey, Trademate, RebelBetting, and BetBurger. We add a tool only after it has been live for 12 months under the same ownership and after it clears our trust floor.

Each tool is tested on its highest paid plan (so the comparison reflects what a full-time bettor actually pays for) and on its free or trial tier (to score the on-ramp).

Noise controls

  • Bookmaker outage filtering. If more than 5 percent of indexed books return a 5xx in a given hour, that hour is excluded for every tool, not just the slowest.
  • Holiday-period exclusion. The 24 hours either side of Christmas, New Year, and Super Bowl Sunday are dropped because price discovery is degraded.
  • Affiliate quirk handling. Promo prices (signup boosts, free bets) are stripped before pricing is compared so a vendor cannot buy its way up the table.
  • Latency normalization. Refresh times are measured against the same 12-hour bench window per tool to prevent off-peak measurement bias from inflating slower platforms.
  • Currency control. Every paid plan is converted to EUR on the date of the snapshot, using the same mid-market reference, so a tool that prices in GBP is not penalized when sterling weakens.
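The outage-filtering rule in the first bullet reduces to a single comparison. A minimal sketch, with a hypothetical function name and input shape; the production filter operates on the pipeline's own snapshot store:

```python
def hours_to_exclude(status_log, total_books, threshold=0.05):
    """Return the set of hours in which more than `threshold` of
    indexed bookmakers returned a 5xx. Those hours are dropped for
    every tool in the comparison, not just the slowest one.

    status_log: dict mapping hour -> set of book IDs that 5xx'd.
    """
    return {
        hour for hour, failed_books in status_log.items()
        if len(failed_books) / total_books > threshold
    }

# Hour 13: 2 of 20 books down (10%) -> excluded.
# Hour 14: 1 of 20 books down (exactly 5%) -> kept.
log = {13: {"bet365", "unibet"}, 14: {"bet365"}}
print(hours_to_exclude(log, total_books=20))  # {13}
```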

Reproducibility

The raw price-snapshot CSV plus the scoring sheet for every rubric cycle are available on request. Email methodology@sharkbetting.com with a short note about how you intend to use it. We have honored every request received in 2025 and 2026 within 48 hours. Once a year (typically in Q1) we publish a summary table comparing the previous 4 cycles so methodology drift is visible at a glance.

Section 3 of 6

What we actually compute.

Three formulas anchor everything on this page. The first is how we rate a single bookmaker price against a fair baseline. The second is closing-line value, the strongest known predictor of long-term betting profit. The third is the weighted-rubric score that drives the ranking in Section 1. Each one is short, deliberately, so you can audit it without a stats degree.

1. getDelta rating (exchange-baseline)

outcome_rating = ((bookmaker_price - 1) / (exchange_price - 1)) * 100

A rating of 100 means the bookmaker price is exactly equal to the fair exchange price. A rating of 105 means the bookmaker price is 5 percent above fair. Anything above 100 (after commission adjustment) is positive expected value, anything below is negative.

We use exchange prices (Betfair, Smarkets, Polymarket) instead of a bookmaker consensus because exchange prices are sharper. They reflect actual matched money, not a weighted average of soft books. A 105 rating against a sharp exchange baseline is therefore stronger evidence of value than the same 105 rating measured against a soft-book consensus.

Multi-exchange support means the rating uses the best available exchange price per outcome. A user can opt to compare against Betfair only, Polymarket only, or both, and the selection is applied at query time.
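As a minimal sketch, the rating plus the best-price selection fit in a few lines. The function name mirrors getDelta but is our own, and we are assuming "best available exchange price" means the highest back price among the exchanges the user opted into; this page does not spell out the tie-break rule.

```python
def get_delta(bookmaker_price, exchange_prices):
    """Exchange-baseline rating from Section 3:
    ((bookmaker_price - 1) / (exchange_price - 1)) * 100,
    computed against the best available exchange price
    (assumed here: the highest back price the user opted into).
    """
    baseline = max(exchange_prices.values())
    return (bookmaker_price - 1) / (baseline - 1) * 100

# Betfair only: baseline 1.95 -> rating ~115.79.
print(round(get_delta(2.10, {"betfair": 1.95}), 2))

# Betfair + Polymarket: Polymarket's 2.00 becomes the baseline,
# so the same bookmaker price rates lower (~110.0).
print(round(get_delta(2.10, {"betfair": 1.95, "polymarket": 2.00}), 2))
```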

2. Closing-line value (CLV)

CLV = (entry_odds - closing_odds) / closing_odds * 100

CLV measures whether you bet at a sharper price than the market closed at. If you took 2.10 and the market closed at 2.00, you have +5 percent CLV. Long-run profit (with enough volume) tracks closing-line value with high reliability, which is why it is the standard sharps use to grade their own staking.

We log CLV on a 600-bet sample per cycle and use the per-tool median CLV as the methodology score floor. A tool that surfaces alerts which average +2 percent CLV is in a different league from one that averages -1 percent CLV, even if both look slick on screen.

Closing odds are pulled within 60 seconds of kickoff (or the official market close on prediction markets) from the same exchange feed used for getDelta.
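The CLV calculation and the per-tool median can be sketched directly from the formula above. The function name and the three-bet sample are illustrative; the real protocol runs over the 600-bet sample described earlier.

```python
from statistics import median

def clv(entry_odds, closing_odds):
    """Closing-line value from Section 3:
    (entry_odds - closing_odds) / closing_odds * 100.
    Positive CLV means the bet beat the closing price."""
    return (entry_odds - closing_odds) / closing_odds * 100

# Took 2.10, market closed 2.00 -> +5 percent CLV.
print(round(clv(2.10, 2.00), 1))  # 5.0

# Per-tool methodology floor: median CLV over the bet sample.
sample = [clv(e, c) for e, c in [(2.10, 2.00), (1.95, 2.00), (2.05, 2.00)]]
print(round(median(sample), 1))
```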

3. Weighted-rubric score

score = sum(criterion_score_i * weight_i / 100)

The math is a weighted average. Each tool gets a 0 to 10 score on each of the 6 criteria, the weights are normalized to sum to 100, and the final score is the sum of (criterion score x normalized weight). Identical to what most comparison sites use internally and identical to what your sliders compute live in Section 1.

We publish the weights so the math is fully auditable. Set pricing to 0 and methodology to 100 if you want the methodology-purist ranking. Set coverage to 100 if you only care about how many bookmakers a tool indexes. The output is honest either way.

The score is rounded to one decimal place at the end. We never round intermediate values, because that would let a 0.05 difference on a single axis compound into a half-point swing on the final score. The slider in Section 1 uses the exact same rounding rule the back end does.
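The weighted average, the weight normalization, and the end-only rounding rule can be sketched together. The function name is our own; the criterion scores in the example are Sharkbetting's published self-scores from the FAQ below.

```python
def rubric_score(criterion_scores, weights):
    """Weighted-rubric score from Section 3. Weights are normalized
    to sum to 100, and rounding happens exactly once, at the end --
    intermediate values are never rounded."""
    total_w = sum(weights.values())
    raw = sum(criterion_scores[axis] * weights[axis] / total_w
              for axis in weights)
    return round(raw, 1)

default_weights = {"pricing": 25, "methodology": 25, "coverage": 20,
                   "speed": 15, "ux": 10, "trust": 5}
sharkbetting = {"pricing": 9.0, "methodology": 9.5, "coverage": 7.0,
                "speed": 9.0, "ux": 6.5, "trust": 8.5}

# Final score on the default weights (raw value 8.45 before rounding).
print(rubric_score(sharkbetting, default_weights))

# Normalization means weights need not sum to 100: 50/50 and 1/1
# produce the same ranking.
print(rubric_score({"a": 10.0, "b": 0.0}, {"a": 1, "b": 1}))  # 5.0
```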

Worked examples

Methodology in action: 2 worked scenarios.

The formulas above answer the question of what we compute. The next question is whether the choice of baseline (exchange versus consensus) actually changes a verdict in practice. The two scenarios below walk through situations where it does. Both are illustrative and hypothetical, framed so the math is reproducible from the formula in Section 3 above. The match, the score, and the odds are constructed for teaching purposes; the arithmetic is real.

Scenario 1

High-liquidity match: exchange-baseline wins

Setup. Consider a hypothetical Premier League match between two top-half teams. Bookmaker pricing for the home win sits at decimal 2.10. Eight other bookmakers price it between 2.04 and 2.08, with consensus average 2.06. Betfair Exchange is showing 1.95 with around 80,000 GBP of matched liquidity.

Sharkbetting's getDelta (exchange-baseline). Plug the numbers straight into the formula from Section 3:

((2.10 - 1) / (1.95 - 1)) * 100 = 115.79

A rating of 115.79 is a strong value signal. The bookmaker is offering an outcome at roughly 15.8 percent better than the price liquid Betfair money is matching at.

Consensus-baseline. Swap the exchange price for the 2.06 bookmaker consensus and the same formula yields:

((2.10 - 1) / (2.06 - 1)) * 100 = 103.77

Same outcome, materially weaker rating. A 103.77 rating reads as a moderate edge that many bettors would skip after factoring in commission, variance, and bookmaker limit risk.

Why the gap matters here. Betfair at 80,000 GBP of matched liquidity represents real money positioned by sharp participants in a market that clears continuously. The 2.06 consensus averages eight bookmaker books that may all be pricing off the same upstream model, so the consensus is closer to a single opinion expressed eight ways than to eight independent reads. When exchange volume is high, the exchange price is the sharper anchor and exchange-baseline finds value the consensus quietly hides.

Takeaway: in high-liquidity EU markets, exchange-baseline finds value that consensus-baseline misses.

Scenario 2

Thin market: consensus might be sharper

Setup. Imagine a hypothetical second-division match in a smaller European league, late in the season. Bookmaker pricing for an under-2.5 goals market sits at decimal 2.05. Six other bookmakers cluster between 1.98 and 2.02, consensus 2.00. Betfair Exchange shows 2.40, but only around 400 GBP of matched liquidity (almost no real money active).

Sharkbetting's getDelta (exchange-baseline). Apply the formula again:

((2.05 - 1) / (2.40 - 1)) * 100 = 75.0

A rating of 75.0 sits well below the 100 fair-value line, which would normally read as no value at all. By this number alone a bettor would walk away.

Consensus-baseline. Swap in the 2.00 consensus and the picture flips:

((2.05 - 1) / (2.00 - 1)) * 100 = 105.0

A 105.0 rating is a moderate but real value signal. Two baselines, two opposite verdicts on the same bookmaker price.

Why methodology purity loses here. At 400 GBP of matched liquidity the Betfair price is more noise than signal. The 2.40 reflects one or two casual opinions sitting on the order book, not an aggregate of sharp money. Consensus across six independent bookmaker books is the sturdier anchor in this regime, and the 105.0 rating is the read that better matches what a careful bettor would conclude on their own. When exchange volume is thin, exchange-baseline becomes mathematically unreliable, so Sharkbetting publishes the rating and flags the liquidity context rather than pretending the number is a verdict.

Takeaway: on thin markets, consensus-baseline (or no signal at all) beats methodology purity.

The two hypothetical scenarios point in opposite directions, which is exactly the reason Sharkbetting publishes both the exchange rating and the liquidity context next to it. A user who understands the methodology can adjust their personal threshold (for example, only trusting ratings above 100 when matched liquidity exceeds 20,000 GBP) and use the rating as a tool. A user who does not can default to that same rule of thumb and still avoid the worst false positives. Either way, the math is on the page, the inputs are visible, and a single number never has to do the work of a verdict.
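That rule of thumb can be written down as a one-line filter. This is a sketch of the example threshold in the paragraph above, not a Sharkbetting feature: the function name and parameter names are ours, and the 20,000 GBP floor is the illustrative number from the text.

```python
def actionable(rating, matched_liquidity_gbp,
               rating_floor=100.0, liquidity_floor=20_000):
    """Rule-of-thumb filter from the two scenarios: treat an
    exchange-baseline rating as a signal only when it clears fair
    value AND the exchange book is liquid enough for the baseline
    to mean anything."""
    return rating > rating_floor and matched_liquidity_gbp > liquidity_floor

print(actionable(115.79, 80_000))  # Scenario 1, liquid market: True
print(actionable(75.0, 400))       # Scenario 2, thin market: False
```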

Read the full methodology, weights, and changelog in the sections above, or use the interactive weight slider to see how different priorities change the tool ranking in real time.

Section 4 of 6

Methodology changelog.

A public changelog matters because methodology drifts. Refresh intervals get faster, default commission gets revised, new exchange feeds come online. If a comparison page silently bumped its scores, you would have no way to spot the move. Every numbered version below is a real change to how Sharkbetting prices, scores, and ranks bets, dated and explained. Click any entry to see the full note. The most recent change is highlighted.

We treat methodology like product code. Material changes get a version, a date, and an explanation. Cosmetic edits do not. If a competitor ever changes how their rating is calculated and you have no way to find out, you should probably not be using their rating. The bar is the same for us.

Methodology whitepaper, version 1.0

The full technical specification: getDelta formula, multi-exchange aggregation, CLV protocol, and an honest list of failure modes. Cite-ready for journalists, analysts, and other comparison sites.

Download whitepaper

How the methodology has evolved

Every change to how Sharkbetting prices, scores, and ranks bets, dated and explained.

  1. Polymarket exchange prices now flow through the same getDelta CTE as Betfair. Users can compare bookmaker odds against Polymarket only, Betfair only, or both, and the system uses DISTINCT ON to pick the best exchange price per outcome. Brings full crypto and prediction-market coverage to value-betting alerts.

Section 5 of 6

Who runs these tests.

Methodology credibility comes down to who is running it. Below is the named author plus the supporting review chain. We name everyone who is named, and we are explicit when something is unnamed.

About the author

Erik Andersson
Erik Andersson

Content & Marketing Specialist

Erik writes Sharkbetting's product comparisons and methodology guides. He covers value betting, matched betting, and exchange-baseline rating systems for European sports bettors, with a focus on practical workflows and tooling decisions.

Read more articles by Erik Andersson
  • Lead author

    Erik Andersson, Content & Marketing Specialist. Writes every Sharkbetting comparison page and owns the rubric.

  • Internal review

    Sharkbetting engineering (Match View team, 4 engineers) reviews every methodology version before it ships and signs off on the rubric weights each quarter.

  • External review

    Methodology is audited annually by an independent data scientist. The most recent audit (March 2026) is available on request. We do not name the auditor publicly to keep the next audit blind.

We score Sharkbetting on the same rubric. Open the slider above, set methodology to zero, and Sharkbetting drops a slot. That is the test. If a comparison site cannot fail itself on its own metric, the metric is not real.
Erik Andersson, lead author
Section 6 of 6

Frequently asked questions.

Eight questions covering the parts of the methodology readers most often push on. Each answer is structured to give the direct answer first, then the supporting data, then a sentence of context. The same questions are exposed as FAQPage schema so the answers stay legible to AI assistants and search snippets.

  • Why publish the weights at all?

    Almost every comparison site you read hides the weighting. That is what lets a tool that pays the biggest affiliate commission top the list. Publishing the weights and letting you change them in the browser means the ranking you see is the ranking you asked for. If you want pricing at zero and coverage at 100, the slider supports it and our number 1 changes accordingly.

  • How often does methodology change?

    Material changes ship as numbered versions on the public changelog above. Since launch in 2024 we have shipped 7 numbered methodology versions, roughly one every 3 months. Each version note explains exactly what moved (refresh interval, default commission, new exchange supported) and why. Cosmetic tweaks like wording fixes do not get a version bump.

  • Can users see the raw test data?

    Yes. Email methodology@sharkbetting.com and we will share the price-snapshot CSV and our scoring sheet for the latest rubric cycle. We do not publish it as a one-click download yet because the file is roughly 90 MB and we want to preserve the chance to flag any reuse. We have honored every request received in 2025 and 2026 within 48 hours.

  • Why is pricing weighted 25 percent instead of 30 percent?

    We tested 25, 30, and 40 percent against a held-out sample of bettor surveys. At 30 percent and above, RebelBetting and OddsMonkey jumped above tools with materially better methodology purely on price difference. 25 percent is the highest weight at which methodology still gates the top 3, which we think reflects how full-time bettors actually choose.

  • Is the methodology peer-reviewed?

    Internally yes, externally not yet in a formal sense. The weights and formulas are reviewed by the Sharkbetting engineering team (4 people on the Match View squad) and revised quarterly. We commissioned an external statistical review in March 2026 from an independent data scientist; the resulting note (1 page, no conflicts found) is available on request.

  • Does Sharkbetting score itself in this rubric?

    Yes, transparently. Sharkbetting appears in the rubric tools list with the same scoring approach as competitors. We score ourselves 9.0 on pricing, 9.5 on methodology, 7.0 on coverage, 9.0 on speed, 6.5 on UX, and 8.5 on trust. We score lower than at least one rival on coverage and UX, which is the honest read.

  • What if a tool changes pricing or features mid-test?

    Mid-cycle vendor changes are caught by the monthly price re-pull. If a tool drops its starter tier from 49 to 19 EUR, the next rubric cycle reflects it and the changelog records the input change. We do not retroactively rewrite past cycles, because that would let a vendor lobby for a score adjustment after the fact.

  • How do you handle outages or stale data?

    Snapshots taken during a known bookmaker outage (more than 5 percent of indexed books returning a 5xx) are excluded from the speed and coverage axes for that window. We also exclude the 24 hours either side of major holidays (Christmas, New Year, Super Bowl Sunday) because price discovery is degraded and would unfairly punish slower-refresh tools.

Want the raw data?

Email methodology@sharkbetting.com. We share the price-snapshot CSV and the scoring sheet on request, usually within 48 hours.

Request the data