How we score marketing platforms
Transparent, methodology-first, published before reviews are written.
Every tool gets a single score from 1.0 to 10.0 calculated from four weighted dimensions, cross-checked with Bayesian-adjusted user data from Trustpilot. No vendor can pay for placement or a higher score.
The four dimensions
Pricing transparency
Weight: 25%. Clear pricing tiers published publicly. No hidden seat fees, usage cliffs, or invoice surprises. A free tier or trial that lets you genuinely evaluate the tool. A refund policy that actually works in practice.
Features
Weight: 30%. Depth and quality of core features for the platform's stated purpose. Whether automation logic and CRM behave as claimed under real load. Integration with the rest of a coach's stack (Kajabi, Stripe, Calendly, course platforms). Update cadence: is the platform actively improving or coasting?
Usability
Weight: 25%. Time to first useful automation or funnel. Quality of onboarding. UI responsiveness. Whether non-technical coaches can get 80% of the value without reading docs or hiring a consultant.
Customer support
Weight: 20%. Response time on real tickets (we open them ourselves). Quality of answers: actual resolution versus scripted runaround. Refund and cancellation experience. Whether support escalates billing issues or stonewalls.
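As a sketch, the weighted expert score these percentages imply can be computed like this. The dimension scores below are hypothetical examples, and the names `WEIGHTS` and `expert_score` are illustrative, not our production code:

```python
# Published dimension weights (sum to 1.0).
WEIGHTS = {
    "pricing_transparency": 0.25,
    "features": 0.30,
    "usability": 0.25,
    "customer_support": 0.20,
}

def expert_score(scores):
    """Weighted average of the four dimension scores (each on 1-10)."""
    assert set(scores) == set(WEIGHTS), "all four dimensions required"
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Hypothetical tool: strong usability, weaker support.
example = {
    "pricing_transparency": 8.0,
    "features": 7.0,
    "usability": 9.0,
    "customer_support": 6.0,
}
print(round(expert_score(example), 2))  # weighted average on the 1-10 scale
```

Because the weights sum to 1.0, the result stays on the same 1–10 scale as the individual dimensions.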
Bayesian Trustpilot smoothing
Raw Trustpilot averages over-reward low-volume tools. A tool with 12 reviews at 4.2/5 carries less information than one with 8,000 reviews at 4.0/5, even though its raw average looks higher.
Our formula pulls scores toward a neutral prior:
tp_normalized = (raw_score - 1) × (10 / 4)
bayesian = (n / (n + m)) × tp_normalized + (m / (n + m)) × C
where m = 15 (smoothing weight)
C = 7.0 (neutral prior on 1–10 scale)
n = number of Trustpilot reviews

A tool needs roughly 30+ reviews before user sentiment substantially moves the score. This protects against the “5-star/5-review” flattery problem common in newer tools.
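The smoothing formula above can be sketched directly in Python. The function name `bayesian_trustpilot` is illustrative; the constants are the published ones:

```python
M = 15   # smoothing weight
C = 7.0  # neutral prior on the 1-10 scale

def bayesian_trustpilot(raw_score, n):
    """Smooth a raw Trustpilot average (1-5 scale) toward the neutral prior."""
    tp_normalized = (raw_score - 1) * (10 / 4)  # rescale 1-5 onto 0-10
    return (n / (n + M)) * tp_normalized + (M / (n + M)) * C

# The worked example from the text: 12 reviews at 4.2/5 vs 8,000 at 4.0/5.
# The small sample is pulled toward the prior of 7.0, so the high-volume
# tool ends up slightly ahead despite its lower raw average.
print(round(bayesian_trustpilot(4.2, 12), 2))
print(round(bayesian_trustpilot(4.0, 8000), 2))
```

Note that at n = 15 the user data and the prior are weighted equally, and by n = 30 the user data carries two-thirds of the weight, which is where "roughly 30+ reviews" comes from.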
Frequently asked questions
How do you weight the four scoring dimensions?
Pricing transparency 25%, features 30%, usability 25%, customer support 20%. The weights reflect what matters to coaches and course creators choosing platforms in the $10–$500/month range, where switching costs are real and broken promises matter more than fancy features.
Why use Bayesian smoothing for Trustpilot ratings?
Raw Trustpilot averages over-reward low-volume tools. A tool with 12 reviews at 4.2/5 looks better than a tool with 8,000 reviews at 4.0/5, but the low-volume sample has high variance and is easy to inflate. Our formula pulls scores toward a neutral prior (C = 7.0 on our 1–10 scale) with smoothing weight m = 15, so a tool needs roughly 30+ reviews before the user data substantially moves the score.
Do vendors pay for higher scores?
No. Scores are published before we check whether a tool has an affiliate program. If a vendor offers compensation in exchange for a higher score or removed criticism, we publish the request. We only join affiliate programs for tools scoring 7.5 or higher on our methodology — the reverse of how most affiliate sites operate.
How often are scores updated?
Quarterly for the full methodology refresh. Trustpilot data is re-pulled monthly. Major product changes (new pricing tiers, removed features, acquisitions) trigger an immediate re-evaluation.
What's the final score formula?
final_score = 0.8 × expert_score + 0.2 × bayesian_trustpilot. Expert score is the weighted average of the four dimensions. The user weight is intentionally low because most coaches care more about whether the platform fits their workflow than aggregate user sentiment — but it serves as a sanity check against expert blind spots.
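Putting the pieces together, the published blend can be sketched as follows. The inputs here are hypothetical (an expert rubric score of 7.55 and a smoothed Trustpilot score of 7.44), and `final_score` is an illustrative name:

```python
def final_score(expert, bayesian_trustpilot):
    """Blend the expert rubric (80%) with smoothed user sentiment (20%)."""
    return 0.8 * expert + 0.2 * bayesian_trustpilot

# Hypothetical inputs on the 1-10 scale.
print(round(final_score(7.55, 7.44), 3))
```

The 80/20 split means user sentiment can nudge a score by at most ±2.0 points even in the extreme case, which matches its stated role as a sanity check rather than a driver.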