CAS.AI mobile publishing: KPIs, UA testing, monetization, in-app purchase

CAS.AI Method in Action: Reliable, Scalable Mobile Publishing

At CAS.AI we do mobile publishing the pragmatic way—clear KPI gates, Tenjin-powered analytics, and ILD monetization—so we’re scaling apps efficiently, not blindly. Start small, validate, then grow what works.

Who We Are

PSV Games started in Ukraine and moved its headquarters to Cyprus in 2014. Today we operate two offices in Cyprus and one in Ukraine, with a 150-person distributed team. Our portfolio includes roughly 500 apps: about 300 kids’ games (including our own Hippo brand and licensed IP such as Masha and the Bear, LOL, etc.) and around 100 non-kids titles. We actively run UA for ~100 apps, with 40–45 of them receiving major acquisition scale at any given time.

From Mediation to Mobile Publishing

We initially built our own monetization stack and mediation layer (“CAS.AI”). Because monetization performance was strong, partners began asking us for help with UA as well, and our publishing arm was born about 18 months ago. We now publish and scale UA for roughly 20 external studios, supporting marketing, UA, and monetization end-to-end.

How We Evaluate New Games

Before committing to scale, we review in-game metrics and run a small paid UA test. We primarily focus on hyper-casual and casual genres. Our baseline acceptance thresholds:

  • Retention. D0 (Day 0): ≥ 20%; D7: ~5–7%; D30: ~2–3% (higher is better).
  • Playtime / session length. Casual & hyper-casual: 7–10 minutes per session is a solid starting point.
  • Ad load & formats. Ad-only games: ~5–7 ad impressions per user/day. If mostly interstitials: target ~7/day. If interstitial + rewarded mix: ~5–7/day total (e.g., ~5 interstitials + 2 rewarded, adjusted via tests). Banners can lift LTV ~10%, but watch UX: they often bother users unless placed carefully.

Recommendation for studios: Set genre-specific guardrails, not absolutes. Use these benchmarks to decide “go/no-go” after a small test, then tune per cohort (geo/device/channel).
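The go/no-go decision described above can be sketched as a simple guardrail check. This is an illustrative example only: the threshold values mirror the casual benchmarks listed here, but the dictionary shape, field names, and function are hypothetical, not CAS.AI's actual tooling.

```python
# Illustrative go/no-go check against genre guardrails.
# Threshold values follow the casual/hyper-casual benchmarks above;
# everything else (names, structure) is a hypothetical sketch.

CASUAL_GUARDRAILS = {
    "d0_retention": 0.20,    # >= 20%
    "d7_retention": 0.05,    # lower end of ~5-7%
    "d30_retention": 0.02,   # lower end of ~2-3%
    "session_minutes": 7.0,  # lower end of 7-10 min per session
}

def passes_guardrails(metrics: dict, guardrails: dict = CASUAL_GUARDRAILS) -> bool:
    """Return True only if every metric meets or beats its genre guardrail."""
    return all(metrics.get(key, 0.0) >= floor for key, floor in guardrails.items())

test_cohort = {
    "d0_retention": 0.23,
    "d7_retention": 0.06,
    "d30_retention": 0.025,
    "session_minutes": 8.5,
}
print(passes_guardrails(test_cohort))  # True -> proceed to a paid UA test
```

In practice you would keep one guardrail dictionary per genre (and tune per geo/device/channel cohort after the first test), rather than a single absolute set.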

UA Testing for Mobile Publishing Scale

We start with small budgets—typically $150–$200 per channel—to validate economics before scaling. Historically we began with Google Ads + Firebase, but today we diversify channels. Example: a recent Mintegral test passed our KPI, so we scaled spend and added sources.

A typical early KPI might be D0 ROAS ≥ 67% (illustrative; varies by genre/country). If ad-mediation revenue on D0 exceeds the KPI threshold, we greenlight scaling and broaden channels.

Recommendation for studios: Define a per-genre D0/D1/D7 ROAS template (and margin of error). If a channel beats your target with stable retention and CPMs, promote it to your “always-on” set.
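A per-genre ROAS gate like the one above can be expressed in a few lines. The 67% hyper-casual target comes from this article; the casual target, the tolerance band, and the verdict labels are assumptions added for illustration.

```python
# Sketch of a per-genre D0 ROAS gate. The 0.67 hyper-casual target is the
# article's illustrative KPI; the casual target and tolerance are assumptions.

ROAS_TARGETS = {"hyper_casual": 0.67, "casual": 0.50}  # casual value hypothetical

def d0_roas(ad_revenue_d0: float, spend: float) -> float:
    """Day-0 ROAS: mediation revenue earned on D0 divided by test spend."""
    return ad_revenue_d0 / spend if spend else 0.0

def channel_verdict(genre: str, revenue: float, spend: float,
                    tolerance: float = 0.05) -> str:
    roas = d0_roas(revenue, spend)
    target = ROAS_TARGETS[genre]
    if roas >= target:
        return "scale"   # promote toward the "always-on" set
    if roas >= target - tolerance:
        return "retest"  # within margin of error: rerun with fresh creatives
    return "pause"

# A $180 channel test returning $130 of D0 mediation revenue -> ROAS ~72%
print(channel_verdict("hyper_casual", 130.0, 180.0))  # scale
```

The tolerance band captures the "margin of error" idea: a channel just under target gets retested before being cut, since small tests are noisy.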

Monetization Stack and Impression-Level Data (ILD)

We operate our monetization via CAS.AI mediation and collect impression-level revenue data (ILD). For every ad impression, we know the network, placement, format, and revenue. This feeds our dashboards and lets us see which combinations of network × geo × format × placement drive profitable scale.

  • Ad-first games: start with interstitial + rewarded; add banners only with no UI overlap and good viewability.
  • Hybrid games: lead with rewarded, add native/immersive where you can provide 2+ seconds of safe on-screen time, and place interstitials only at clean milestones.

Recommendation for studios: Treat ILD as a must-have. It’s the fastest way to spot dying placements, underperforming networks, or geos with inefficient CPMs.
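The network × geo × format × placement cut described above is, at its core, a group-by over impression-level records. Here is a toy aggregation: the record fields mirror the dimensions named in this section, but the data and eCPM thresholds are invented for illustration.

```python
# Toy ILD aggregation: roll up per-impression revenue into
# network x geo x format x placement cells and derive eCPM.
# All impression data below is invented for illustration.
from collections import defaultdict

impressions = [
    {"network": "AdMob",     "geo": "US", "format": "interstitial", "placement": "level_end", "revenue": 0.012},
    {"network": "AdMob",     "geo": "US", "format": "interstitial", "placement": "level_end", "revenue": 0.015},
    {"network": "Mintegral", "geo": "BR", "format": "rewarded",     "placement": "revive",    "revenue": 0.004},
]

totals = defaultdict(lambda: {"revenue": 0.0, "impressions": 0})
for imp in impressions:
    key = (imp["network"], imp["geo"], imp["format"], imp["placement"])
    totals[key]["revenue"] += imp["revenue"]
    totals[key]["impressions"] += 1

for key, agg in sorted(totals.items()):
    ecpm = 1000 * agg["revenue"] / agg["impressions"]  # revenue per 1,000 impressions
    print(key, f"eCPM=${ecpm:.2f}")
```

With a table like this refreshed daily, a dying placement or an inefficient geo shows up as a falling eCPM cell rather than a vague drop in blended ARPDAU.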

Analytics: Tenjin as the MMP and Data Backbone

  • Attribution + MMP basics (our single source of truth for UA).
  • Raw Data + DataVault access for analytics engineering.
  • Custom dashboards by game/platform/geo/channel/campaign.
  • Cohort breakdowns for ROAS, retention, LTV, CPM/eCPM, CTR/CR by country, device, source, and creative.

We import historical data (last 30/90/120 days) to analyze behavior and forecast LTV. Then we align creative and channel strategy with the best-performing cohorts.

Learn more about our analytics approach in this Tenjin × CAS.AI interview with our CMO, where we discuss DataVault usage, ILD workflows, and ROAS tracking in practice.

Recommendation for studios: Start with an MMP that gives you raw data. Let your analytics team own a single self-serve dashboard with the exact cuts UA managers need (channel × geo × D0/D7 ROAS, retention, CPI, eCPM).
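The "exact cuts UA managers need" can be prototyped from raw MMP rows before any BI tooling exists. This is a minimal sketch; the row schema, channel names, and figures are hypothetical, not Tenjin's actual export format.

```python
# Minimal channel x geo ROAS cut built from raw MMP-style rows.
# Schema and numbers are hypothetical, for illustration only.
rows = [
    # (channel, geo, spend, d0_revenue, d7_revenue)
    ("google_ads", "US", 200.0, 140.0, 260.0),
    ("mintegral",  "BR", 150.0,  90.0, 170.0),
]

roas = {}
for channel, geo, spend, rev_d0, rev_d7 in rows:
    roas[(channel, geo)] = {"d0": rev_d0 / spend, "d7": rev_d7 / spend}
    print(f"{channel:>10} {geo}  D0 ROAS={rev_d0 / spend:.0%}  D7 ROAS={rev_d7 / spend:.0%}")
```

The same raw-data access lets analytics own one self-serve dashboard instead of UA managers re-exporting spreadsheets per channel.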

Reporting, Access, and Collaboration

Internally, different teams (UA, analytics, monetization) receive role-based access (e.g., admin for analytics to build dashboards, read raw data). For publishing partners, we grant scoped access—they can see only their own apps and create reports. This transparency speeds up joint decision-making without exposing unrelated portfolios.

Recommendation for studios: Implement RBAC and per-app scoping for partners. Transparency builds trust; isolation protects data.

Example Outcome (What “Pass” Looks Like)

  • Early test on a new network meets/exceeds D0 KPI (e.g., ≥67% ROAS).
  • Ad mediation shows higher-than-target D0 revenue per user.
  • Retention and session length hold under added ad load.

Decision: scale spend, add more channels, iterate creatives, and retest geos with best early CPMs.

What Other Studios Can Apply (Actionable Takeaways)

  • Define acceptance KPIs per genre — use D0/D7/D30 retention and D0 ROAS guardrails to decide “test → scale” quickly.
  • Start small; prove unit economics — $150–$200/channel is enough to validate before you ramp.
  • Ad mix by design. Hyper-casual: 5–7 impressions/day total; interstitials at clean breaks; rewarded for boosts; banners only if they don’t harm UX. Hybrid: lead with rewarded + native/immersive; interstitials only at milestones.
  • Collect impression-level data (ILD) — prune weak placements, negotiate floors, and tune waterfall/bidding strategy.
  • Use an MMP with raw data — Tenjin (or similar) + a central dashboard by geo × channel × campaign accelerates decisions.
  • Segment by geo from day one — your D0 KPI in Tier-1 vs Tier-3 will differ. Don’t average them.
  • Respect UX — banners can add ~10% LTV only if they avoid UI overlap and accidental clicks.
  • Scale only what passes — if a channel beats your KPI and retention holds, move it to “always-on.” If it slips, pause and triage.

Closing: CAS.AI’s Way of Mobile Publishing and Scaling Apps

CAS.AI evolved from a kids-games studio to a global publishing and mediation platform by standardizing acceptance KPIs, testing UA in small, data-driven steps, running ad monetization on ILD, and building analytics on reliable MMP data. If you’re a studio looking to publish or to scale UA/monetization, adopt these practices—then iterate relentlessly.

Interested in publishing or mediation support?

Get in touch with CAS.AI. And if you need an MMP that scales with you, Tenjin remains a simple, powerful choice for attribution and raw data workflows.
