PM Interviews - Data-Driven Decision Making (Part 4)

Daily active users of your fintech app have dropped by 25% this week. What will you do?

If daily active users (DAUs) of my fintech app dropped by 25% in just a week, I would treat it as a critical red flag and move fast, but methodically. My first step would be to diagnose the root cause—before reacting, I’d need to understand what changed.

I would begin by slicing the data:

 

  • Is the drop across all user segments, or concentrated among new users, a particular region, or one platform (iOS/Android)?

  • Did it affect certain flows more—like login, KYC, money transfer, or bill payment?

  • Are there recent app releases, backend issues, or policy changes that correlate with the timing?

Once I isolate potential triggers, I’d dig into funnel-level analytics to pinpoint where users are dropping off. If login failures or app crashes are visible in the logs, I’d immediately loop in engineering to triage. I’d also monitor social media, app store reviews, and customer support queries for signs of UX friction or broken trust.
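To make the segmentation step concrete, here is a minimal sketch in Python/pandas of how I might compare this week’s DAUs with last week’s across a few dimensions. The file name, column names, and dates are hypothetical placeholders, not an actual schema:

```python
import pandas as pd

# Hypothetical daily activity log, one row per active user per day.
# Assumed columns: date, user_id, platform, region, user_type (new/returning).
events = pd.read_csv("daily_active_users.csv", parse_dates=["date"])

# Placeholder week boundaries, for illustration only.
this_week = events[events["date"] >= "2024-06-10"]
last_week = events[(events["date"] >= "2024-06-03") & (events["date"] < "2024-06-10")]

def avg_dau_by(df, dim):
    # Average daily unique users, broken down by one dimension.
    return (df.groupby([dim, "date"])["user_id"].nunique()
              .groupby(dim).mean())

for dim in ["platform", "region", "user_type"]:
    before = avg_dau_by(last_week, dim)
    after = avg_dau_by(this_week, dim)
    change_pct = ((after - before) / before * 100).round(1)
    print(f"\nWeek-over-week DAU change by {dim} (%):\n{change_pct}")
```

Whichever dimension shows a disproportionate decline is where I’d focus the funnel analysis.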

If the drop is linked to an internal issue—say, a failed update or new onboarding barrier—I’d prioritize a fix and communicate proactively to users. If it’s external—like a competitor’s aggressive campaign or seasonal behavior—I’d consider reactivating users with contextual nudges, cashback offers, or personalized push notifications.

Finally, I’d work with marketing and product analytics to track recovery daily, set up alerts for future anomalies, and conduct a retro to document learnings. A 25% drop in DAUs is serious—but if handled quickly and transparently, it can also build resilience into the product and trust with users.

What KPIs would you track for an online learning product?

If I were managing an online learning product, I’d structure KPIs around the learner lifecycle—from acquisition to engagement, completion, and long-term retention. Each stage tells us something different about product-market fit, user motivation, and value delivery.

  1. Acquisition & Activation

 

  • New Signups / Daily Active Users (DAUs) – Are we attracting the right audience?
  • Activation Rate – % of users who start their first course/module within 24–48 hours.
  • Customer Acquisition Cost (CAC) – Marketing efficiency.

 

  2. Engagement

 

  • Session Frequency & Duration – How often and how long do users engage?
  • Time Spent per Module – Indicates content engagement.
  • Quiz Participation Rate – Measures interaction and attentiveness.

  3. Learning Outcomes

 

  • Course Completion Rate – Are users finishing what they start?
  • Assessment Pass Rate – Reflects effectiveness of teaching methods.
  • User Feedback / Ratings – Qualitative signals of content quality.

  4. Retention & Monetization

 

  • D7 / D30 Retention – Are learners coming back?
  • Paid Conversion Rate – From free to paid users.
  • Average Revenue per User (ARPU) – Business viability.

 

  5. Community & Advocacy

 

  • Referral Rate – Are users recommending the platform?
  • NPS (Net Promoter Score) – Overall satisfaction.
  • Forum/Community Participation – Social learning health.
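To make a couple of these metrics concrete, here is a minimal sketch of how Activation Rate and D7 Retention could be computed, assuming hypothetical users and events tables; the file names, column names, and the "module_started" event are illustrative only:

```python
import pandas as pd

# Hypothetical tables, for illustration only:
#   users:  user_id, signup_date
#   events: user_id, event_name, event_date (e.g. "module_started")
users = pd.read_csv("users.csv", parse_dates=["signup_date"])
events = pd.read_csv("events.csv", parse_dates=["event_date"])

# Activation Rate: % of users who start their first module within 48 hours of signup.
first_module = (events[events["event_name"] == "module_started"]
                .groupby("user_id")["event_date"].min()
                .rename("first_module_at"))
cohort = users.join(first_module, on="user_id")
activated = (cohort["first_module_at"] - cohort["signup_date"]) <= pd.Timedelta("48h")
activation_rate = activated.mean() * 100

# D7 Retention: % of users active exactly 7 days after signup
# (a real report would restrict to cohorts that are at least 7 days old).
activity = events.merge(users, on="user_id")
activity["day_n"] = (activity["event_date"] - activity["signup_date"]).dt.days
d7_retention = (activity.loc[activity["day_n"] == 7, "user_id"].nunique()
                / users["user_id"].nunique() * 100)

print(f"Activation rate: {activation_rate:.1f}%")
print(f"D7 retention:    {d7_retention:.1f}%")
```

The same pattern extends to D30 retention or paid conversion by swapping the event name and the time window.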

 

An A/B test shows no significant difference in conversion. How would you interpret it?

If an A/B test showed no significant difference in conversion, I would see it as a data point, not a dead end, and I’d dig deeper before deciding on next steps.

First, I’d check the statistical validity of the test. Was the sample size large enough? Did we run it for a sufficient duration to account for seasonality or user behavior patterns? Sometimes a “no difference” result simply reflects an underpowered test rather than genuine equivalence between the variants.
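To sanity-check power, a back-of-the-envelope sample-size calculation with the standard two-proportion formula is often enough; the baseline conversion rate and minimum detectable lift below are placeholder assumptions, not results from a real test:

```python
from math import ceil, sqrt
from scipy.stats import norm

def required_n_per_variant(p_base, relative_lift, alpha=0.05, power=0.8):
    """Sample size per variant to detect `relative_lift` over baseline
    conversion `p_base` with a two-sided two-proportion z-test."""
    p1, p2 = p_base, p_base * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = norm.ppf(power)            # quantile corresponding to the desired power
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Placeholder assumptions: 4% baseline conversion, 10% relative lift worth detecting.
print(required_n_per_variant(p_base=0.04, relative_lift=0.10))
```

If the test actually collected far fewer users per variant than this, the “no difference” outcome says more about the test than about the feature.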

Next, I’d review the test design and execution: were both variants exposed to similar audience segments? Were there any external factors (marketing campaigns, outages, competitor promotions) that might have influenced the results?
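One concrete way to check execution is a sample ratio mismatch (SRM) test, comparing the observed assignment counts against the split the experiment was configured for. A minimal sketch, with placeholder counts:

```python
from scipy.stats import chisquare

# Placeholder assignment counts pulled from experiment logs (not real data).
observed = [50_410, 48_930]        # users actually bucketed into variants A and B
expected_split = [0.5, 0.5]        # the split the experiment was configured for
expected = [share * sum(observed) for share in expected_split]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"SRM check: chi2={stat:.2f}, p={p_value:.4g}")
```

A very small p-value here means the bucketing itself was skewed, in which case I wouldn’t trust the conversion comparison until the assignment issue is found and fixed.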

If the test was sound, I’d interpret the outcome as a sign that the change didn’t meaningfully impact user behavior. That could mean the feature wasn’t addressing a strong enough user pain, or the difference was too subtle to influence decisions.

From there, I’d decide between:

  • Iterating on a bigger, bolder change that’s more likely to shift behavior.
  • Pivoting to test a different hypothesis if the original assumption seems weak.
  • Keeping the existing version if the new approach adds complexity without measurable benefit.
