role

product design

company

physicswallah

platform

android & web

timeline

1-2 months

turning blind practice into personalised paths

for students to get a clear view of their proficiency & ace their respective exams

background

Despite thousands of students practicing daily across DPPs, Tests, Infinite Practice, and Sahayak, there was no clear picture of what students had mastered or where they struggled. Everyone received the same chapter questions regardless of proficiency. We saw this as a missed opportunity to help students learn smarter, not harder.

opportunity

Our goal was to convert fragmented practice data into meaningful insights — to show each student where they stood, and guide them with a customized study path to reach the next milestone in their journey.

We asked: Can we personalise a student’s learning path based on their chapter-level proficiency?

problem statement

Students were practicing blindly. They didn’t know what they were weak at, how close they were to being exam-ready, or what to do next.


design process

Three Separate Flows: Finding the narrative arc

We thought about how to onboard and nudge users into our core flow so they could improve their overall proficiency and understand where they stood. Since personalising a student's learning path based on their chapter-level proficiency was a no-brainer for us, we set up triggers to onboard/nudge users into the personalised learning path flow based on chapter completion at their respective batch level.


Therefore, if chapter ‘X’ has been completed in a student’s batch and the student has watched the live or recorded class, we trigger the onboarding nudge: it offers them a personalised learning path and explains their current proficiency level.
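The trigger condition above can be sketched as a simple predicate. This is a hypothetical illustration; the class and field names are mine, not PW's actual codebase:

```python
from dataclasses import dataclass, field

@dataclass
class Chapter:
    name: str
    completed_in_batch: bool  # has the batch finished this chapter?

@dataclass
class Student:
    # names of chapters whose live/recorded class the student has watched
    watched_chapters: set = field(default_factory=set)

def should_trigger_nudge(chapter: Chapter, student: Student) -> bool:
    # Nudge only when the chapter is complete at batch level AND the
    # student has watched its live/recorded class
    return chapter.completed_in_batch and chapter.name in student.watched_chapters
```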

From that, we came up with the first flow leading the user into our core flow:

Onboarding Nudge on Chapter Completion → Exam Level → Study Path

Since the onboarding nudges were designed so that users could check them later or skip them (to avoid an intrusive experience), it was again a no-brainer to provide an entry point for whenever they wished to check their chapter-level personalised study paths.

For this, we thought of two additional flows through which the user could access these personalised study paths:

Using our AI Chatbot to Nudge Students on Chapter Completion & Reminders → Exam Level → Study Path

Quick Access Shortcut on Homepage → Exam Level → Study Path


Discard Designs — Onboarding, Reminders & Nudges

Proficiency Flow : Making Proficiency Obvious

We knew the ‘aha’ moment for the student would be seeing exactly where they stood after all the practice done over the course of a chapter. Alongside this, we also had to solve for users wanting to know how their proficiency was calculated and how they could improve it.

So we made sure the proficiency level screens resembled a report that feels non-aggressive, shows the student's current proficiency and how it was calculated, and finally nudges them towards generating their personalised study path.


Discard Designs — Exam Level Proficiency

Study Path Flow : From Insight to Action

From a student's perspective, seeing the gap between where they stand and where they want to be is valuable, but we also wanted to help close that gap. For that, we needed a roadmap that never overwhelmed the student.

We thought a lot about how to showcase a roadmap that didn't feel like one long, overwhelming path but one students could relate to. Since our ultimate goal was to get them ready for their respective exams (JEE & NEET), after many iterations and discussions we decided to frame the milestones so that completing the personalised roadmap generated from their proficiency report felt like getting ready for the exams themselves, such as JEE Mains & JEE Advanced.


Discard Designs — Personalised AI Study Path

Designing For Action : Easy to do Actions all the way

While designing our core flow (the exam level and study path screens), we realised that once students checked their proficiency after a chapter was completed in their batch, their immediate question would be: what now?

The onboarding we designed was already helping students understand their next action on the Exam Level screens, so for the latter part of the flow (Exam Level & Study Path) we knew we had to design the study path screens for action. We did this by:

Splitting study path practice into two unlockable modules based on exam: JEE (Mains, Advanced) & NEET (Pro, Advanced)

Swapping generic CTAs for context‑aware labels

Adding dynamic Estimated‑Time tags for each module (#Qs × avg. time taken per question)
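The Estimated-Time tag is simple arithmetic; a minimal sketch of the formula above, assuming the result is rounded up to whole minutes for display:

```python
import math

def estimated_time_minutes(num_questions: int, avg_seconds_per_question: float) -> int:
    # Estimated-Time tag: #Qs x avg. solve time, rounded up to whole minutes
    return math.ceil(num_questions * avg_seconds_per_question / 60)

# e.g. a 20-question module where students average 90s per question
# would display an estimated time of 30 minutes
```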

What we went ahead with: Finalising the best of iterations

Onboarding, Reminders & Nudges


Exam Level Proficiency


Personalised AI Study Path


Wrestling with Edge Cases : Locking it in from all ends

We had to make sure the feature worked across all our student cohorts, whatever their proficiency. While going through the flow multiple times, we realised we were missing a number of things that, once added, could improve the overall experience:

Rethinking the ‘Skip’ logic for attempting questions — if a student skips some questions in between and directly attempts the last one, we prompt them to attempt the skipped questions; if they then show poor accuracy on those, we trigger the poor accuracy flow

Poor accuracy flow — which appends extra questions so the student keeps trying

Onboarding flows — separate onboarding for students arriving through different entry points such as our AI Guru, the Homepage, and the Sahayak screens
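The skip and poor-accuracy rules above can be sketched as one decision function. This is a hypothetical illustration; the names are mine, and the 50% accuracy threshold is an assumption, as the exact cutoff isn't stated here:

```python
def next_action(attempts: list[dict]) -> str:
    """attempts: ordered per-question dicts {'attempted': bool, 'correct': bool}."""
    # Case 1: questions skipped in between while the last one was attempted directly
    skipped_in_between = any(not a["attempted"] for a in attempts[:-1])
    if skipped_in_between and attempts and attempts[-1]["attempted"]:
        return "prompt_skipped_questions"
    # Case 2: poor accuracy on the attempted questions
    answered = [a for a in attempts if a["attempted"]]
    if answered:
        accuracy = sum(a["correct"] for a in answered) / len(answered)
        if accuracy < 0.5:  # assumed threshold
            return "poor_accuracy_flow"  # append extra questions to keep trying
    return "continue"
```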

Copy, Motion, Web Screens, Feedback Mechanism & Dark Mode Polishing : Rethinking & Polishing

Handling copy the right way was really important for us, because we were working in a space where students need a lot of motivation to practise more. Designing for action therefore went hand in hand with maintaining the right copy across the screens — because even the little things matter.

Along with that, I also worked on the web screens and motion assets, and polished token usage so dark mode was one click away.

Collaborations & Reviews Across Stakeholders : Every Feedback Counts

A large part of working at PhysicsWallah was going through multiple levels of feedback, and since this project was crucial for the company in the AI race, it went up to the CEO for review — which is not the usual case for an organisation this large.

So we had to make it easy for the CEO and other leadership roles across different teams to test the flows without even opening Figma on their systems. I created a Prototype Handler containing single-action buttons to test out literally ALL the flows: the happy flow, feature flows, edge cases, and so on.

Impact (Early Signals) : Validating Phase 1 out of 3

This was tested & released as an A/B experiment in nearly 20 mega-batches as part of Phase 1 of this project.

For Phases 2 & 3, we were looking to add an overall Exam Level Proficiency that is not constrained to chapter level, and to completely deprecate the current revision/backlog/manual practice flows from our system — so the Phase 1 release had to perform well, and it did!

| Metric | Pre-launch | Post 4 weeks | Comments |
| --- | --- | --- | --- |
| Avg. questions per chapter (Study Path) | – | 38% improvement | New flow drove deeper practice |
| Students completing ≥ 1 milestone | – | 62% improvement | Clear level‑up goal boosted focus |
| Overall qual feedback (“I know what to do next”) | 27% | 73% | Increased confidence & reduced guesswork |

Long‑term effects (rank improvement) will surface in future JEE/NEET cycles.

Where I left off & How it continued further : Finding Path to Phase 2

As part of the next steps I was discussing and working on, we looked at the motivation charter for students — increasing the current metrics through gamification and the Octalysis framework.

We approached this by thinking along the lines of building a separate entity called “LevelUP”, which dealt with rewards and pushed out badges upon reaching certain milestones, under the umbrella of these two features:

XP Addition

Exam/Batch Level Leaderboards

Some of the iterations that led the path to the LevelUP integration and the evolution of this entity into something bigger

research & doubling down on our hypothesis

Objective:

  1. To understand the motivation of students to come to PW & their expectations from us

  2. To study the challenges faced by different student cohorts, split by their engagement level on PW

  3. To understand if the needs of students vary based on the stage they are currently at in their preparation journey

What did we observe:

  1. 75% of JEE‑2025 students watched <20% of the very first chapter’s live/recorded lecture — an early drop‑off that threatened long‑term retention.

  2. Only 30% of heavy DPP users felt the questions were “good enough” for exam prep; the rest called for better relevance and difficulty balance.

  3. A quarter of our poorly‑engaged cohort relied solely on PW as their study source, meaning we were the last line of defence against backlog anxiety.

Methodology For Research:

  1. Students were divided into different cohorts based on their engagement data to research similar behaviour groups:

    1. Poorly Engaged - students who watched <20% of the streamed lecture content on PW

    2. Medium Engaged - students who watched between 20% and 50% of the streamed lecture content on PW

    3. Highly Engaged - students who either watched >50% of the streamed lecture content on PW or practised >70% of questions from uploaded DPPs

  2. The research was done using surveys triggered via WhatsApp & in-app campaigns

  3. The report was prepared based on responses from ~18k students split across the various cohorts

  4. The findings of the research were validated through customer calls to ~60 students across cohorts
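The cohort split described in the methodology maps directly to a small classifier. A sketch using the thresholds above; the function and parameter names are illustrative:

```python
def engagement_cohort(watch_pct: float, dpp_practice_pct: float = 0.0) -> str:
    # Highly Engaged: >50% of streamed lectures watched OR >70% of DPP questions practised
    if watch_pct > 50 or dpp_practice_pct > 70:
        return "Highly Engaged"
    # Medium Engaged: between 20% and 50% of streamed lectures watched
    if watch_pct >= 20:
        return "Medium Engaged"
    # Poorly Engaged: <20% of streamed lectures watched
    return "Poorly Engaged"
```

Note that the DPP criterion can promote a low-watch student straight to Highly Engaged, which is why it is checked first.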

Key Insights

| Problem/Insight | Insight Framed | Basis for our thought process |
| --- | --- | --- |
| 25% flagged backlog as #1 pain | Backlog & time‑management anxiety | Need a dynamic roadmap that prioritises practice, not just adds more |
| Even high‑engaging students couldn’t answer “How ready am I?” | Progress blindness | Surface a single proficiency metric + visual marker |
| 20% complained DPP difficulty mis‑matched exam tiers | Quality over quantity | Curate question sets tuned to milestones |
| 15% asked for quicker feedback loops | Delayed doubt resolution | Real‑time marker & level‑up popups to reward correct attempts instantly |