Low-Star Ratings & Reviews

UDEMY, Q4 2017 to Present

[Image: course_rate.gif]

Udemy is a global marketplace for learning and teaching online, where students master new skills and achieve their goals by learning from an extensive library of over 65,000 courses taught by expert instructors.

 
 

At a Glance

The Team: Product designers (x2), product managers (x2), engineers (x3-ish)
My Role: Product designer
Project Date: October 2017 - January 2018 (currently being A/B tested on the platform)
Responsibilities: Provide context in brainstorming sessions as the domain expert from support; design user flows; prototype collaboratively.
Design Tools Used: Pen and paper, whiteboards, post-its, Sketch
Design Methods Used: Job stories, user flows, user interviews, click data

 
 

Context of the Project

Students can leave a star rating for a course they have purchased and can opt to add a written review. At this time, if a student rates a course with 1 star, they are offered the option to write a review, and that is the end of the flow. The PM had conducted research showing that students who leave a low star rating have a lower lifetime value (LTV) than those who leave a high star rating.

The problem we hoped to tackle: How might we help students who have had a bad experience with a course, and reach them at the right time?

Having worked in both student support and the instructor community, and having run a proactive support program a couple of years earlier to reach out to students who left bad reviews, I was recruited for this project. The lead designer was my mentor for my first in-house project at Udemy.

 

 

Project Process

Define, Part 1

The Problem

Prior to my involvement, the PM had written a one-pager summarising the research they had done to define the problem. Using data on student trends in refunds and low-star ratings, they had drafted a job story to capture the pain points and expectations of disappointed students.

 

 

Empathise

BRAINSTORMING

Bullet points, mini wireframes, and quick sketches gave us a sense of how we would tackle the problem.

To explore potential solutions to the job story, we held a brainstorming session framed around how we would want the problem handled if we were in the student’s shoes. We set a 10-minute timer, and the three of us silently wrote out how we might address the experience after a low-star rating. Afterwards, we wrote our answers on the whiteboard and looked for patterns.

We found that our flows were more or less the same: as soon as the student selects a star rating from 1-3, they should be offered a refund immediately rather than having to figure out how to write in to support.

Synthesising Knowledge from Old Projects

On support, I had worked on a project affectionately referred to as PBR tickets: proactive bad reviews. When a student wrote a bad review, it automatically created a ticket that was sent straight to Zendesk, and our team could reach out to the student to offer assistance. I wrote my answers based on the responses we had crafted for these tickets: asking whether the student would like a refund, along with a shortlist of three courses similar to the one they rated poorly. Because these were proactive outreaches, students were often pleasantly surprised by our email and would accept the refund and repurchase, sometimes buying all three suggestions.
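For context, here is a minimal sketch of what that kind of review-to-ticket hook might look like, assuming a review event with rating, course, and student fields. The field names, threshold, and Zendesk subdomain are illustrative, not Udemy's actual implementation:

```python
# Hypothetical sketch: turn a low-star review event into a Zendesk ticket
# so support can reach out proactively (the "PBR ticket" idea).
import os
import requests

ZENDESK_SUBDOMAIN = "example"  # illustrative subdomain, not the real one
ZENDESK_AUTH = (f"{os.environ['ZENDESK_EMAIL']}/token", os.environ["ZENDESK_API_TOKEN"])

LOW_RATING_THRESHOLD = 3  # treat 1-3 star ratings as "bad reviews"


def on_review_submitted(review: dict) -> None:
    """Create a proactive-outreach ticket for a low-star review (assumed event shape)."""
    if review["rating"] > LOW_RATING_THRESHOLD:
        return  # nothing to do for 4-5 star reviews

    ticket = {
        "ticket": {
            "subject": f"PBR: {review['rating']}-star review on '{review['course_title']}'",
            "comment": {"body": review.get("review_text", "(no written review)")},
            "tags": ["pbr", "proactive_bad_review"],
            "requester": {"email": review["student_email"]},
        }
    }
    resp = requests.post(
        f"https://{ZENDESK_SUBDOMAIN}.zendesk.com/api/v2/tickets.json",
        json=ticket,
        auth=ZENDESK_AUTH,
        timeout=10,
    )
    resp.raise_for_status()
```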

 

 

Ideate

We agreed to work on designing a single flow (sketched in code after the list below):

  1. Student leaves 1-3 star review

  2. Module offers a refund

  3. Show student recommendations that might work for them instead
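
A rough, hedged sketch of that flow in code, where the function names, response shape, and recommendation source are all hypothetical rather than what was actually built:

```python
# Hypothetical sketch of the single flow we agreed on: a low rating triggers
# a refund offer plus alternative course recommendations.
from dataclasses import dataclass, field


@dataclass
class RatingResponse:
    offer_refund: bool = False
    recommended_courses: list[str] = field(default_factory=list)


def handle_star_rating(rating: int, course_id: str) -> RatingResponse:
    """Decide what the review module shows after a student picks a star rating."""
    if rating >= 4:
        return RatingResponse()  # happy path: just the usual review prompt

    # 1-3 stars: offer a refund up front instead of routing through support,
    # and surface a shortlist of similar courses (recommendation source assumed).
    return RatingResponse(
        offer_refund=True,
        recommended_courses=get_similar_courses(course_id, limit=3),
    )


def get_similar_courses(course_id: str, limit: int = 3) -> list[str]:
    """Placeholder for whatever recommendation service supplies alternatives."""
    return []
```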

The product designers were tasked with creating the flow for each of the instances where the star ratings could be presented:

  • Within the course

  • On the course dashboard

  • From the My Courses page


LOW-FIDELITY WIREFRAMES

I worked with the lead designer to create a few wireframes based on what I knew from my previous project on proactive support for students who gave bad reviews. We each sketched our thoughts at separate intervals and came together to compare and decide what to share in an upcoming design review.


DESIGN REVIEW

I put together a low-fidelity wireframe in Sketch to show what one of the potential flows could look like. Since this was my first project, I wasn’t sure how I could create a meaningful wireframe when I had very little access to the research and didn’t yet know what students were actually saying.

At the end of the day, the design team agreed that the core issue was that we did not yet know what problem we were solving for. The copy was a major sticking point, along with the timing of when this flow would appear in the student experience. The suggestion was to do more research, and I wholeheartedly agreed.

 

 

Define, Part 2

CHANGING OF THE PM

Seamlessly swapping seats.

By early November, the first PM had moved on to another set of problems, and a recently hired senior PM took over the low-star project. The lead designer explained the decisions we had made so far, and I talked through the context of the problem based on my experience in support.

 

The new PM was very interested in doing more user research, so I was tasked with writing a set of questions for user interviews with students who had recently (within the last week or so) left a low-star rating and, ideally, a written review for a course. This data was readily available to us since the PBR filter was still turned on within support’s Zendesk integration.

 

WORKING WITH ENGINEERING

I was invited to sit in with the engineering team responsible for implementing the changes. Beforehand, I briefed the PM and the lead designer on what already existed in the codebase from the earlier PBR-ticket experiment, as well as the internal tools we had available for assisting students. Checking in with the engineering team ensured we were all on the same page and surfaced edge cases we might not have considered.

 

CREATING THE UX RESEARCH QUESTIONS

I wrote a first draft of questions, which I ran through with the lead researcher on our UX research team. My questions were generally sound, but they had to be reorganised to ensure participants stayed on track with each question asked. [I wrote a Medium article detailing how I refined these questions] after their guidance.

After two more drafts, I worked with another UX researcher to finalise the list and order of questions. I provided the PM with details on how to recruit our participants. The team began looking for participants and scheduled user interviews over the next couple of weeks. I attended one of these sessions, took copious notes, and discussed the findings with the UX researcher and the PM.

 

RESEARCH FINDINGS

 "All I wanted to do was learn." -- Not a quote from Community.

"All I wanted to do was learn." -- Not a quote from Community.

Our UX researcher provided us with a synthesis of the patterns gleaned from the six participants interviewed. We wanted to understand the most common scenarios for leaving a low rating, how students would want Udemy to respond (and what they expected in the first place), and who they “blamed” for the bad course experience.

 

We found that the majority of students gave low ratings when a course did not match their expectations. They generally did not blame Udemy (responses ranged from “it just happens” to blaming themselves, and one or two blamed the instructor), but the interviewees ultimately did not achieve their goal: learning what they had set out to learn on Udemy.

 

 

Gathering Data

RESEARCH RECOMMENDATIONS

The UX research team provided us with a set of recommendations and a proposed flow.

  • If we kept the review flow as is, we might consider changing the question, or potentially the weight given to these reviews, since the student has already made it clear that the course was not in line with their expectations.

  • If we moved forward with the exchange flow, students would need to see relevant courses immediately to help them attain their goal of learning what they came to Udemy to learn.

We knew that we wanted to help students get to the course they wanted, but we weren’t sure what we should offer them to get there. Was it a refund or an exchange? Did they simply want to contact the instructor? These options would have to be researched further to get to the next iteration.

 

RUNNING THE EXPERIMENT

Development of the experiment began while I was out on vacation for a couple of weeks. The PM wanted to show the proposed flow, with several options, to a small percentage of students who gave a low rating, so that we could see which option was selected most often.

After a few tweaks with the engineering team and a check-in with the support team (since the experiment tied into support, and the tickets still had to be answered manually), the experiment was launched. The data we are looking at is which option students click when the review box comes up.
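
For illustration, here is a hedged sketch of how that kind of partial rollout and click logging is often wired up; the bucketing scheme, event names, and option labels are assumptions rather than the team’s actual setup:

```python
# Hypothetical sketch: deterministic percentage rollout plus click logging
# for the low-star review experiment.
import hashlib

EXPERIMENT_NAME = "low_star_review_flow"
ROLLOUT_PERCENT = 5  # started with a small percentage; later expanded to 100


def in_experiment(user_id: int, percent: int = ROLLOUT_PERCENT) -> bool:
    """Bucket a user deterministically so they always see the same experience."""
    digest = hashlib.sha256(f"{EXPERIMENT_NAME}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent


def log_review_box_click(user_id: int, option: str) -> None:
    """Record which option (e.g. 'refund', 'exchange', 'contact_instructor')
    the student clicked when the review box appeared."""
    print({"event": "review_box_click", "user": user_id, "option": option})
```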

As of January 2018, after confirming that the support volume was manageable, the experiment has been expanded to all students leaving a low rating on a course. It is currently ongoing.

 

 

Learnings

This was the first project where I was part of the product design team rather than just a domain expert. While we thought it would be a simple process, the supposedly small project snowballed into something much more complex.

I learned that design doesn’t happen in neat, systematised chunks. Each meeting had a purpose, and things changed meaningfully as we pulled our research together.

The true hero of this story is the reminder to step back and ask ourselves: what is the problem we’re really trying to solve? That question gives us permission to make space for research and reflection before charging ahead at full speed without a direction.