{"id":2807,"date":"2015-06-04t17:29:57","date_gmt":"2015-06-04t17:29:57","guid":{"rendered":"\/\/www.catharsisit.com\/?p=2807"},"modified":"2015-06-04t17:29:57","modified_gmt":"2015-06-04t17:29:57","slug":"optimized-student-happiness-using-ab-testing","status":"publish","type":"post","link":"\/\/www.catharsisit.com\/blog\/optimized-student-happiness-using-ab-testing\/","title":{"rendered":"how we optimized for student happiness using a\/b testing"},"content":{"rendered":"
Magoosh is all about making sure our students are well educated and happy. But we're also a data-driven business that uses metrics to make decisions: vague notions of happiness are nice, but we want numbers!

So this is the story of how we improved student happiness by A/B testing changes to our product, with the goal not of optimizing clicks, conversions, or revenue, but of maximizing student happiness. To start, though, I'll introduce the metric at hand: Net Promoter Score.

<h2><b>NPS: Our Reliable Referral Indicator</b></h2>

Net Promoter Score is a metric that tells you, on the whole, how willing your customers are to promote your product. Customers are asked, on a scale of 0-10, how likely they would be to recommend your product; 9s and 10s are considered "promoters," 7s and 8s are "passives," and anything 6 or below is a "detractor." Your Net Promoter Score is calculated by subtracting the number of detractors from the number of promoters and dividing by the total number of respondents. As a result, NPS is a percentage somewhere in the range of -100% (all detractors) to 100% (all promoters). Not to brag, but our NPS is high. Really high. Apple high.

At Magoosh, NPS is one of the most important metrics we track. It helps us determine not only whether students like our customer service and user interface, but also how well our products prepare students for their exams. Most importantly, it has been a reliable leading indicator of growth in word-of-mouth referrals, our largest marketing channel. When NPS is high, students talk about Magoosh and more people buy it!

Historically, we've asked students the NPS question <i>after</i> they've taken their exams (and, importantly, seen their final scores). We do this because our products prepare students for tests, and, really, the proof is in the pudding.
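To make the arithmetic above concrete, here's a minimal sketch of the NPS calculation (the function name and the sample scores are just for illustration, not code we actually run):

```python
def nps(scores):
    """Compute Net Promoter Score from 0-10 survey responses.

    9-10 = promoter, 7-8 = passive, 0-6 = detractor.
    Returns a percentage between -100 (all detractors)
    and 100 (all promoters).
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# 6 promoters, 2 passives, 2 detractors out of 10 responses:
print(nps([10, 10, 9, 9, 9, 10, 8, 7, 5, 6]))  # -> 40.0
```

Note that the passives drop out of the numerator but still count in the denominator, which is why a pile of lukewarm 7s and 8s drags your score toward zero.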
You can't fully decide whether you're willing to recommend Magoosh for GRE prep until you've taken the real GRE. The downside is that it can take a while for us to see NPS change in response to product changes. Since we're waiting until after students are done studying to survey them, it can take months between when a student sees a new feature and when she rates our product.

<h2><b>Our NPS Issue: Mismatched Expectations</b></h2>

Because NPS is such an important metric to our company, we take changes to it very seriously. Earlier this year we saw NPS for our GMAT product dip fairly significantly. Looking into why, we discovered that several passive and detractor students were complaining that they were getting lower scores on the real GMAT than they did on their Magoosh practice tests.

Our algorithm was telling students to expect one score, but, for some, their official reports were coming back lower (obviously a frustrating experience). These students were still improving their scores significantly, but once you've got a 750 in your mind, a 700 seems disappointing! We determined that we needed to fix our score prediction algorithm to be more accurate, but we were left with a major concern: would an improved algorithm that displayed a lower predicted score be demoralizing for students? Which was worse for customer satisfaction: a lower predicted score while studying, or a disappointing final score after the exam?

<h2><b>The Challenge: Could We Optimize Quickly for NPS?</b></h2>

Normally when we have questions about what works best for conversion or marketing, we run a quick A/B test. But NPS was different: we'd never A/B tested for NPS optimization before, and our NPS survey only went to students after their exams. It would be months before students who saw the changed algorithm took their exams and we got back NPS data.
Making a significant change without knowing how it would affect our word-of-mouth marketing was a big risk.

<h2><b>Our Solution: Bring NPS Inside Our Product</b></h2>

We determined that in order to A/B test the algorithm change, we needed a method for collecting NPS data while students were still studying, not just after their exams. We began using a third-party tool called Wootric, which allows us to ask the NPS question inside our product and analyze the data in real time. We then deployed the changed algorithm to half of our GMAT students, which let us match each "likely-to-refer" rating to a student in the treatment or control group. Suddenly NPS had a new use case for us: as a powerful, agile product tool.
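Once each in-product rating arrives tagged with its experiment group, scoring the two arms side by side is straightforward. Here's a minimal sketch of that comparison; the pair-based input format and the names are hypothetical for illustration, not Wootric's actual API:

```python
from collections import defaultdict

def nps(scores):
    """Net Promoter Score of 0-10 responses: %promoters - %detractors."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

def nps_by_variant(responses):
    """Group (variant, score) survey responses by A/B variant
    and compute NPS for each group."""
    groups = defaultdict(list)
    for variant, score in responses:
        groups[variant].append(score)
    return {variant: nps(scores) for variant, scores in groups.items()}

responses = [
    ("control", 9), ("control", 6), ("control", 10), ("control", 8),
    ("treatment", 10), ("treatment", 9), ("treatment", 7), ("treatment", 9),
]
print(nps_by_variant(responses))  # -> {'control': 25.0, 'treatment': 75.0}
```

With ratings flowing in while students are still studying, a comparison like this can be rerun daily instead of waiting months for post-exam surveys (though with small samples you'd still want to check that any gap is statistically significant before shipping).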