
The top 3 mistakes Facebook made when writing their survey (and 3 tips on how to avoid them)

By David Hagmann, Bella Rozenkrants and Kristen Berman

In a famous study on organ donation, researchers found that simply flipping the default, from checking a box to become an organ donor to checking a box to opt out of being one, vastly increased how many people became donors.

This may not be surprising. Defaults are powerful. But what if we asked participants to take a survey about why they chose to become organ donors as soon as they came out of the DMV?

It turns out people are really good at rationalizing their decisions. We are excellent storytellers. It’s likely that most of these people coming out of the DMV would make up elaborate stories rationalizing their choice. “I believe in altruism and karma” or “I don’t trust doctors at all.”  It’s not likely that anyone would have said, “I was too lazy to change the default.”

Does this story imply that we can’t trust people when we ask them why they feel a certain way or when we ask them to explain an action that they took?

Not quite. In fact, many companies rely on asking their customers what they think about the product. They care about what features people like and what features they want to see released next.  And, given that product managers are not mind readers, surveys have an important role in helping companies understand their customers.

In this spirit, Facebook launched a site-wide survey this week. But as the organ donation example shows, just asking people what they want is a dangerous game. Designing a good survey is critical to collecting feedback we can build a strategy on.

The Facebook survey is an example of how NOT to ask customers questions.

“For a simple customer satisfaction survey, there are an awful lot of brain teasers in here.” – Jeff Bercovici, Forbes, on the Facebook survey


Here are three ways FB got the survey wrong:

1. They asked broad questions


Broad questions are great for PowerPoint slides but terrible for actionable insights and decisions. If Facebook gets data back saying that users between ages 18 and 24 trust them “somewhat”, what can they really do with this information?

This question will highlight differences in trustworthiness between different demographics, but we still won’t know if the ratings are abnormal for these groups (and thus actionable). The survey answer won’t tell the product manager in charge of trust what she should do next.

A better approach would be to ask about trust generally, but then also measure trust relative to someone else (e.g. Twitter, Google+) to find out if the metric is meaningful. Or, even better, they could call out specific characteristics of trustworthiness to isolate the problem. This could include asking users how likely it is that their personal information is being used inappropriately to target advertisements, or how comfortable they are with how their personal information is shared with others on Facebook.


2. The scales used in the survey are hard to normalize

Imagine you are asked if you like taking long trips. What is long? To someone in New York, 30 minutes on the subway may not be that long a trip; to someone in San Francisco, it may feel much longer.


Let’s look at the above question. What does it mean to be “in control” of one’s personal information? What people interpret and define as ‘in control’ may vary wildly across respondents. Some may think it means you could easily export your data on a moment’s notice. Others may think it refers to how much Facebook shares your data with advertisers.

A better question would be something like, “If you wanted to delete a post about you that you don’t like, how confident are you that you could find the place in Facebook to do this?”  This question would focus users on the specific issue – the ease of privacy controls. It would provide a measurable and actionable metric (confidence). And, it would help Facebook isolate the variables that concern or delight users and contribute to general satisfaction and trust.


3. Overall Clarity

When we ask general questions, the respondent is forced to think about how to interpret the question.

In the below example, the idea of satisfaction is not immediately clear: “Am I satisfied today or in general?” “Compared to Google+?” “With the general features or with Facebook’s strategy?” As we know from social science research, we usually tend to choose the path of least resistance.

In the case of a survey, people may just ‘check out’ if a question is complex. They may click on an answer just to get through the survey quickly, avoiding the deep reflection that is required.


That doesn’t mean there’s no place for surveys.  There is. Academics use them a lot, but they have to be used carefully and require particular expertise.


Here are 3 survey tips to get data that represents how your users actually behave:

  • Don’t get users to project an opinion about something general; be specific.
    • Try asking about the ‘last time’ a user did something. For example, “Think about the last time you had a positive experience on FB. What happened? How satisfied were you with this experience?” This allows you to understand an experience at the behavior level and avoid having users project their satisfaction levels or what they think a positive experience would or could be.
    • If you do need to create a fake situation (e.g., “Imagine that your FB account just got hacked and all of your contacts were sent an inappropriate message”), create a visceral way for the respondent to understand this fake situation. Videos or comics are nice ways to do this. Very detailed comics or videos ensure the participant interprets the question as you intended.
  • Use relative scales 
    • How long would you walk? How much would you pay? Then, ignore the absolute values for these answers. Report on the relative differences between each answer. This helps you understand the general ranking of different ideas without getting caught up on the exact answers.
    • For example, absolute reporting, “The majority of users would pay $20 for this”  doesn’t reveal much about general willingness to pay.  A result like, “30% more users would pay $20 than $50” helps us understand the tradeoff we’d have to make in order to charge $20 vs $50.
  • Treat surveys like an experiment
    • Send two surveys. This is called a “between-subjects” design. Treat one survey as a control and the second as a test condition. Then, change one variable in the second survey.
      • For instance, imagine if Facebook sent two surveys. One survey would show us a video on their privacy policy and then ask us to evaluate their trustworthiness. The control survey would just ask us to evaluate their trustworthiness. This experimental survey would isolate how much the user’s understanding of the FB privacy policy affects their perceptions of trust.
      • “Experimental surveys” can be a more effective strategy than just sending a solo survey, because we are measuring the relative differences between specific answers vs. an absolute score.
    • Randomize the questions to avoid the answer to one question influencing the answer to another question.
      • In one study, researchers asked participants two questions: how happy they are in general, and how many dates they’d gone on recently. If the happiness question was posed first, there was no relationship between happiness and the number of dates. But if the dating question came first, people who had been on fewer dates reported being less happy.
  • And of course, after the survey, run an experiment.
    • While surveys can build great intuition quickly, they are the beginning of research. If there is a low metric (like satisfaction or trust) and the survey reveals an interesting hypothesis on why (privacy controls), we should try running in-product experiments to see what will increase this metric.
      • Surveys are great at helping to narrow down the plethora of hypotheses that we have at the beginning stages of projects. However, we want to run an actual experiment to get insights that we can base a business strategy on.
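Two of the mechanics above, randomizing question order per respondent and splitting respondents into control and treatment groups, are easy to get wrong in ad hoc survey tooling. Here is a minimal sketch in Python of how they might be implemented; the question wording and the `build_survey` / `assign_condition` helpers are our own illustrative inventions, not anything from Facebook’s actual survey:

```python
import random

QUESTIONS = [
    "How happy are you with your life in general?",
    "How many dates have you been on in the past month?",
]

def build_survey(respondent_id, questions):
    """Randomize question order per respondent, so one question's answer
    can't systematically prime the next (as in the happiness/dates study)."""
    rng = random.Random(respondent_id)  # seeded: same respondent, same order
    ordered = list(questions)
    rng.shuffle(ordered)
    return ordered

def assign_condition(respondent_id):
    """Between-subjects split: half of respondents get the treatment
    (e.g. see a privacy-policy video) before the trust question."""
    return "treatment" if respondent_id % 2 == 0 else "control"

# Every respondent lands in exactly one condition, so we compare the two
# groups' answers to each other rather than reading one absolute score.
conditions = [assign_condition(i) for i in range(1000)]
print(conditions.count("treatment"), conditions.count("control"))  # 500 500
```

Seeding the shuffle on the respondent ID keeps the ordering stable if the same person reloads the survey, while still varying it across respondents; in a real deployment you would also log each respondent’s condition alongside their answers so the groups can be compared at analysis time.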


So what will Facebook do with their survey data? We’re not sure. It’s a bit too general to have actionable insights that a product manager or designer could take and run with. But, given how good people are at rationalizing things, we’re confident that the researchers at Facebook will find some takeaways. : )
