Research Methods: How to design an effective survey

Frances Brown

Photo by Vadim Bogulov on Unsplash

Surveys can be a great way to gather data from a large audience quickly. They can also be a fast way to collect dud numbers that tell you nothing. The outcome you get depends on your survey design. 

There are plenty of guides on how to structure individual questions, but the success of a survey depends on how it works as a whole. Each question needs to make sense in relation to the others, and the entire survey needs to deliver enough data to provide meaningful, reliable evidence without exhausting or overloading respondents.

This guide sets out the basic principles to bear in mind when designing a survey. If you’d like more detailed support, or you’d like Nightingale to design a survey for you, get in touch.

Figure out what you really need to know

In order to be effective, surveys need to gather reliable data that can be interpreted to provide relevant insight. To gather reliable data, you must first decide exactly what it is you’re trying to find out. If you’re unclear about what data you need, your questions are likely to be vague or overly complex, making the responses difficult to analyse and interpret.

You might want your survey to do a lot of heavy lifting, but surveys by their nature are always limited - because they’re text-based, they need to be short and easy to understand.

The goals of the survey must be focused and clear

If you can't identify how a particular question is contributing to those goals, ask yourself whether it really needs to be included. 

One way to define the goals of your survey is to start at the end - think about how you’ll report the results and what points or conclusions you want the data to inform or support. For example, you might want to demonstrate that, in a particular area, people are struggling to get access to primary healthcare. How will you quantify that? What questions would you need to ask?

You will need to:

  • Ensure your survey is targeted specifically at people in the area you’re interested in
  • Define ‘primary healthcare’ - does that just mean GPs or does it include other providers such as physiotherapists and nurse practitioners?
  • Define what ‘struggling’ means - does it mean they had to attempt to get an appointment more than once?
  • Decide on a timeframe - when do you think these issues started? In the last six months? Being specific will make the data more relevant and meaningful.
  • Determine whether you want to make any comparisons - do you want to compare your reference timeframe to another timeframe, to show a change? Do you want to compare the experiences of patients with chronic illnesses with those of patients who had to attend primary care for a single ailment? 

As you can see, building a relatively simple argument can involve quite a long process of decision-making around what sorts of data are relevant. However, going through this process is vital to ensure that your survey is thorough and that you can justify the results. 

Don’t gather data that you don’t need

Related to the task of figuring out what data you need is identifying the data that you don’t need. Gathering data that doesn’t add to your understanding of a particular situation, or that is impossible to interpret or analyse, makes your survey unnecessarily long and frustrating for participants. For example, it isn’t always necessary to know the age of your respondents, or their gender - if those details don’t add to the understanding of the key results, then don’t bother asking for them.

Going back to the primary care example above, you might consider asking respondents why they needed to attend primary care. However, the range of responses is likely to be so large and variable that the answers will be impossible to analyse sensibly, and will therefore add nothing to the final conclusions. You might want to ask whether the request was urgent or not, bearing in mind that you might need to define ‘urgent’ clearly in order to get consistent responses. And if your focus is on access, the urgency or nature of the request might be entirely irrelevant.

The smallest change in language can have a big impact

In an interview, you can have an open conversation with your participant. They can ask for clarification if they don’t understand your question and they can provide additional detail to ensure their answer makes sense. In a survey, respondents either choose an option from a list or provide a short, free-form answer. Either way, their response will be based on their own interpretation of the question, which may not match your intended meaning.

Take this question for example: 

How often do you travel to work on public transport? 

It seems like a fairly simple question that most people could answer. But without context, a respondent might not be sure what ‘public transport’ means. Does it include taxis? Does the free bus that goes between different work sites count? What about driving to a park and ride, then doing the rest of the journey by tram? 

The ‘how often’ part of the question can also present problems around how to define quantities in a sensible way. One commonly used list of options is:

  • Never
  • Sometimes
  • Often 
  • Always

The advantage of a list like this is that the terms are simple and easy to understand. The disadvantage is that the terms are blunt and lack nuance. In particular, Never and Always are absolute terms that people may be reluctant to use - a respondent may choose Often even if they use public transport 99% of the time, because technically Always doesn’t apply. Analysing this data can also be difficult - what is the difference between Sometimes and Often? Does it tell you anything useful about respondents’ behaviour?

You could ask a much more specific question:

In the past week, how many times did you travel to work on public transport? (i.e. any transport that is available to the public and runs on a fixed schedule)

0  1  2  3  4  5  6  7  8  9  10 11 12 13 14+

Because this question refers to a particular timeframe and gives specific answers, it will give you much more granular and reliable data, with the caveat that the data will provide a highly specific ‘snapshot’ of a particular week, rather than an understanding of more general or typical behaviour. Respondents who were on holiday or were ill in the past week, for example, might choose 0, even if they would normally use public transport much more often. 

Given the pros and cons of different question types, the best way to narrow down your options is to go back to the decisions you made about what you want to find out. Think again about the purpose of the research and how you will use the results. Do you really need to know that 150 people used public transport 12 times in the past week? Is that a useful piece of data?

If you’re asking this question to show the differences in public transport use between particular areas, it’s probably best to use relatively crude measures that allow you to clearly identify those differences. A ‘typical month’ is a good measure for this type of question as it encourages respondents to think more generally about their behaviour across a relatively long span of time and to ignore unusual situations like holidays and illness. 

In a typical month, how many times do you travel to work on public transport? (i.e. any transport that is available to the public and runs on a fixed schedule)

  • 0
  • 1-4
  • 5-20
  • 21+

When choosing intervals, it’s important to decide what those intervals will mean in your final analysis. In this example, the 0 option is deliberate - because it’s an absolute measure, respondents generally won’t choose it unless they genuinely never use public transport. Assuming there are about 20 working days in a typical month, 1-4 covers people who use public transport around once a week or less - this interval represents occasional users. 5-20 is a purposely large interval that covers people who use public transport quite regularly. The 21+ option is for respondents who use public transport an unusually large number of times - to travel between jobs, for example. It’s important to include this option to cover these outliers, but the number of respondents choosing it is likely to be small enough to disregard in the final analysis.

These data, along with data on the location of respondents, will give a clear picture of how many people in various areas never use public transport for their commute, use it occasionally, use it regularly or use it very often. If there are differences between areas - a particularly high number in one area who never use public transport for example - those should be easy to spot. More detailed data would likely only confuse the situation and make the analysis more difficult.
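If you’re analysing the results yourself, data at this level of detail is easy to tabulate in a spreadsheet or with a few lines of code. As a minimal sketch, assuming the responses are exported as a CSV (the file name and column names below are hypothetical and will depend on your survey tool), here is one way to summarise the interval responses by area in Python with pandas:

import pandas as pd

# Hypothetical export: one row per respondent, with an 'area' column
# and a 'monthly_trips' column holding the chosen interval.
responses = pd.read_csv("survey_responses.csv")

# Keep the intervals in their natural order rather than alphabetical order.
order = ["0", "1-4", "5-20", "21+"]
responses["monthly_trips"] = pd.Categorical(
    responses["monthly_trips"], categories=order, ordered=True
)

# Percentage of respondents in each usage band, per area.
summary = pd.crosstab(
    responses["area"], responses["monthly_trips"], normalize="index"
) * 100
print(summary.round(1))

A simple table like this makes the differences described above easy to scan - for example, one area with a much larger share of respondents in the 0 band than its neighbours.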

Respect your respondents

When there are specific things you want to know, it can be tempting to just go ahead and ask directly. For very simple questions, asking directly is appropriate, but for more complex situations there is often a gap between what you want to know and what your respondents can reliably tell you. You will likely need to bridge that gap by putting yourself in your respondent’s shoes and designing questions that they can answer easily and confidently.  

For example, you might want to know how customers would respond to a restaurant staffed entirely by robots. Very few respondents are likely to have any real-life experience of this situation, so any response they give will be hypothetical and based on their imagination. Therefore, you can only ask about their emotional response to the idea of a robot-staffed restaurant. The responses might be interesting, but they won’t tell you anything reliable or concrete. 

Instead of focusing on what you want to know, focus on the information your respondent can give you. They might not know how they would feel about robot-staffed restaurants but they will probably have some experience of various types of automated service. Asking well-designed questions about those experiences will give you the data you need to extract insights that are relevant to the robot restaurant concept.

As a general rule, it's best to avoid asking hypothetical questions, or ones that require a respondent to imagine a particular situation - such as 'If X were available would you use it?' Every respondent will imagine something slightly different, making their responses so unreliable as to be almost meaningless. 

When you choose a question, ask yourself:

  • Is this something respondents are likely to know?
  • Is this something respondents are likely to care about?
  • Could this question be interpreted in different ways, or cause confusion?
  • Could this offend or upset a respondent in any way?

If you’re not sure about any of these factors, write the question down and ask someone to read and interpret it. Their response should tell you whether the question will work or not. 

Always test your survey

Before you send your survey out to your target audience, test it. The first round of testing should involve an intensive review of the questions. Ask at least one person who has never seen the survey before to read your introductory text and answer each question verbally, without seeing any multiple-choice options. Check whether they understand what the survey is about. Do they interpret the questions as you expected, and do their answers match the options you’ve given? If they don’t, redesign the questions and test them again. 

The second round of testing involves putting the survey out to a small sample of your audience - at least ten people. The aim is to determine whether the data generated makes sense and can be interpreted in the way you expect. At this point you might find that you are missing a key question, or that you are getting unexpected responses - you might need to revisit the questions or change the structure of the survey. 

Provide a clear introduction and gather consent

Always introduce your survey clearly - tell the respondent who is conducting it, who it’s aimed at and how the data will be used. Even if you’re not collecting sensitive data, it’s always wise to ask respondents for consent to use their responses - it is good practice and it shows the respondent that you respect the value of their data. 

Keep it simple and focused

Designing a survey takes time and practice - if you’re new to designing them you might have to make a few mistakes before you learn what works and what doesn’t. The key is to focus on generating data that matches what you want to know as closely as possible, without adding any unnecessary noise or complexity.

Discover how we used these principles to design a survey for Public Practice