Common Survey Errors: How to Recognize and Avoid Them

Even if you've never read Alexander Pope's “An Essay on Criticism” from 1711, you're probably familiar with its most famous passage: “to err is human; to forgive, divine.”
But when it comes to data, errors are not so easily forgiven. Errors corrupt the accuracy and certainty of data sets, and large, significant errors can render the results entirely useless.
So when it comes to survey errors, the more you have, the less confidence you can place in the results. That uncertainty shows up as wider confidence intervals, and when confidence intervals grow too wide, the data becomes uninformative, or even entirely misleading.
In order to ensure that you don't end up wasting your time on a survey that produces worthless results, it's important to recognize the most common survey errors and learn how to avoid them.
All survey errors are either sampling errors or non-sampling errors.
A sampling error is something that naturally occurs when you survey a smaller sample of a larger population. For example, the Gallup U.S. Poll does not actually survey all 209,128,094 American adults each year. In 2017, they only interviewed about 1,500 U.S. adults per week, or around 78,000 people total that year.
Gallup interviewed a representative sample of adults across all 50 states, but not all adults, and the extent to which the sample inevitably differs from the entire population is a sampling error, or margin of error.
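For a simple random sample, the margin of error can be approximated from the sample size alone. Here is a quick sketch in Python, using the standard worst-case formula with an assumed proportion of 0.5 (Gallup's actual multi-stage design is more involved than this, so treat the numbers as illustrative):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a simple random sample.

    n: sample size
    p: assumed proportion (0.5 is the worst case)
    z: z-score for the confidence level (1.96 is roughly 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A weekly Gallup-sized sample of about 1,500 adults:
print(round(margin_of_error(1500) * 100, 1))  # about 2.5 percentage points
```

Note how slowly the error shrinks: quadrupling the sample to 6,000 only halves the margin, which is one reason pollsters rarely chase enormous samples.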
A non-sampling error is any error that results from the way the survey is designed or executed. For example, if the Gallup poll were to ask a leading question, that could affect how a respondent answers. Even the order in which questions are asked could unintentionally skew the responses.
Another common non-sampling error is response bias, which occurs when you have a balanced sample, but a disproportionate number of responses from one group. So, for example—Gallup calls both landlines and cell phones, but if people with landlines answer calls more often than people with cell phones, that creates a response bias. Since landline users tend to be older than cell phone users, that could lead to a survey in which older adults are overrepresented, while younger adults are underrepresented.
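One common way to correct for this kind of response bias is post-stratification weighting: each group's responses are weighted by the ratio of its population share to its sample share. A minimal sketch with made-up numbers (the function name and the shares are illustrative, not Gallup's actual figures):

```python
def poststratification_weights(population_share, sample_share):
    """Weight each group by population share / sample share."""
    return {group: population_share[group] / sample_share[group]
            for group in population_share}

# Hypothetical example: older adults are 40% of the population
# but 60% of survey respondents.
weights = poststratification_weights(
    {"older": 0.40, "younger": 0.60},
    {"older": 0.60, "younger": 0.40},
)
# Older respondents are down-weighted (about 0.67) and younger
# respondents are up-weighted (about 1.5).
```

Weighting mitigates the bias after the fact, but it can't fully substitute for a balanced set of responses.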
As mentioned above, some degree of sampling error is often inevitable, and can be accounted for with a margin of error.
But non-sampling errors are often partly or even entirely avoidable. And these include the type of survey errors that can result from the way your survey is written and displayed.
So before you start designing your next survey, take a look at these common types of survey errors (and how to avoid them).
When writing a survey questionnaire, the language you employ should be as neutral as possible. You never want to “lead the witness,” as they say in courtrooms.
There are different types of loaded questions, ranging from emotionally charged wording to persuasive tag phrases.
To avoid these sorts of loaded questions, carefully consider the language that you use. Avoid words that feel too emotionally charged—whether positive or negative. Also avoid persuasive phrases, like “won't you?” or “didn't it?”
Also, aim for consistency in your language. If you use the word “could” in a question, keep using it throughout the survey—don't switch to “should” or “might.” Even synonymous words can hold different connotations in people's minds.
It's not just what questions you ask, but when you ask them. You don't want your question flow to accidentally influence the responses.
For example, pollsters often start by asking someone, “What is the most important issue currently facing the nation?” Then they ask, “Do you approve or disapprove of the job the president is doing?” Whatever issue they said was the most important—inflation, guns, democracy, you name it—is likely the issue they'll have in mind when rating the president's performance.
This is known as “priming,” and your question flow might be unintentionally priming respondents to give certain answers.
One way to try and correct for this bias is to randomize the question order for each respondent. Sogolytics allows you to automatically randomize or rotate the question sequence every time, meaning each survey participant will see a different order of questions.
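If you're building a survey yourself, per-respondent randomization is easy to sketch. This example assumes a simple list of question strings (the questions and function name are hypothetical; the list is copied so the master order is never mutated):

```python
import random

QUESTIONS = [
    "What is the most important issue currently facing the nation?",
    "Do you approve or disapprove of the job the president is doing?",
    "How confident are you in the economy?",
]

def questions_for_respondent(questions, seed=None):
    """Return an independently shuffled copy of the question list."""
    rng = random.Random(seed)  # seed only to make runs reproducible
    order = list(questions)    # copy, so the master list stays intact
    rng.shuffle(order)
    return order
```

In practice you would shuffle once per respondent session; rotation (cycling which question comes first) is a gentler alternative when some questions must stay adjacent.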
Make sure that your multiple-choice options are always mutually exclusive.
For example, here's how not to ask someone their age:
Q. How old are you?
a. 17-21 b. 21-25 c. 25-29 d. 30 or above
If you're 21 years old, do you pick A or B? And should a 25-year-old select B or C?
To avoid confusion, the age range categories should be written as:
a. 17-20 b. 21-24 c. 25-29 d. 30 or above
This is also true for more subjective questions. For example, a customer satisfaction survey shouldn't include options that are too similar—like both “somewhat satisfied” and “slightly satisfied.”
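Non-overlapping ranges have a useful mechanical property: every answer maps to exactly one option. A small sketch using the corrected age buckets above (an "under 17" bucket is added here so every age gets a label; the names are illustrative):

```python
import bisect

# Lower bounds of the corrected, non-overlapping age buckets.
BOUNDS = [17, 21, 25, 30]
LABELS = ["under 17", "17-20", "21-24", "25-29", "30 or above"]

def age_bucket(age):
    """Map an age to exactly one bucket, so options never overlap."""
    return LABELS[bisect.bisect_right(BOUNDS, age)]
```

With the original overlapping ranges, no such function could exist: ages 21 and 25 would each have two valid answers.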
It's always frustrating when you're asked to choose from a list of options that doesn't include the one you're looking for.
For example, imagine how a non-binary person feels when a survey asks them to select their gender, and then forces them to choose between “male” and “female.”
Ideally, your multiple-choice options should cover 90% or more of likely responses. But just in case they don't, be sure to add an “other” option at the end, which allows participants to write in their answer.
Sometimes the questions that seem simple and straightforward are actually too vague and difficult to answer.
For example, “How often do you go to your local coffee shop?”
That may sound pretty direct, but it's actually missing some qualifiers. Coffee addicts might wonder whether you mean how many times they go each month, each week, or each day!
A better way to phrase the question might be, “Thinking about the last month, how often have you visited your local coffee shop?”
Writing a survey is not like writing the Great American Novel. A questionnaire is not the place to show off your eloquent prose style and mastery of the English language.
Instead, stick to plain, everyday language that can easily be understood by all participants (aim for a sixth-grade reading level, which sites like Hemingway App can help you assess). Also, avoid using jargon or confusing acronyms. At the very least, always write out an acronym the first time you use it—for example, “How important is Customer Experience (CX) to you?”
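If you'd rather check reading level in code than paste text into a website, the Flesch-Kincaid grade formula is simple to approximate. This sketch uses a crude vowel-group syllable counter, so treat its output as a rough estimate (dedicated tools like Hemingway App are more accurate):

```python
import re

def naive_syllables(word):
    """Very rough syllable count: runs of vowels in the word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Approximate Flesch-Kincaid grade level of a passage."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(naive_syllables(w) for w in words)
    return 0.39 * n_words / sentences + 11.8 * syllables / n_words - 15.59
```

Short words and short sentences drive the grade down, which is exactly what you want in a questionnaire.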
There are some details you absolutely need to collect, but not all of the questions in your survey should be mandatory.
It's especially important to allow participants to skip sensitive questions. It's also a good idea to add an “I don't know” or “I would prefer not to say” answer option.
Survey designers are often tempted to stuff multiple queries into one question, creating what's known as a double-barreled question. That may make the survey shorter, but it also makes the data less valuable.
For example, a satisfaction survey might ask, “How would you rate the friendliness and helpfulness of our staff?”
But that should really be two separate questions: “How would you rate the friendliness of our staff?” and “How would you rate the helpfulness of our staff?”
After all, some store clerks can be very friendly without being particularly helpful, while others might be very rude but able to answer your question!
Few surveys are perfect, and it's almost impossible to entirely eliminate unintended bias. But that doesn't mean you shouldn't try!
As sharpshooters say, if you aim small, you'll miss small. Carefully consider each question, each phrase, and each word you use in your survey. Once you feel good about your survey design, test it on a handful of people, and then ask them if any of the questions were confusing, leading, or otherwise flawed. Use their feedback to perfect your survey until it's as error-free as it can be. You can try creating your survey with this free survey software for students.