This glossary of survey research terminology — which you can think of as a survey dictionary or survey wiki —  contains definitions of commonly used survey terms. Links are also provided to our articles that discuss the survey glossary terms.

The definitions given here are in the context of a survey project.  However, many of the terms are used in other fields as well.

For a more detailed discussion, please refer to our Survey Guidebook or consider attending one of our Survey Design and Survey Data Analysis Workshops where we discuss many of these concepts in detail in the context of designing and executing a successful survey project.


A  B  C  D  E  F  G  H  I  J  K  L  M  N  O  P  Q  R  S  T  U  V  W  X  Y  Z

A — Survey Glossary

Accuracy: The extent to which a survey result represents the attribute being measured in the population.  Accuracy or “Margin of Error” is usually expressed as a plus or minus percentage, e.g., “+/- 5%”, which indicates that the survey mean score likely deviates from the population mean for that attribute by less than 5%.

Articles: Survey Statistical Accuracy Defined, The Hidden Danger of Survey Bias, Statistical Confidence in a Survey: How Many is Enough?
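
To make the arithmetic concrete, here is a minimal sketch of the standard margin-of-error calculation for a proportion at 95% confidence; the sample size and proportion are hypothetical illustration values, not a recommendation.

```python
# A minimal sketch of the usual margin-of-error calculation for a
# proportion at 95% confidence. Sample size and proportion are
# hypothetical illustration values.
import math

n = 384          # number of survey responses (hypothetical)
p = 0.5          # observed proportion; 0.5 gives the widest, most conservative margin
z = 1.96         # z-score for 95% confidence

margin_of_error = z * math.sqrt(p * (1 - p) / n)
print(f"+/- {margin_of_error:.1%}")  # roughly +/- 5.0%
```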

Acquiescence Bias: One type of response bias where the respondent is predisposed to agree with statements presented to him or her.

Actionable data: Data that provide insights to guide organizational change initiatives, a key objective of a survey program. Proper questionnaire construction and question writing are needed to generate actionable data.

Articles: Generating Actionable Survey Data, What’s the Point? An Unactionable Transaction Survey

Adjective Checklist: A survey question type where the respondent is asked to select among a set of adjectives that describe something.  The question type generates categorical data.

Administration Biases: Biases introduced into the data set through the survey administration process, resulting in data that do not properly reflect the views of the target population for the research.  These biases include mode bias, selection bias, non-response (participation) bias, and response bias.

Articles: Sampling Error — And Other Reasons Polls Differ, Creating a Survey Program for a University Help Desk, What Survey Mode is Best?, Want Higher NPS & Survey Scores? Change to Telephone Survey Mode

Administrative Burden: The amount of work required to administer a survey. This work may include the effort to enter the survey in a survey tool, generate a sample, get the invitations to respondents, collect data from respondents, and transcribe the data, including open-ended comments, to prepare it for analysis.

Article: Survey Question Types: When to Use Each

Administration Mode: The communication medium or media used to invite people to participate in the survey, present questions to them, and collect responses from them.

Articles: Tips for Selecting an Online Survey Software Tool, Impact of Mobile Surveys

Administration, Survey: The process of managing the survey process for getting responses from the selected group.  It includes selecting the audience to receive the invitations, extending the invitation, collecting responses, and loading the responses into a data set.  The administration process will vary according to the administration mode.

Ambiguity: Vague, confusing, or unclear question wording that could lead respondents to have multiple interpretations of the question.  Leads to measurement error and loss of validity.  Perhaps the most common form of instrumentation bias.

Article: The importance of Good Survey Question Wording: Even Pros Make Mistakes

Analytical Burden: The amount of work required to analyze the data generated by a survey or survey question.

Article: Survey Question Types: When to Use Each

Anchor: A word or phrase that describes a position on a response scale, thus “anchoring” the respondent to that point.  Interval rating questions generally have a set of anchors that describe ranges or levels of feelings for some dimension of measurement, such as satisfaction, agreement, or likelihood.

Articles: Misleading (or Lying) With Survey Statistics, Scale Design in Surveys

Anonymity: A concern for the survey administration process if the lack of respondent anonymity could lead to non-participation in the survey or to responses that don’t reflect the respondent’s true views. Related to but different from confidentiality.

Articles: An Honest Survey Invitation?

ANOVA (ANalysis Of VAriance): A statistical technique that examines the variance structures of responses between different groups to determine if those differences are statistically significant.
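
For illustration, here is a minimal sketch of a one-way ANOVA across three respondent groups using SciPy; the ratings are hypothetical.

```python
# A minimal sketch of a one-way ANOVA across three respondent groups,
# using SciPy. The 1-to-10 ratings are hypothetical.
from scipy import stats

group_a = [7, 8, 6, 9, 7, 8]
group_b = [5, 6, 5, 7, 6, 5]
group_c = [8, 9, 7, 8, 9, 8]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g., < 0.05) suggests the group means differ by
# more than sampling error alone would explain.
```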

Attitudinal Outcome: Questions on a survey that seek to summarize a respondent’s attitudes or views of the focus of the survey.  Such questions include overall satisfaction, likelihood of recommendation, likelihood of repurchase, etc. These questions may be used as dependent variables in statistical tests.

Articles: (There Is More Than) The One Number You Need to Grow

Attribute: A characteristic of the phenomenon under study that is measured through data generated by survey questions.

Attribute Identification: The critical stage in a survey project where the attributes to be measured are identified. If poorly done, the survey will not be comprehensive and may miss key points.

Auspices: One type of response bias where the respondent is predisposed to provide responses they feel will please the sponsor of the research.

Average (Arithmetic Mean): A summary statistic to describe the typical value for a set of data from a survey question.  It is calculated for the survey question by adding the numerical scores from each survey response and then dividing by the count of responses.
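
A minimal sketch of that calculation, skipping blank (non-response) values just as a spreadsheet average would; the responses are hypothetical.

```python
# A minimal sketch of the arithmetic mean for a question's responses,
# skipping blanks (item non-responses) as a spreadsheet AVERAGE would.
responses = [4, 5, None, 3, 5, 4, None, 2]  # None = item non-response

answered = [r for r in responses if r is not None]
mean = sum(answered) / len(answered)
print(f"Mean = {mean:.2f} over {len(answered)} responses")  # Mean = 3.83 over 6
```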

Back to Top

Survey Sample Size Calculator

Get our Excel-based calculator. It can also be used to gauge statistical accuracy after the survey has been completed.


B — Survey Glossary

Balanced Scale: A response scale that has an equal number of positive and negative response options.

Balanced Scorecard: A management approach that emphasizes having a comprehensive and “balanced” set of measurements to manage an organization.  While measures of resource-use efficiency are easy to find, measures of effectiveness in the marketplace are more challenging. Survey data are one source of effectiveness measures.

Articles: Customer Insight Metrics: The Issue of Validity, Hilton Hotel Customer Survey Program, Measuring Service Effectiveness

Bar Charts: A commonly used chart to present frequency distributions from survey data, contrasting the distinction among response options.  The bars may be presented horizontally or vertically (column bar or histogram).

Benchmarking: A management technique to compare the performance of an organization against comparable organizations.  Very difficult to do with performance measured by survey data due to the number of factors that affect survey responses.

Bias: Any phenomenon that results in survey responses not reflecting the true feelings of the respondent, making the survey findings less valid and meaningful.  Bias can never be eliminated in full, but surveyors should always try to minimize it. See: Instrumentation Bias, Administration Bias, Response Bias, Non-Response (Participation) Bias, Sample Bias, Administration Mode Bias.

Articles: Bolton Local Historic District Survey, What Survey Mode is Best?

Bimodal: A distribution of data where the values cluster around two values in the data set as opposed to clustering around one value. In a survey this would indicate two distinct groups of opinion within the respondent group.

Binary Choice: A specific type of forced choice question type where the respondent is given just two response options, such as true or false, yes or no. This question type generates categorical (nominal) data. Should be used where the responses can only legitimately be an “either or,” that is, a neutral position is not viable.  Pollsters sometimes use a variation where they present two statements from which the respondent must choose.  Statements must be polar opposites with non-loaded phrasing, otherwise bias is introduced.

Articles: Survey Question Design: Headlines or Meaningful Information?

Bivariate Statistics: Statistical analysis that examines the relationship between two variables (survey questions). Correlation is the most common type of bivariate analysis.

Bottom Box Scoring: For a scalar question, the “bottom box” is a cumulative frequency distribution for those responses at the bottom end of the scale up to some arbitrary point on the scale.  For example, in a 1-to-10 scale, the “bottom box” may be the percent of respondents scoring 1 through 6.  See also Top Box Scoring and Net Scoring.
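
A minimal sketch of bottom box (and top box) scoring on a 1-to-10 scale; the scores and cutoff points are hypothetical illustration choices.

```python
# A minimal sketch of bottom box (and top box) scoring for a 1-to-10
# scale question. The cutoffs shown (1-6 and 9-10) are one common choice.
scores = [10, 9, 6, 8, 3, 10, 7, 5, 9, 2]

bottom_box = sum(1 for s in scores if s <= 6) / len(scores)
top_box = sum(1 for s in scores if s >= 9) / len(scores)
print(f"Bottom box: {bottom_box:.0%}, Top box: {top_box:.0%}")  # 40%, 40%
```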

Branching (also known as Conditional Branching and “Skip and Hit”): A technique where the flow of the questionnaire for a respondent depends upon the responses to a question.  Useful to get more detailed feedback or to skip respondents over questions that are not applicable.

Articles: Creating a Survey Program for a University Help Desk

Back to Top

C — Survey Glossary

Categorical Data (also known as Nominal Data): One of four data types. Checklist questions generate these data since respondents are selecting responses that belong to distinct categories. Analysis limited to frequency distributions.

Article: Survey Question Choice: How Question Format Affects Survey Data Analysis

CATI (Computer Aided Telephone Interviewing software): Software that manages the survey delivery process for telephone interviewers.

Articles: Tips for Selecting an Online Survey Software Tool

Census: As opposed to surveying a sample from the population, census surveying is where we invite everyone in our target population to take the survey. If everyone takes the survey, then we have conducted a census.  Otherwise, we have data from a sample and need to use sampling statistics.

Articles: Survey Sample Selection: The Need to Consider Your Whole Research Program

Central Tendency: A fancy term for the “typical value” in a data set.  Averages (mean, mode, and median) are common measures of central tendency.

Checklist Question Type (also known as Multiple Choice Question Type): The respondent is presented a set of items from which to select the most appropriate item (single response checklist) or all that apply (multiple response checklist).  Generates categorical data.

Chi Squared: A statistical test used to determine if the difference in frequency distributions between two survey data sets of categorical data is statistically significant. Can be applied to rating scale data by treating each scale point as a category, eliminating concerns about the interval properties of the data needed for a t-test.
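
A minimal sketch of such a test using SciPy’s chi2_contingency; the response counts for the two groups are hypothetical.

```python
# A minimal sketch of a chi-squared test of independence comparing the
# response distributions of two groups. Counts are hypothetical.
from scipy.stats import chi2_contingency

#                 Option A  Option B  Option C
observed = [[30,       45,       25],   # Group 1
            [50,       30,       20]]   # Group 2

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value suggests the two distributions differ significantly.
```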

Closed-Ended Question: As opposed to an open-ended question that generates a textual response, a closed-ended question generates a limited set of responses that can readily be coded in a data base with some number or symbol that represents a response.  Multiple-choice, ordinal, interval, and ratio questions generate closed-ended responses.

Cluster Sampling: A two-stage sampling approach used most appropriately where an interviewer is traveling to respondents and constraining travel costs is a concern.  In the first stage, several clusters (for example, cities or towns in a country) are randomly selected.  In the second stage, members from the selected clusters are randomly selected for participation.

Comparative Scales: Question types where multiple items are presented to the respondent who is asked to evaluate the items against each other, for example, forced ranking or fixed-sum question types.

Composition Effect: A non-measurement error caused by the survey administration mode.  It results from the likelihood that certain invitees will more likely respond to one administration mode than another.  This is an argument for mixed-mode surveying.

Concern: A type of response bias introduced by phrasing of the invitation, introduction, or survey questions that creates a concern about privacy or some other worry on the part of the respondent.  Likely leads to non-participation or item non-response.

Articles: An Honest Survey Invitation?

Conclusion Validity: The extent to which the findings of a research study seem reasonable. If the conclusions don’t seem reasonable, that may indicate some error in the research process, such as an instrumentation bias, administration bias, or statistical error.

Conditional Branching (also known as Branching and Skip & Hit): A process where the flow of a questionnaire is determined by the respondent’s answer to a question.  Useful for skipping respondents past questions that do not apply to them.

Articles: Creating a Survey Program for a University Help Desk

Confidence Interval: The range or interval within which the population mean likely lies.  It indicates how well the sample mean represents the population mean. By convention, a 95% confidence interval is used, indicating that if the survey were done 20 times, 19 of the 20 times (95%) the survey mean would fall within the confidence interval, as defined by the confidence statistic. Can only be applied to survey questions that generate quantitative data (interval and ratio data).

Articles: Statistical Confidence in a Survey: How Many is Enough?
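
A minimal sketch of a 95% confidence interval computed from hypothetical rating data, using the t distribution.

```python
# A minimal sketch of a 95% confidence interval around a sample mean,
# using the t distribution. The ratings are hypothetical.
import math
import statistics
from scipy import stats

ratings = [7, 8, 6, 9, 7, 8, 5, 8, 7, 9]
n = len(ratings)
mean = statistics.mean(ratings)
s = statistics.stdev(ratings)            # sample standard deviation
t_crit = stats.t.ppf(0.975, df=n - 1)    # two-tailed critical value, 95%
half_width = t_crit * s / math.sqrt(n)   # the confidence statistic

print(f"Mean {mean:.2f}, 95% CI: {mean - half_width:.2f} to {mean + half_width:.2f}")
```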

Confidence Statistic:  A statistic that defines the confidence interval.  It can be generated for data from a survey question that generates quantitative data — interval and ratio data. The statistic incorporates the count of values and variance in the data set.

Confidentiality: A promise typically conveyed in a survey invitation that data collected from a respondent will be handled appropriately and not shared with those who should not see it.  Lack of a confidential guarantee may lead to non-response or a response bias in how questions are answered. Related to but different from anonymity.

Article: HHS Hospital Quality Survey

Conformity Bias: A type of response bias where the respondent provides answers that conform to societal norms.

Continuous Scale: A scale presented as a continuous line with endpoints only.  That is, the scale does not contain discrete response options, such as a 1-to-5 scale.  The respondent is asked where they fall along the continuum with the response coded typically as a millimeter measurement. See Visual Analog Scale.

Correlation: A statistical process that examines the strength of association between two data sets, as expressed by the correlation coefficient.  It ranges from +1 (perfect positive correlation) to 0 (no correlation) to -1 (perfect negative correlation).  Correlation assumes ratio data properties; with the truncated, integer-only data sets generated by interval rating scale questions, strong correlations may be an artifact of the data.
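
A minimal sketch of a Pearson correlation between two survey questions answered by the same respondents; the paired ratings are hypothetical.

```python
# A minimal sketch of a Pearson correlation between two survey questions
# answered by the same respondents. The paired ratings are hypothetical.
from scipy.stats import pearsonr

satisfaction = [7, 8, 6, 9, 5, 8, 7, 9]
likely_to_recommend = [6, 8, 5, 9, 4, 7, 7, 10]

r, p_value = pearsonr(satisfaction, likely_to_recommend)
print(f"r = {r:.2f} (p = {p_value:.4f})")
# r near +1 or -1 indicates strong association; near 0, little association.
```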

Critical Incident Study: A process for determining the attributes to be measured through a survey questionnaire. Interviewees are asked to describe “critical” incidents that formed their opinions.  The narratives hopefully identify salient attributes to be measured for the broader group through a survey instrument.

Cumulative Frequency Distribution: For questions with ordinal properties (which includes interval and ratio questions), the percentage of respondents who chose response options up to (or down to) a specific point in the ordered scale. In other words, the frequency distribution accumulates to the chosen scale point.  For example, the percentage that Agrees or Strongly Agrees with some statement. Top Box and Bottom Box scores are cumulative frequency distributions, and Net Scoring is calculated from those scores.

Customer Experience Design: The practice of designing a product or service with a primary concern to optimize the customers’ experience.

Articles: Customer Experience Design – Do our designs bring out the best or the worst in our customers?, Customer Experience Management — By Design

Customer Experience Management (CEM or CX): A management approach that emphasizes the experience of the customer in use of products and related services.  Strong focus on process design and quality execution as it affects the customer.  Surveys provide feedback to help identify processes in need of improvement.

Articles: A Very Short Customer Journey Map, “The Effortless Experience” Book Review

Customer Feedback Management (CFM): One element of a CEM program that focuses on capturing information from customers as a basis for improving product experiences as well as service processes that involve the customer.  Surveys are a common feedback method.

Article: Capturing the Value of Customer Complaints

Back to Top

D — Survey Glossary

Data Types: Four data types exist — categorical (or nominal), ordinal, interval, and ratio — and the analysis capabilities are greatest for ratio and lowest for categorical.  A survey question type will generate one type of data, thus defining the analysis that can be done with data from that question. Most advanced statistical tests assume ratio properties.

Articles: Survey Question Choice: How Question Format Affects Survey Data Analysis

Demographic Questions: Survey questions that identify characteristics of the respondent, such as age, gender, income level, years of education, or years as an employee or customer. Used to analyze differences across groups.

Articles: Data Collection Form Design Issues, An Honest Survey Invitation?

Descriptive Research: The second of three stages in a research program, following Exploratory and preceding Prescriptive Research. As it sounds, this research seeks to describe the phenomenon under study. Surveys are a useful tool for such research.

Descriptive Statistics: Basic statistics that are run for each survey question individually, including mean, median, mode, standard deviation, and confidence statistic.

Discrete Scale: A response scale where a series of discrete options is presented for selection by the respondent, for example, a 1-to-10 numerical and/or verbal scale.

Dispersion: Variance and Standard Deviation are statistics that measure the level of dispersion, which is how tightly or loosely clustered the data are around the mean of those data.  For a survey question, dispersion indicates whether the respondent group shares similar feelings or whether views differ.

Double-Barreled Question: A common type of instrumentation bias where the survey question is actually composed of two separate questions.  Makes interpretation of the survey data generated highly problematic.

Back to Top

E — Survey Glossary

Endpoint Anchoring (also known as Polar Anchoring): Rating scales where only the endpoints of the scale are anchored with verbal descriptors.  Can be used with discrete numerical scales or continuous scales.

Article: Scale Design in Surveys

Event Surveys (also known as Transactional or Incident Surveys): A survey program where those invited to participate are people who have just completed some transaction, for example, a training class or a hotel stay.  Useful for quality control purposes.  Complementary to periodic surveys.

Articles: Practical Points for an Event Survey, Lost in Translation: A Charming Hotel Stay with a Not-So-Charming Survey, Home Depot Customer Satisfaction Survey, Misleading (or Lying) with Survey Statistics, Complaint Identification: A Key Outcome of World Class CRM Surveys

Exploratory Research: The first of three stages of a research program, followed by Descriptive and Prescriptive Research. This stage seeks to develop a broad contextual understanding of the phenomenon of interest that can set the stage for later research.  Personal and small-group interviews (focus groups) are useful tools here.  Surveys are not a proper tool for exploratory research since the survey instrument would rely on mostly open-ended questions, which have a high respondent burden.

Back to Top

F — Survey Glossary

Face Saving Bias (also known as Prestige Bias): A type of response bias where the respondent provides answers so as to avoid embarrassment.

Fatigue: Where the cognitive demands of the survey question, especially a checklist with many items, are such that the respondent’s choice will display primacy effect, recency effect, or an item non-response. Also used to describe the “survey fatigue” from too many survey invitations.

Fixed Sum Question Type (also known as Fixed Allocation or Constant Sum): A question type where the respondent is asked to allocate points across a number of items, for example, allocating 100 points across 5 items.  Generates interval data.  Useful to measure relative importance.

Focus Groups (also known as Small Group Interviews): An exploratory research technique where a group of people are interviewed collectively.  Typically done in person, these may now be done using internet-enabled communications, such as online discussion groups.

Follow-Up Notices (also known as Reminders): Notices sent after the initial survey invitation to remind the sample members to take the survey.  Useful to increase accuracy and reduce participation bias.

Forced Choice Scale: A design option for discrete scales where the respondent is not presented with a neutral option.  Example: a 6-point rating scale with three positive and three negative options but no neutral.

Forced-Ranking Question Type (also known as Rank Order): A question type where the respondent is presented a number of items and is asked to place the items in rank order along some decision criterion.  Generates ordinal data.  Can be problematic for respondents to complete.

Fractionation Question Type: A question type where the respondent is presented with a visual line with numerical designations from zero to infinity along with some numeric point that represents “average” for the attribute being measured.  The respondent is asked to indicate where on that numeric scale their feelings lie.  Useful for measuring ongoing improvement.  Generates ratio data.

Free-Form Question Type (also known as Comments, Open-Ended, and Verbatims): Questions where the respondent is asked to provide a textual response.

Frequency Distribution: When examining the results from a survey question, the frequency distribution indicates the percentage of respondents who chose each response option.
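
A minimal sketch of tallying a frequency distribution for a checklist question; the response labels are hypothetical.

```python
# A minimal sketch of a frequency distribution for a checklist question,
# using Counter. The response labels are hypothetical.
from collections import Counter

responses = ["Phone", "Email", "Email", "Chat", "Phone", "Email", "Chat", "Email"]
counts = Counter(responses)

for option, count in counts.most_common():
    print(f"{option}: {count / len(responses):.0%}")
# Email: 50%, Phone: 25%, Chat: 25%
```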

Fully Anchored Verbal Scale: A discrete scale where every position on the scale is anchored with a verbal description. Contrast with Endpoint Anchored.

Back to Top

G — Survey Glossary

Generalizability: The ability to make inferences (to generalize) from the sample data to the population.

Guttman Scale: An ordinal scale type where the respondent is presented an ordered series of statements such that the respondent will agree with all the statements from one end of the scale up to some point.  The last agreed-to statement is an index of the respondent’s overall feelings.  Best used for hierarchical constructs.

Back to Top

Tips for Successful Survey

Request our white paper that outlines vital points for an effective survey program.


H — Survey Glossary

Hardcopy Survey Administration Mode:  Use of paper for survey invitations and data collection, such as used in postal surveys.

Headings: Short title for a topical section of the questionnaire perhaps with accompanying brief text. These help set the respondent’s mental frame for those questions.

Histograms: A graphing technique to show frequency distributions for a survey question.  Also known as vertical or column bar charts.

Horizontal Numerical Scale: A scale presentation technique for discrete, numerical scales. A horizontal line with verbal endpoint anchors and numbers for each scale point is presented to the respondent.

Back to Top

I, J — Survey Glossary

Ideographical Scale: Where the response scale is presented to the respondent using graphical symbols to represent points on the scale.  Smiley faces or other emoticons are examples of ideographic scales. Useful where language issues may exist, such as a pain-measurement instrument in an emergency room that may be administered to those without command of the local language.

Importance Measurement: A survey program whose objective is prescriptive research requires that we learn the importance of the various causes that drive the respondents’ positive or negative attitudes, such as employee or customer satisfaction.  Various question types and analytical procedures can serve this purpose.

Articles: The Importance of Measuring Importance — Correctly — on Customer Surveys

Incentives: A means to increase response rates, thus increasing statistical accuracy and reducing participation bias.  Incentives may be inducements — providing the gift with the invitation — or rewards — providing the gift only after completion of the survey.  The latter may introduce measurement error into the survey if people are “taking” the survey but not providing actual answers to the questions.

Articles: Bribes, Incentives, and Video Tape (er, Response Bias), Home Depot Customer Satisfaction Survey

Instructions: Guidance provided to the respondent on how to complete the survey. The tendency for survey designers is to over-instruct.

Instrument, Survey (also known as Questionnaire): An instrument measures something, and a survey instrument measures respondents’ views on the phenomenon of interest for the research study.

Instrumentation Bias: Improper phrasing of survey questions that creates a measurement error.  Examples of instrumentation bias: ambiguous phrasing, double-barreled questions, loaded language, leading language, unrealistic recall expectations.

Articles: Sampling Error — And Other Reasons Polls Differ, Have You Met Mayor Menino?  Lots Have, HHS Hospital Quality Survey, An Example of the Impact of Question Sequencing, Bolton Local Historic District Survey, The importance of Good Survey Question Wording: Even Pros Make Mistakes

Interactive Voice Response Administration Mode: Use of an IVR (or VRU) where a recorded script is used to deliver the survey questions and the telephone keypad is used to collect the numerical responses.

Articles: Automated Phone Surveys, Tips for Selecting an Online Survey Software Tool

Interval Data: One of four data types where the response items are in order (ordinal) but also where a consistent unit of measurement is applied to the scalar items.  That is, the difference between adjacent response options is equal throughout the scale.  Improper scale design will compromise the interval properties.  Researchers argue that the cognitive requirements make a truly interval scale on surveys highly unlikely.  Interval data properties are needed for many statistical procedures.  True interval data allow for addition and subtraction and arithmetic mean to be calculated; however, multiplication and division require ratio properties.

Articles: Survey Question Choice: How Question Format Affects Survey Data Analysis

Interval Rating Scale (also known as Rating Scale or Interval Scale):  A response scale where a consistent unit of measurement has been used so that the data generated possess interval properties.  Common examples are 1-to-5 or 1-to-10 scales. Note: badly designed scales are not interval, but likely ordinal.

Interviewer Bias: A measurement error introduced by the interviewer by an inconsistent presentation of survey questions to the respondent.  The effect could be unintentional or intentional.

Introduction: Brief text at the beginning of the survey instrument meant to provide necessary information for the respondent to then take the survey, including any instructions.  Should work in conjunction with text in the invitation and not be duplicative.

Invitation: A request for a member of the invitation sample to participate in the survey process.  The invitation mode may be different from the data collection mode (for example, an email invitation with a link to a webform survey), or they may be the same.

Articles: An Honest Survey Invitation?, What’s the Point? An Unactionable Transaction, Communicate the Survey Findings — and Your Actions, HHS Hospital Quality Survey

Invitation Sample: Those members of the population who are selected to be asked to participate in the survey.  The invitation sample is distinct from the response sample, who are those people who actually do participate.  Both groups may be referred to as the “sample.”  Survey sample statistics are derived from the latter.

Irrelevancy: One type of response bias where the respondent just provides any random answer because the survey is irrelevant to them. Offering an incentive provided only with completion of the survey may introduce this bias.

Articles: Home Depot Customer Satisfaction Survey

Item Non-Response: When the respondent does not answer certain questions — other than those that they have been branched beyond — this is considered an item non-response.  Should the respondent not complete a good number of questions, then the entire survey response should probably be removed from the data set since a strong inference exists that the respondent did not take the survey seriously.

Back to Top

K, L — Survey Glossary

Kurtosis: A statistic that describes how peaked or flat the distribution of values is in a data set.

Leading Language: A type of instrumentation bias where the phrasing of a question leads the respondent towards selecting a particular response option.  The leading language may be intentional or unintentional, but the bias compromises the survey data validity.

Article: Bolton Local Historic District Survey — a Critique

Loaded Language: Use of highly emotive language in a survey question that is meant to drive a desired response from the respondent.  Similar to leading language in effect, it creates an instrumentation bias.

Likert or Likert-Type Scale: The most widely used survey scaling approach, named for Rensis Likert, an industrial psychologist. Generally, the respondent is presented with a statement and is asked his/her level of agreement with the statement by selecting a point on the discrete scale anchored with verbal statements and frequently with numbers.  The scale should be balanced between positive and negative agreement options. Widely used in part because of its flexibility.  Properly constructed, the data can be assumed to possess interval properties though it is known that the acquiescence bias leads to respondents selecting positive agreement options.

Articles: Scale Design in Surveys

Looping: A special form of branching where the same block of questions is repeated for various items, for example, for each course the respondent took or for each department a customer visited.

Back to Top


M — Survey Glossary

Mean: One of three types of averages. How to calculate: sum all the values in a data set and divide by the count of values. A blank cell in a spreadsheet is not included in that count.

Measurement Error: The difference between the measured value of an attribute and its actual value.  Various biases in instrument design and survey administration contribute to measurement error.

Median (also known as the 50th Percentile Value): One of three types of averages. Found by sorting in order all the values in a data set and then selecting the value in the middle.  If the count of values is an even number, then the median is the average of the two values on either side of the middle.
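
A minimal sketch showing the median calculation, including the even-count rule described above.

```python
# A minimal sketch of the median, including the even-count rule,
# using Python's statistics module. The values are hypothetical.
import statistics

odd_count = [2, 9, 4, 7, 5]        # sorted: 2 4 5 7 9 -> middle value
even_count = [2, 9, 4, 7, 5, 6]    # sorted: 2 4 5 6 7 9 -> mean of 5 and 6

print(statistics.median(odd_count))   # 5
print(statistics.median(even_count))  # 5.5
```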

Mental Frame: Survey invitations and introductions establish a contextual framework for the respondent in which they should (or will) consider the survey questions that follow.  That is the “mental frame”: in other words, what the respondent should be thinking about when answering. For example, transactional surveys will set the mental frame to be a specific event that just completed, typically by naming the event. Having a consistent mental frame for all respondents increases data validity. Section headings also set — and reset — the mental frame.

Mixed-Mode Administration: Use of different modes for one survey project to collect responses from different respondents, for example, offering respondents the option to take a survey by paper or webform.  Goal is to increase response; however, the mode affects the nature of the responses, introducing measurement error that complicates data interpretation.

Article: Want Higher NPS & Survey Scores? Change to Telephone Survey Mode

Mode: Regarding survey administration, the medium or method of interaction.  Regarding statistics, one of three types of averages.  In a data set, the most frequently occurring value.

Multiple Choice Question Type (also known as Checklist Questions): A question type where the respondent is asked to choose one or more items from a list of items.  Generates categorical data.

Multiple Response Checklist: A checklist or multiple choice question type where the respondent is instructed to “check all that apply” from the options presented. Instructions and edit procedures may allow the respondent to check up to a certain number or force the selection of a certain number.  This variation is useful to identify key drivers of feelings.

Article: The Importance of Measuring Importance — Correctly — on Customer Surveys

Multivariate Statistics: Analytical procedures that examine the relationship among many variables (survey questions).  Typically used to determine the causal factors that drive some outcome, such as satisfaction.

Article: Effortless Experience: Statistical Errors

Back to Top

N — Survey Glossary

Nominal Data Type (also known as Categorical): Data that results from questions with a group of response options that are not related in any order.  Only frequency distributions can be generated from such data.

Article: Survey Question Choice: How Question Format Affects Survey Data Analysis

Net Promoter Score® (NPS): A metric that has gained great prominence in customer loyalty circles, though highly controversial.  Derived from a survey question that asks likelihood of recommendation posed on a 0-to-10 scale.  Net Scoring is then applied, with the percentage of scores from 0 to 6 (Detractors) subtracted from the percentage of scores of 9 or 10 (Promoters).

Articles: (There Is More Than) The One Number You Need to Grow, Want Higher NPS & Survey Scores? Change to Telephone Survey Mode, Net Promoter Score — Summary & Controversy, Net Promoter Score® Discussion Notes, Survey Programs Negative Impact on Customer Satisfaction

Net Scoring: A statistic to summarize the data set for a survey question, calculated by subtracting the percentage of respondents who provided low scores (so-called “bottom box” scores) from the percentage of respondents who provided high scores (so-called “top box” scores). Net scoring does not require the assumption of interval data properties, and it focuses attention on the low end of the distribution.

Articles: (There Is More Than) The One Number You Need to Grow
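
A minimal sketch of net scoring applied to hypothetical 0-to-10 likelihood-to-recommend data, which is also the Net Promoter Score calculation described in the previous entry.

```python
# A minimal sketch of net scoring on 0-to-10 likelihood-to-recommend
# data, i.e., the NPS calculation. The responses are hypothetical.
responses = [10, 9, 8, 6, 10, 7, 3, 9, 8, 5]

promoters = sum(1 for r in responses if r >= 9) / len(responses)   # top box: 9-10
detractors = sum(1 for r in responses if r <= 6) / len(responses)  # bottom box: 0-6
nps = (promoters - detractors) * 100

print(f"Promoters {promoters:.0%}, Detractors {detractors:.0%}, NPS = {nps:.0f}")
# Promoters 40%, Detractors 30%, NPS = 10
```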

Non-Probability Sampling: Sampling processes where not every member of the population has an equal chance of being invited to participate.

Non-Response Bias (also known as Participation or Self-Selection Bias): A bias introduced into the data set by those who chose to not participate, the assumption being that those who do participate likely hold different views from those who do not.

Articles: Bolton Local Historic District Survey, The Hidden Danger of Survey Bias, Why the Polls Were Wrong — Response Bias Combined with Non-Response

Numeric Scales: Scalar questions where the respondent is presented with an ordered set of integers from which to choose, for example, 1 to 5 or 1 to 10, coupled with verbal anchors for at minimum the endpoints but also perhaps for each discrete point on the scale.  The numeric presentation is used in hopes of giving the data interval properties beyond just ordinal properties.

Back to Top

O — Survey Glossary

Open-Ended Questions (also known as Comments or Verbatims): Questions that ask for a free-form textual response from the respondent.

Ordinal Data Type: One of four data types, these data come from questions where the respondent answer indicates some order. Example: rank ordering of factors that drove a purchase decision. Only order is captured, not relative distance between items in that order.

Article: Why the Polls Were Wrong — Response Bias Combined with Non-Response

Back to Top

P — Survey Glossary

Paired Comparison Question Type: A question type where numerous pairs of items are presented to respondents, who are instructed to select the preferred item in each pair. An ordinal ranking can be statistically derived.

Participation Bias (also known as Non-Response or Self-Selection Bias): A non-measurement error introduced into the data set that results from an uneven participation in the survey by various groups or by people with different intensity of feelings.

Articles: Bolton Local Historic District Survey, The Hidden Danger of Survey Bias, Why the Polls Were Wrong — Response Bias Combined with Non-Response

Periodic Surveys: A survey that is administered on a periodic basis, such as annually.  Sometimes called relationship surveys since the focus of the instrument is to measure the overall relationship.

Articles: Home Depot Customer Satisfaction Survey, Customer Satisfaction Surveys: The Heart of a Great Loyalty Program

Pie Charts: A common charting technique useful for checklist questions, especially demographic questions.  However, they are especially poor for comparative analysis because the eye cannot readily compare the size of the pie slices.

Pilot Tests (also known as a Pretest): Final testing of a survey instrument. Members of the population are asked to take the survey while being observed and interviewed. Goal is to find ambiguous language and other minor imperfections that would affect instrument validity.

Articles: Effortless Experience: Questionnaire Design & Survey Administration Issues, The importance of Good Survey Question Wording: Even Pros Make Mistakes

Pivot Tables: The Excel implementation of “cross tabs.” Pivot tables allow for slicing the data set along various variables and manipulating data.

Polar Anchoring (also known as Endpoint Anchoring): Rating scales where only the endpoints of the scale are anchored with verbal descriptors.  Can be used with discrete numerical scales or continuous scales.

Article: Scale Design in Surveys

Population: The group of interest for our survey research.

Population Parameters: If a census is successfully completed, meaning everyone completes the survey — an unlikely situation — then the calculated data from the responses are population parameters, not survey statistics.

Positional Checklist Question Type: An ordinal scale question type where the respondent is presented an ordered list of statements and is asked to pick the position that best represents their views.  Ones below the choice would be too negative, ones above too positive.

Postal Survey Administration Mode: An administration mode where the invitation is extended via postal letter and data capture is done via a paper survey returned by mail.

Article: An Old Dog’s New Tricks: Postal Mail Surveys

Precision: If some research can be repeated numerous times with nearly the same result, then it is said to be precise.  Not to be confused with accuracy, though the two are frequently used interchangeably in casual usage.

Article: Statistical Confidence in a Survey: How Many is Enough?

Prescriptive Research: After exploratory and descriptive research, a researcher can then engage in research that prescribes a course of action.  Usually involves cause-and-effect analysis.

Pretest (also known as a Pilot Test): A final testing of a survey instrument before launching the survey. Members of the population are asked to take the survey while being observed and interviewed. The goal is to find ambiguous language and other minor imperfections that would affect instrument validity.

Article: Effortless Experience: Questionnaire Design & Survey Administration Issues

Primacy Effect: The tendency for respondents to choose the first item presented for them to consider.  Especially prominent in telephone surveys due to the need for the respondent to remember the list.  Frequently seen along with recency effect.

Probability Sampling: Sampling processes where every member of the population has an equal chance of being invited to take the survey.

Progress Indicators: Some visual indicator, typically a graph, that shows the respondent how far along they are in taking the survey.  Purpose is to reduce survey abandonment.

Purposive Sampling: A non-probability sampling process where individuals are specifically selected to participate in research to provide a range of opinion.  Frequently used for focus groups.

Push Polling: A survey or poll where the objective is to push information to the respondent as opposed to gathering information from the respondent regarding their views or attitudes.  Commonly done in politics to create negativity for an opponent, but may also be used in other surveys to create awareness of some product.  Use of the term “aware” in questions is a good indicator.

Article: Bolton Local Historic District Survey — a Critique

Back to Top

Q — Survey Glossary

Question: A measuring tool to generate data that will measure some attribute (or characteristic) of interest in the research study.  The wording in the question operationalizes the attribute so that the respondent can present their views through some measurement scheme, such as using a scale or checkbox. Various question phrasing and question types could be used to measure the same underlying attribute.

Question Types: Broadly speaking, four question types exist corresponding to the type of data each generates: categorical (or nominal), ordinal, interval, and ratio. Each question type has multiple question formats.

Articles: Survey Question Types: When to Use Each, Survey Question Choice: How Question Format Affects Survey Data Analysis

Questionnaire (also known as Survey Instrument): An instrument to measure how some group of interest feels on some subject.

Quota Sampling: A non-probability sampling process where individuals from different demographic groups are invited to participate until a quota is reached, typically to reflect the group’s relative percentage in the general population.

Back to Top

R — Survey Glossary

Random Error: In any measurement system, including a survey, some variation in scoring is expected and normal.  This normal variation is considered random error in contrast to systematic (or abnormal) error.

Random Sampling: A probability sampling process where every member of the population has an equal chance of being invited to take the survey.

Randomization: To avoid sequencing effects, the order of a series of questions may be randomized.  Also, the order of response options in a checklist question may be randomized to avoid primacy effects.

Rank, Ranking, Rank Order: When we ask the respondent to specify the order of some items, that is a ranking process.  This generates ordinal data.  Note difference from “rate” — and they are frequently confused.

Rate, Rating: The result when we ask the respondent to specify how they feel about something using a rating scale.  Note difference from “rank” — and they are frequently confused.

Rating Scale (also known as Interval Rating Scale):  A response scale where the response options have interval properties.  That is, a consistent unit of measurement is present so that the “distance” between response options is the same.

Ratio Data: The highest-level data type, found typically in physical measures such as length, weight, or headcount.  A ratio scale has a true zero (as opposed to an arbitrary zero on some scale), meaning zero quantity of that item, such as income.  Most, but not all, demographic questions generate ratio data, though the data may be solicited in an ordered checklist. All mathematical operations can be performed on ratio data, including multiplication and division. Many multivariate analyses assume ratio data properties.

Article: Survey Question Choice: How Question Format Affects Survey Data Analysis

Recall Bias: A type of instrumentation bias regarding unjustifiable memory expectations on the part of the respondent.  Validity of responses is in doubt.

Recency Effect: The tendency for a respondent to provide as a response the last response option encountered.  Especially prominent in telephone surveys.

Reliability or Reproducibility: A critical concept for any research effort. If the research can be reproduced by other researchers, then the findings are said to be “reliable” and thus have credibility.  Lacking reproducibility, the findings should be viewed skeptically. Note the difference from “validity.”

Articles: The Effortless Experience Book Review, Effortless Experience: Questionnaire Design & Survey Administration Issues, Customer Insight Metrics: The Issue of Validity

Reminders (also known as Follow-Ups): A notice communicated to a potential respondent to remind them of the invitation to take the survey.  Useful to increase statistical accuracy and to reduce participation bias.

Respondent Burden: The amount of work or “burden” required of the respondent to complete the survey. Higher burden will lower response rates with associated non-response bias — and perhaps create a response bias.

Article: The Poetry of Surveys: A Respondent’s Survey Design Lessons

Response Bias: A bias that the respondent brings to the surveying process that may be activated by the administration process or by the survey instrument wording.  The bias can lead to non-participation (the respondent not taking or quitting the survey), item non-response, or measurement error as the respondent provides inaccurate responses.  Examples are: acquiescence (yes saying), auspices, face saving (prestige), conformity, concern, and irrelevance.

Articles: Bribes, Incentives, and Video Tape (er, Response Bias), What’s the Point? An Unactionable Transaction Survey, Have You Met Mayor Menino? Lots Have, An Honest Survey Invitation?, Why the Polls Were Wrong — Response Bias Combined with Non-Response

Response Rate: The number of people who complete the survey divided by the number who actually received the invitation.

Response Sample: The sample data from those who complete the survey. A subset of the invitation sample, though “sample” is applied to both groups.

Response Set: The collection of items from which the respondent is asked to select his/her response.

Reverse Coded Questions: To avoid respondents falling into a routine, a few rating scale questions are posed in the “reverse,” meaning the positive statement will be at the opposite end of the scale from previous questions.  These should be used early in the questionnaire before a routine has been established. In the data set, reverse coded questions should be “unreversed” to ease interpretation of results.

Article: Bolton Local Historic District Survey — a Critique

Routine: A response effect where respondents start scoring a series of rating scale questions with the same response without really reading the question and considering their true feelings.  Introduces measurement error into the data.

Back to Top

S — Survey Glossary

Sample, Sampling: A subset of the population that is selected to receive invitations to participate in the survey.  Sampling may be either probabilistic (every member having an equal chance of selection) or non-probabilistic.

Article: Survey Statistical Accuracy Defined

Sample Bias: Results when creation of the sampling frame excludes a certain type of person, leading to a bias in the sample selected and thus to the respondent sample.  For example, tasking contact center agents to invite customers to participate in a survey of their experience likely will mean that interactions that did not go well would not receive the invitation.  Very similar to a selection bias.

Articles: Creating a Survey Program for a University Help Desk, What Survey Mode is Best?, Impact of Mobile Surveys — Tips for Best Practice

Sample Size Equation: The number of responses needed for the desired statistical accuracy divided by the likely Response Rate.  The result is the number of people to include in the Invitation Sample.

Articles: Survey Sample Size Calculator, Statistical Confidence in a Survey: How Many is Enough?
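
A minimal sketch of the equation with hypothetical figures:

```python
# A minimal sketch of the sample size equation: the invitation sample
# must be large enough that, at the expected response rate, enough
# completed surveys arrive. Figures are hypothetical.
responses_needed = 384        # e.g., for roughly +/- 5% at 95% confidence
expected_response_rate = 0.20

invitation_sample_size = responses_needed / expected_response_rate
print(f"Invite {invitation_sample_size:.0f} people")  # Invite 1920 people
```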

Sample Statistics: Calculated statistics from the Respondent Sample which are presented as indications of the Population Parameters.  In other words, the sample statistics are indications of how the entire group of interest would have responded, with some degree of sampling error — and probably some degree of sample and selection bias.

Sampling Error: The error introduced into our survey results by the fact that the data are from a sample rather than from the entire population. The difference between the sample mean and the population mean results from sampling error. Statistical accuracy provides an indication of the level of sampling error.

Sampling Frame: A subset of the population from which the sample is drawn.  Sampling frames are used where it is not possible or impractical to draw the sample from the entire population.  For example, if we do not have contact information for some people in the population, then the sampling frame would exclude those names lacking contact information.

Article: Caveat Survey Dolor: “Show Me the Questionnaire”

Scales: In common survey usage, a scale is an ordered series of response options, presented verbally, numerically, or ideographically, from which the respondents select to indicate their level of feeling about the attribute being measured.  More properly, a scale is a composite score of a number of survey questions that each measure the same attribute.  For example, a final exam for a class is a scaled score from multiple questions that measure the student’s knowledge of the subject matter.

Articles: Survey Question Choice: How Question Format Affects Survey Data Analysis, Scale Design in Surveys: The Impact on Performance Measurement Systems — and the Value of Dispersion

Scatter Plots: A visual depiction of the correlation between two variables found by plotting one variable on the X and another on the Y axis.

School-Grade Scale: A response scale that uses school grades — A, B, C, D, and F — as the response options.  The scale is culturally dependent.

Section Headings: Short headings perhaps with accompanying brief text that lead into a section of a survey.  These help set the respondent’s mental frame for those questions.

Segmentation Analysis: In addition to analyzing the survey data set as a whole, analysis is typically desired for specific segments of the population, found by slicing the data set along demographic variables. Common segments are: by income level, by gender, by length of relationship (customer, employee, etc.), and by frequency of experiencing some phenomenon.

Selection Bias: Results when some members of the sampling frame are less likely to participate in the survey due to the way the sample is generated.  For example, a webform survey designed for use on a laptop only — not on a mobile device (smartphone) — may result in people who primarily use a mobile device not participating.  Very similar to a sample bias.

Articles: Impact of Mobile Surveys — Tips for Best Practice, What Survey Mode is Best?, Creating a Survey Program for a University Help Desk

Self-Selection Bias (also known as Non-Response or Participation Bias): Occurs when members of the Invitation Sample who share some common characteristics choose not to participate in the survey, resulting in a biased Response Sample.

Articles: Bolton Local Historic District Survey, The Hidden Danger of Survey Bias, Why the Polls Were Wrong — Response Bias Combined with Non-Response

Semantic Differentiation Question: A question type where the respondent is presented a multi-point scale in a horizontal numerical structure where each endpoint is verbally anchored with antonyms.  Few webform tools support this question type.  Very useful in handling reverse coded questions.

Semi-Structured Questionnaire: For focus groups and interviews, a fluid questionnaire used to guide the discussion.  Good moderator skills are needed to apply such a questionnaire.

Sequencing: A response effect where the answer to one question affects the respondent’s interpretation of subsequent questions.  Also seen for the sequencing of items in a checklist question.  Question randomization can mitigate the effect.

Articles: Survey Question Design: Headlines or Meaningful Information?, Money Grows on Trees — If You Believe the Polls, An Example of the Impact of Question Sequencing

Service Recovery: An action program to address issues that customers have experienced in order to retain them as customers or to mitigate negative word of mouth.  Transactional surveys serve to identify customers in need of a service recovery act.

Articles: Capturing the Value of Customer Complaints, Service Recovery at United Airlines, Complaint Identification: A Key Outcome of World Class CRM Surveys, Service Recovery Turned Sour: Keep the Lawyers from Turning Fairness Foul, Lessons (that should have been learned) from Service Recovery, A Sporting Service Recovery, Communicate the Survey Findings — and Your Actions, Survey Project Resource Requirements, Sears IVR Customer Satisfaction Survey, Hilton Hotel Customer Survey Program, Swisscom Pocket Connect Survey Design Review

Skip and Hit (also known as Branching and Conditional Branching): Where the question path a respondent follows is based upon the response to a particular question.  Useful for avoiding not applicable questions.

Articles: Creating a Survey Program for a University Help Desk

Stacked Bar Charts: Vertical or horizontal bar charts that display frequency distributions from a survey question.  Best used when there’s a limited number of response options for the question.

Standard Deviation: Square root of variance.  A measure of the dispersion of values in the data set around the mean.
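
A minimal sketch of that relationship using Python’s statistics module, with hypothetical 1-to-5 ratings.

```python
# A minimal sketch showing standard deviation as the square root of
# variance (population versions), with hypothetical 1-to-5 ratings.
import math
import statistics

scores = [4, 5, 3, 5, 4, 2, 5, 4]
variance = statistics.pvariance(scores)   # 1.0 for these data
std_dev = math.sqrt(variance)             # matches statistics.pstdev(scores)
print(f"variance = {variance}, std dev = {std_dev}")
```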

Statistic: A number meant to represent some characteristic of a larger data set.

Statistical Confidence: Commonly used interchangeably with statistical accuracy, it tells us how well a statistic from a sampling process represents the population mean for the data set.

Stratified Random Sampling: A variation of random sampling to generate the invitation sample, used to help ensure a consistent statistical accuracy across demographic segments of comparative interest, for example, product line, business region, or school district.  Sampling here is a two-stage process.  First, stratify (group) the members of the sample frame by the demographic variable.  Second, randomly sample within each stratum.
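
A minimal sketch of the two-stage process with a hypothetical sampling frame:

```python
# A minimal sketch of stratified random sampling: group the sampling
# frame by a demographic variable, then sample randomly within each
# stratum. Names and strata are hypothetical.
import random

frame = [
    {"name": "A", "region": "East"}, {"name": "B", "region": "East"},
    {"name": "C", "region": "West"}, {"name": "D", "region": "West"},
    {"name": "E", "region": "East"}, {"name": "F", "region": "West"},
]

# Stage 1: stratify by region.
strata = {}
for member in frame:
    strata.setdefault(member["region"], []).append(member)

# Stage 2: random-sample the same number from each stratum.
per_stratum = 2
invitation_sample = []
for members in strata.values():
    invitation_sample.extend(random.sample(members, per_stratum))
```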

Survey: The process of conducting research using survey methodology, though in common parlance “survey” is used to mean the survey instrument or questionnaire.

Survey Administration:  The process of inviting people to take the survey and collecting data from them.  Care must be taken to avoid introducing administration biases into the data set. See “Administration” for other related items.

Survey Instrument (also known as Questionnaire): The measuring instrument used to gauge how some group feels on the topic of the research study.

Survey Fatigue: When members of the population become so weary of repeated survey invitations that they stop taking the survey, except perhaps when they have an issue.  Can introduce a participation bias (a.k.a. non-response or self-selection bias). With more organizations surveying, fatigue has increased.

Articles: Survey Sample Selection: The Need to Consider Your Whole Research Program, Battling Survey Fatigue, The Hidden Danger of Survey Bias

Survey Question: A measuring tool to generate data that will measure some attribute (or characteristic) of interest in the research study.  The wording in the question operationalizes the attribute so that the respondent can present their views through some measurement scheme, such as a scale or checkbox. Various question phrasing and question types could be used to measure the same underlying attribute.

Articles: Survey Question Choice: How Question Format Affects Survey Data Analysis

Systematic Sampling: One of the four probabilistic sampling approaches, generally used when the sampling frame is in the form of a list or maybe people queued in a line.  Names would be selected at an interval (say, every 8th name) going through the entire list.
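
A minimal sketch of the approach with a hypothetical list-form sampling frame:

```python
# A minimal sketch of systematic sampling from a list-form sampling
# frame: a random starting point, then every k-th name thereafter.
import random

frame = [f"person_{i}" for i in range(1, 101)]  # hypothetical list of 100 names
k = 8                                           # sampling interval

start = random.randrange(k)   # random start within the first interval
sample = frame[start::k]      # every 8th name through the whole list
print(len(sample), sample[:3])
```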

Back to Top

T — Survey Glossary

Telephone Surveying: Use of telephone medium to invite and administer a survey.

Tests of Independence: Statistical tests to determine if two groups of categorical data are from the same underlying population or whether the differences are statistically significant.

Textual Data: Free-form textual responses provided in open-ended questions.

Top Box Scoring: For a scalar question, the “top box” is a cumulative frequency distribution for those responses at the top end of the scale down to some arbitrary point on the scale.  For example, in a 1-to-10 scale, the “top box” may be the percent of respondents scoring 9 through 10.  See also Bottom Box Scoring and Net Scoring.

Trade-Off Analysis: An approach used in surveys to identify key drivers of respondents’ actions or feelings by asking them to examine multiple items simultaneously and score the items to indicate relative importance.  Forced Ranking and Fixed Sum Questions Types are commonly used.

Transactional Surveys (also known as Event or Incident Surveys): A survey program where those invited to participate are people who have just completed some transaction, for example, a training class or a hotel stay.  Useful for quality control purposes.  Complementary to periodic surveys.

Articles: Practical Points for an Event Survey, Lost in Translation: A Charming Hotel Stay with a Not-So-Charming Survey, Home Depot Customer Satisfaction Survey, Misleading (or Lying) with Survey Statistics, Complaint Identification: A Key Outcome of World Class CRM Surveys

Truncated Scales: Where the scale presentation causes the respondent to not consider the entire breadth of the response options, the scale is considered truncated.  For example, checklist questions with too many items lead to primacy and recency truncation effects.  Numerical scales presented in oral telephone administration mode are observed to result in a truncated scale since the respondent typically hears “on a scale from 1 to 10,” leading them to choose the endpoints 1 or 10 more frequently than a scale presented visually on paper or webform.

Article: Want Higher NPS & Survey Scores? Change to Telephone Survey Mode

t Test: A statistical test that determines whether two data sets are from the same underlying population or not. Used to determine whether the difference between two survey scores is a statistically significant difference or whether the difference is simply due to sampling error.  Requires interval data properties.
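
A minimal sketch of a two-sample t test using SciPy, comparing hypothetical rating data from two survey waves.

```python
# A minimal sketch of a two-sample t test comparing mean scores from
# two survey waves, using SciPy. The ratings are hypothetical.
from scipy.stats import ttest_ind

wave_1 = [7, 8, 6, 9, 7, 8, 5, 8]
wave_2 = [6, 7, 5, 7, 6, 8, 5, 6]

t_stat, p_value = ttest_ind(wave_1, wave_2)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the difference between the two means is
# statistically significant rather than mere sampling error.
```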

Back to Top

U, V — Survey Glossary

Unit of Analysis: In a research study, the level at which the analysis is focused. In a business-to-business survey where we might survey multiple contact people in a client company, we might be concerned about individual responses or a client company’s responses. In a municipal survey, our focus might be on the individual or on households.

Validity: A valid survey question is one that measures what it purports to measure.  Various types of instrumentation bias can compromise validity. Note: various types of validity exist in the research world: construct, content, conclusion, face, internal, external, criterion.

Articles: The Effortless Experience Book Review, Effortless Experience: Questionnaire Design & Survey Administration Issues, Customer Insight Metrics: The Issue of Validity

Variance: A measure of the spread of data in a data set, that is, how tightly or loosely clustered the responses for a question are around the mean score.

Verbal Scale: Where the response options are presented to the respondent using words, whether spoken or written.

Verbatims (also known as Comments or Open-Ended Questions): Free-form textual responses to open-ended questions.  The term derives from interviewers capturing a respondent’s comments “verbatim,” but it is now a term used even for comments the respondent types into a webform.

Visual Analog Scale (VAS): A response scale where the respondent is presented a continuous (analog) line with only the endpoints anchored. It is considered to generate data with better interval properties.  The pain scale used in hospitals is termed a VAS though in practice it is a discrete scale with numerical, verbal, and ideographic presentation of response options.

Back to Top

W, X, Y, Z — Survey Glossary

Webform Surveying: Use of webforms to administer a survey to respondents and collect data.  Invitations are commonly extended via email, but other media could be used to extend the invitation.

We have not identified any survey terms for letters X, Y, or Z.  Help us out!

Back to Top