Survey methodology

Survey methodology is a field of applied statistics concerned with human research surveys. It studies the sampling of individual units from a population and the associated techniques of survey data collection, such as questionnaire construction and methods for improving the number and accuracy of responses to surveys. Survey methodology includes instruments or procedures that pose one or more questions that may or may not be answered.

Researchers carry out statistical surveys with a view towards making statistical inferences about the population being studied, and such inferences depend strongly on the survey questions used. Polls about public opinion, public-health surveys, market-research surveys, government surveys and censuses are all examples of quantitative research that use survey methodology to answer questions about a population. Although censuses do not include a "sample", they do include other aspects of survey methodology, like questionnaires, interviewers, and non-response follow-up techniques. Surveys provide important information for all kinds of public-information and research fields, e.g., marketing research, psychology, health-care provision and sociology.

Overview

A single survey is made of at least a sample (or the full population in the case of a census), a method of data collection (e.g., a questionnaire), and individual questions or items that become data that can be analyzed statistically. A single survey may focus on different types of topics such as preferences (e.g., for a presidential candidate), opinions (e.g., should abortion be legal?), behavior (smoking and alcohol use), or factual information (e.g., income), depending on its purpose. Since survey research is almost always based on a sample of the population, the success of the research depends on the representativeness of the sample with respect to a target population of interest to the researcher. That target population can range from the general population of a given country to specific groups of people within that country, to a membership list of a professional organization, or a list of students enrolled in a school system (see also sampling (statistics) and survey sampling). The persons replying to a survey are called respondents, and depending on the questions asked, their answers may represent themselves as individuals, their households, employers, or other organizations they represent.

Survey methodology as a scientific field seeks to identify principles governing sample design, data collection instruments, statistical adjustment of data, data processing, and final data analysis, each of which can introduce systematic and random survey errors. Survey errors are sometimes analyzed in connection with survey cost; the trade-off is often framed as maximizing quality within a cost constraint, or alternatively, reducing cost for a fixed level of quality. Survey methodology is both a scientific field and a profession, meaning that some professionals in the field study survey errors empirically while others design surveys to reduce them. For survey designers, the task involves making a large set of decisions about thousands of individual features of a survey in order to improve it.[1]

The most important methodological challenges of a survey methodologist include making decisions on how to:[1]

  • Identify and select potential sample members.
  • Contact sampled individuals and collect data from those who are hard to reach (or reluctant to respond).
  • Evaluate and test questions.
  • Select the mode for posing questions and collecting responses.
  • Train and supervise interviewers (if they are involved).
  • Check data files for accuracy and internal consistency.
  • Adjust survey estimates to correct for identified errors.

Selecting samples

The sample is chosen from the sampling frame, which consists of a list of all members of the population of interest.[2] The goal of a survey is not to describe the sample, but the larger population; this ability to generalize depends on the representativeness of the sample, as stated above. Each member of the population is termed an element. Difficulties frequently arise in choosing a representative sample, and one common result is selection bias. Selection bias occurs when the procedures used to select a sample over-represent or under-represent some significant aspect of the population. For instance, if the population of interest consists of 75% females and 25% males, while the sample consists of 40% females and 60% males, then females are under-represented and males are over-represented. To minimize selection bias, stratified random sampling is often used: the population is divided into sub-populations called strata, and random samples are drawn from each stratum, often in proportion to its share of the population.
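As an illustrative sketch (not taken from the sources above), proportional stratified sampling can be expressed in a few lines of Python. The population, the `sex` field, and the sample size are hypothetical, and the simple `round`-based allocation may need a largest-remainder correction when the proportions do not divide evenly:

```python
import random

def stratified_sample(population, strata_key, sample_size, seed=42):
    """Draw a proportionally allocated stratified random sample.

    population: list of dicts; strata_key: the field that defines strata.
    Assumes the rounded allocations sum to sample_size (true here).
    """
    rng = random.Random(seed)
    strata = {}
    for element in population:                      # group elements by stratum
        strata.setdefault(element[strata_key], []).append(element)
    sample = []
    for members in strata.values():
        # draws proportional to the stratum's share of the population
        n = round(sample_size * len(members) / len(population))
        sample.extend(rng.sample(members, n))
    return sample

# A hypothetical population that is 75% female, 25% male:
population = ([{"id": i, "sex": "F"} for i in range(75)]
              + [{"id": i, "sex": "M"} for i in range(75, 100)])
sample = stratified_sample(population, "sex", 20)
```

Run on the 75/25 population, a sample of 20 contains 15 females and 5 males, preserving the population shares.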

Modes of data collection

There are several ways of administering a survey. The choice between administration modes is influenced by several factors, including

  1. costs,
  2. coverage of the target population,
  3. flexibility of asking questions,
  4. respondents' willingness to participate and
  5. response accuracy.

Different methods create mode effects that change how respondents answer, and different methods have different advantages. The most common modes of administration can be summarized as:[3]

  • Telephone
  • Mail (post)
  • Online surveys
  • Personal in-home surveys
  • Personal mall or street intercept survey
  • Hybrids of the above.

Research designs

There are several different designs, or overall structures, that can be used in survey research. The three general types are cross-sectional, successive independent samples, and longitudinal studies.[2]

Cross-sectional studies

In cross-sectional studies, a sample (or samples) is drawn from the relevant population and studied once.[2] A cross-sectional study describes characteristics of that population at one time, but cannot give any insight as to the causes of population characteristics because it is a predictive, correlational design.

Successive independent samples studies

A successive independent samples design draws multiple random samples from a population at one or more times.[2] This design can study changes within a population, but not changes within individuals, because the same individuals are not surveyed more than once. Such studies cannot, therefore, reliably identify the causes of change over time. For successive independent samples designs to be effective, the samples must be drawn from the same population and must be equally representative of it. If the samples are not comparable, the changes between samples may be due to demographic characteristics rather than time. In addition, the questions must be asked in the same way so that responses can be compared directly.

Longitudinal studies

Longitudinal studies take measurements of the same random sample at multiple time points.[2] Unlike a successive independent samples design, this design measures the differences in individual participants' responses over time. This means that a researcher can potentially assess the reasons for response changes by assessing the differences in respondents' experiences. Longitudinal studies are the easiest way to assess the effect of a naturally occurring event, such as divorce, that cannot be tested experimentally. However, longitudinal studies are both expensive and difficult to do. It is harder to find a sample that will commit to a months- or years-long study than to a 15-minute interview, and participants frequently leave the study before the final assessment. This attrition of participants is not random, so samples can become less representative with successive assessments. To account for this, a researcher can compare the respondents who left the survey to those who did not, to see whether they are statistically different populations. Respondents may also try to remain self-consistent across waves, answering as they did before even when their views have changed.
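One way to compare respondents who left the panel with those who stayed is a two-proportion z-test on some demographic indicator. The following Python sketch uses hypothetical data and is only one of several possible attrition checks:

```python
from math import sqrt

def attrition_check(stayed, dropped):
    """Two-proportion z-test on a 0/1 demographic indicator (e.g. 1 =
    under 30) for panel completers vs. drop-outs. |z| > 1.96 suggests
    the groups differ at the 5% level, i.e. attrition is not random."""
    n1, n2 = len(stayed), len(dropped)
    p1, p2 = sum(stayed) / n1, sum(dropped) / n2
    p = (sum(stayed) + sum(dropped)) / (n1 + n2)   # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))     # standard error under H0
    return (p1 - p2) / se

# Hypothetical panel: 30% of 100 completers vs 50% of 50 drop-outs are under 30.
z = attrition_check([1] * 30 + [0] * 70, [1] * 25 + [0] * 25)
```

Here |z| exceeds 1.96, so the drop-outs plausibly differ from the completers and later waves may be less representative.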

Questionnaires

Questionnaires are the most commonly used tool in survey research. However, the results of a particular survey are worthless if the questionnaire is written inadequately.[2] Questionnaires should produce valid and reliable demographic variable measures and should yield valid and reliable measures of the individual differences that self-report scales generate.[2]

Questionnaires as tools

A variable category that is often measured in survey research is demographic variables, which are used to depict the characteristics of the people surveyed in the sample.[2] Demographic variables include such measures as ethnicity, socioeconomic status, race, and age.[2] Surveys often assess the preferences and attitudes of individuals, and many employ self-report scales to measure people's opinions and judgements about different items presented on a scale.[2] Self-report scales are also used to examine the disparities among people on scale items.[2] These self-report scales, which are usually presented in questionnaire form, are among the most used instruments in psychology, and thus it is important that the measures be constructed carefully while also being reliable and valid.[2]

Reliability and validity of self-report measures

Reliable measures of self-report are defined by their consistency.[2] Thus, a reliable self-report measure produces consistent results every time it is administered.[2] A test's reliability can be measured in a few ways.[2] First, one can calculate test-retest reliability, which entails administering the same questionnaire to a large sample at two different times.[2] For the questionnaire to be considered reliable, people in the sample do not have to score identically on each test; rather, their position in the score distribution should be similar for both the test and the retest.[2] Self-report measures will generally be more reliable when they have many items measuring a construct,[2] and when the factor being measured has greater variability among the individuals in the sample being tested.[2] Finally, there will be greater reliability when instructions for completing the questionnaire are clear and when there are limited distractions in the testing environment.[2] By contrast, a questionnaire is valid if what it measures is what it was originally intended to measure.[2] Construct validity of a measure is the degree to which it measures the theoretical construct that it was originally supposed to measure.[2]

It is important to note that there is evidence to suggest that self-report measures tend to be less accurate and reliable than alternative methods of assessing data (e.g., observational studies; for an example, see Prince et al., 2008).[4]

Composing a questionnaire

Six steps can be employed to construct a questionnaire that will produce reliable and valid results.[2] First, one must decide what kind of information should be collected.[2] Second, one must decide how to administer the questionnaire.[2] Third, one must construct a first draft of the questionnaire.[2] Fourth, the questionnaire should be revised.[2] Fifth, the questionnaire should be pretested.[2] Finally, the questionnaire should be edited and the procedures for its use specified.[2]

Guidelines for the effective wording of questions

The way that a question is phrased can have a large impact on how a research participant will answer it.[2] Thus, survey researchers must be conscious of their wording when writing survey questions.[2] It is important for researchers to keep in mind that different individuals, cultures, and subcultures can interpret certain words and phrases differently from one another.[2] There are two different types of questions that survey researchers use when writing a questionnaire: free-response questions and closed questions.[2] Free-response questions are open-ended, whereas closed questions are usually multiple-choice.[2] Free-response questions are beneficial because they allow the responder greater flexibility, but they are also very difficult to record and score, requiring extensive coding.[2] By contrast, closed questions can be scored and coded more easily, but they diminish the expressivity and spontaneity of the responder.[2] In general, the vocabulary of the questions should be very simple and direct, and most questions should be less than twenty words.[2] Each question should be edited for "readability" and should avoid leading or loaded formulations.[2] Finally, if multiple items are being used to measure one construct, some of the items should be worded in the opposite direction to guard against response bias.[2]
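Reverse-worded items must be recoded before scale scores are summed. A minimal sketch, assuming a 5-point Likert scale and hypothetical responses:

```python
def reverse_score(response, scale_max, scale_min=1):
    """Recode a reverse-worded item so high values point the same way
    as the rest of the scale (on a 1..5 Likert scale, 2 becomes 4)."""
    return scale_max + scale_min - response

raw = [4, 5, 2, 4]        # hypothetical answers to items 1-4 on a 5-point scale
reverse_items = {2}       # zero-based index of the reverse-worded item(s)
scored = [reverse_score(resp, 5) if i in reverse_items else resp
          for i, resp in enumerate(raw)]
# the construct score is then sum(scored)
```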

A respondent's answer to an open-ended question can be coded into a response scale afterwards,[3] or analysed using more qualitative methods.

Order of questions

Survey researchers should carefully construct the order of questions in a questionnaire.[2] For questionnaires that are self-administered, the most interesting questions should be at the beginning of the questionnaire to catch the respondent's attention, while demographic questions should be near the end.[2] By contrast, if a survey is being administered over the telephone or in person, demographic questions should be administered at the beginning of the interview to boost the respondent's confidence.[2] Question order also deserves attention because it can cause a survey response effect, in which one question affects how people respond to subsequent questions as a result of priming.

Nonresponse reduction

The following ways have been recommended for reducing nonresponse[5] in telephone and face-to-face surveys:[6]

  • Advance letter. A short letter is sent in advance to inform the sampled respondents about the upcoming survey. The style of the letter should be personalized but not overdone. First, it announces that a phone call will be made, or an interviewer wants to make an appointment to do the survey face-to-face. Second, the research topic will be described. Last, it allows both an expression of the surveyor's appreciation of cooperation and an opening to ask questions on the survey.
  • Training. The interviewers are thoroughly trained in how to ask respondents questions, how to work with computers and making schedules for callbacks to respondents who were not reached.
  • Short introduction. The interviewer should always start with a short introduction about themselves: their name, the institute they are working for, and the length and goal of the interview. It can also be useful to make clear that the interviewer is not selling anything, as this has been shown to lead to a slightly higher response rate.[7]
  • Respondent-friendly survey questionnaire. The questions asked must be clear, non-offensive and easy to respond to for the subjects under study.

Brevity is also often cited as increasing response rate. A 1996 literature review found mixed evidence to support this claim for both written and verbal surveys, concluding that other factors may often be more important.[8] A 2010 study looking at 100,000 online surveys found response rate dropped by about 3% at 10 questions and about 6% at 20 questions, with drop-off slowing (for example, only 10% reduction at 40 questions).[9] Other studies showed that quality of response degraded toward the end of long surveys.[10]

Interviewer effects

Survey methodologists have devoted much effort to determining the extent to which interviewee responses are affected by physical characteristics of the interviewer. The main interviewer traits that have been demonstrated to influence survey responses are race,[11] gender,[12] and relative body weight (BMI).[13] These interviewer effects are particularly pronounced when questions are related to the interviewer trait: race of interviewer has been shown to affect responses to measures regarding racial attitudes,[14] interviewer sex to affect responses to questions involving gender issues,[15] and interviewer BMI to affect answers to eating- and dieting-related questions.[16] While interviewer effects have been investigated mainly for face-to-face surveys, they have also been shown to exist for interview modes with no visual contact, such as telephone surveys, and in video-enhanced web surveys. The explanation typically provided for interviewer effects is social desirability bias: survey participants may attempt to project a positive self-image in an effort to conform to the norms they attribute to the interviewer asking the questions. Interviewer effects are one example of survey response effects.

References

  1. ^ a b Groves, R.M.; Fowler, F. J.; Couper, M.P.; Lepkowski, J.M.; Singer, E.; Tourangeau, R. (2009). Survey Methodology. New Jersey: John Wiley & Sons. ISBN 978-1-118-21134-2.
  2. ^ a b c d e f g h i j k l m n o p q r s t u v w x y z aa ab ac ad ae af ag ah ai aj ak al am an ao ap aq Shaughnessy, J.; Zechmeister, E.; Jeanne, Z. (2011). Research methods in psychology (9th ed.). New York, NY: McGraw Hill. pp. 161–175.
  3. ^ a b Mellenbergh, G.J. (2008). Chapter 9: Surveys. In H.J. Adèr & G.J. Mellenbergh (Eds.) (with contributions by D.J. Hand), Advising on Research Methods: A consultant's companion (pp. 183–209). Huizen, The Netherlands: Johannes van Kessel Publishing.
  4. ^ Prince, S. A.; Adamo, K. B.; Hamel, M.; Hardt, J.; Connor Gorber, S.; Tremblay, M. (2008). "A comparison of direct versus self-report measures for assessing physical activity in adults: a systematic review". International Journal of Behavioral Nutrition and Physical Activity. 5 (1): 56. doi:10.1186/1479-5868-5-56.
  5. ^ Lynn, P. (2008) "The problem of non-response", chapter 3, 35-55, in International Handbook of Survey Methodology (ed.s Edith de Leeuw, Joop Hox & Don A. Dillman). Erlbaum. ISBN 0-8058-5753-2
  6. ^ Dillman, D.A. (1978) Mail and telephone surveys: The total design method. Wiley. ISBN 0-471-21555-4
  7. ^ De Leeuw, E.D. (2001). "I am not selling anything: Experiments in telephone introductions". Kwantitatieve Methoden, 22, 41–48.
  8. ^ Bogen, Karen (1996). "The Effect of Questionnaire Length on Response Rates: A Review of the Literature" (PDF). Proceedings of the Section on Survey Research Methods. American Statistical Association: 1020–1025. Retrieved 2013-03-19.
  9. ^ "Does Adding One More Question Impact Survey Completion Rate?". 2010-12-10. Retrieved 2017-11-08.
  10. ^ "Respondent engagement and survey length: the long and the short of it". research. April 7, 2010. Retrieved 2013-10-03.
  11. ^ Hill, M.E (2002). "Race of the interviewer and perception of skin color: Evidence from the multi-city study of urban inequality". American Sociological Review. 67 (1): 99–108. doi:10.2307/3088935. JSTOR 3088935.
  12. ^ Flores-Macias, F.; Lawson, C. (2008). "Effects of interviewer gender on survey responses: Findings from a household survey in Mexico". International Journal of Public Opinion Research. 20 (1): 100–110. doi:10.1093/ijpor/edn007.
  13. ^ Eisinga, R.; Te Grotenhuis, M.; Larsen, J.K.; Pelzer, B.; Van Strien, T. (2011). "BMI of interviewer effects". International Journal of Public Opinion Research. 23 (4): 530–543. doi:10.1093/ijpor/edr026.
  14. ^ Anderson, B.A.; Silver, B.D.; Abramson, P.R. (1988). "The effects of the race of the interviewer on race-related attitudes of black respondents in SRC/CPS national election studies". Public Opinion Quarterly. 52 (3): 1–28. doi:10.1086/269108.
  15. ^ Kane, E.W.; MacAulay, L.J. (1993). "Interviewer gender and gender attitudes". Public Opinion Quarterly. 57 (1): 1–28. doi:10.1086/269352.
  16. ^ Eisinga, R.; Te Grotenhuis, M.; Larsen, J.K.; Pelzer, B. (2011). "Interviewer BMI effects on under- and over-reporting of restrained eating. Evidence from a national Dutch face-to-face survey and a postal follow-up". International Journal of Public Health. 57 (3): 643–647. doi:10.1007/s00038-011-0323-z. PMC 3359459. PMID 22116390.

Further reading

  • Abramson, J.J. and Abramson, Z.H. (1999). Survey Methods in Community Medicine: Epidemiological Research, Programme Evaluation, Clinical Trials (5th edition). London: Churchill Livingstone/Elsevier Health Sciences ISBN 0-443-06163-7
  • Adèr, H. J., Mellenbergh, G. J., and Hand, D. J. (2008). Advising on research methods: A consultant's companion. Huizen, The Netherlands: Johannes van Kessel Publishing.
  • Andres, Lesley (2012). "Designing and Doing Survey Research". London: Sage.
  • Dillman, D.A. (1978) Mail and telephone surveys: The total design method. New York: Wiley. ISBN 0-471-21555-4
  • Engel. U., Jann, B., Lynn, P., Scherpenzeel, A. and Sturgis, P. (2014). Improving Survey Methods: Lessons from Recent Research. New York: Routledge. ISBN 978-0-415-81762-2
  • Groves, R.M. (1989). Survey Errors and Survey Costs Wiley. ISBN 0-471-61171-9
  • Griffith, James. (2014) "Survey Research in Military Settings." in Routledge Handbook of Research Methods in Military Studies edited by Joseph Soeters, Patricia Shields and Sebastiaan Rietjens.pp. 179–193. New York: Routledge.
  • Leung, Wai-Ching (2001) "Conducting a Survey", in Student BMJ, (British Medical Journal, Student Edition), May 2001
  • Ornstein, M.D. (1998). "Survey Research." Current Sociology 46(4): iii-136.
  • Prince, S. a, Adamo, K. B., Hamel, M., Hardt, J., Connor Gorber, S., & Tremblay, M. (2008). A comparison of direct versus self-report measures for assessing physical activity in adults: a systematic review. International Journal of Behavioral Nutrition and Physical Activity, 5(1), 56. http://doi.org/10.1186/1479-5868-5-56
  • Shaughnessy, J. J., Zechmeister, E. B., & Zechmeister, J. S. (2006). Research Methods in Psychology (Seventh Edition ed.). McGraw–Hill Higher Education. ISBN 0-07-111655-9 (pp. 143–192)
  • Singh, S. (2003). Advanced Sampling Theory with Applications: How Michael Selected Amy. Kluwer Academic Publishers, The Netherlands.
  • Soeters, Joseph; Shields, Patricia and Rietjens, Sebastiaan.(2014). Routledge Handbook of Research Methods in Military Studies New York: Routledge.

External links

  • Surveys at Curlie

Census

A census is the procedure of systematically acquiring and recording information about the members of a given population. The term is used mostly in connection with national population and housing censuses; other common censuses include agriculture, business, and traffic censuses. The United Nations defines the essential features of population and housing censuses as "individual enumeration, universality within a defined territory, simultaneity and defined periodicity", and recommends that population censuses be taken at least every 10 years. United Nations recommendations also cover census topics to be collected, official definitions, classifications, and other useful information to co-ordinate international practice.

The word is of Latin origin: during the Roman Republic, the census was a list that kept track of all adult males fit for military service. The modern census is essential to international comparisons of any kind of statistics, and censuses collect data on many attributes of a population, not just how many people there are. Censuses typically began as the only method of collecting national demographic data and are now part of a larger system of different surveys. Although population estimates remain an important function of a census, including the exact geographic distribution of the population, statistics can also be produced about combinations of attributes, e.g., education by age and sex in different regions. Current administrative data systems allow for other approaches to enumeration with the same level of detail but raise concerns about privacy and the possibility of biasing estimates.

A census can be contrasted with sampling, in which information is obtained only from a subset of a population; typically, main population estimates are updated by such intercensal estimates.
Modern census data are commonly used for research, business marketing, and planning, and as a baseline for designing sample surveys by providing a sampling frame such as an address register. Census counts are necessary to adjust samples to be representative of a population by weighting them as is common in opinion polling. Similarly, stratification requires knowledge of the relative sizes of different population strata which can be derived from census enumerations. In some countries, the census provides the official counts used to apportion the number of elected representatives to regions (sometimes controversially – e.g., Utah v. Evans). In many cases, a carefully chosen random sample can provide more accurate information than attempts to get a population census.
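The weighting adjustment mentioned above (often called post-stratification) can be sketched as follows; the strata, census shares, and sample counts are hypothetical:

```python
def poststratification_weights(sample_counts, census_shares):
    """Per-respondent weights that make each stratum's weighted share in
    the sample match its share in the census.

    sample_counts: {stratum: respondents}; census_shares: {stratum: share}.
    """
    n_total = sum(sample_counts.values())
    return {s: census_shares[s] * n_total / n
            for s, n in sample_counts.items()}

# Hypothetical: the census says 75% female, but the sample is 40% female.
weights = poststratification_weights({"F": 40, "M": 60},
                                     {"F": 0.75, "M": 0.25})
# weighted counts: 40 * weights["F"] = 75 and 60 * weights["M"] = 25
```

After weighting, the sample's weighted composition matches the census benchmark, which is the adjustment routinely applied in opinion polling.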

Computer-assisted telephone interviewing

Computer-assisted telephone interviewing (CATI) is a telephone surveying technique in which the interviewer follows a script provided by a software application. It is a structured system of microdata collection by telephone that speeds up the collection and editing of microdata and also permits the interviewer to educate the respondents on the importance of timely and accurate data. The software is able to customize the flow of the questionnaire based on the answers provided, as well as information already known about the participant. It is used in B2B services and corporate sales.

CATI may function in the following manner:

  • A computerized questionnaire is administered to respondents over the telephone.
  • The interviewer sits in front of a computer screen.
  • Upon command, the computer dials the telephone number to be called.
  • When contact is made, the interviewer reads the questions posed on the computer screen and records the respondent's answers directly into the computer.
  • Interim and update reports can be compiled instantaneously, as the data are being collected.
  • CATI software has built-in logic, which also enhances data accuracy.
  • The program will personalize questions and control for logically incorrect answers, such as percentage answers that do not add up to 100 percent.
  • The software has built-in branching logic, which will skip questions that are not applicable or will probe for more detail when warranted.
  • Automated dialers are usually deployed to lower the waiting time for the interviewer, as well as to record the interview for quality purposes.
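The branching and consistency checks described above can be sketched as simple functions; the question ids, skip rules, and tolerance here are hypothetical, not features of any particular CATI product:

```python
def validate_percentages(answers, tolerance=0.5):
    """Flag percentage breakdowns that do not add up to 100."""
    return abs(sum(answers) - 100) <= tolerance

def next_question(current_id, answer, skip_rules):
    """Branching logic: skip_rules maps (question id, answer) to the next
    question id; the default is to proceed sequentially."""
    return skip_rules.get((current_id, answer), current_id + 1)

# Hypothetical script: answering "no" to question 1 skips follow-up question 2.
skip_rules = {(1, "no"): 3}
assert next_question(1, "no", skip_rules) == 3
assert next_question(1, "yes", skip_rules) == 2
assert not validate_percentages([40, 30, 20])   # sums to 90: flag it
assert validate_percentages([40, 30, 30])
```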

Data collection

Data collection is the process of gathering and measuring information on targeted variables in an established system, which then enables one to answer relevant questions and evaluate outcomes. Data collection is a component of research in all fields of study including physical and social sciences, humanities, and business. While methods vary by discipline, the emphasis on ensuring accurate and honest collection remains the same. The goal for all data collection is to capture quality evidence that allows analysis to lead to the formulation of convincing and credible answers to the questions that have been posed.

Data editing

Data editing is defined as the process involving the review and adjustment of collected survey data. The purpose is to control the quality of the collected data. Data editing can be performed manually, with the assistance of a computer or a combination of both.

Interview

An interview is a conversation where questions are asked and answers are given. In common parlance, the word "interview" refers to a one-on-one conversation between an interviewer and an interviewee. The interviewer asks questions to which the interviewee responds, usually so information may be transferred from interviewee to interviewer (and any other audience of the interview). Sometimes, information can be transferred in both directions. It is a communication, unlike a speech, which produces a one-way flow of information.

Interviews usually take place face-to-face and in person, although modern communications technologies such as the Internet have enabled conversations to happen in which parties are separated geographically, such as with videoconferencing software, and telephone interviews can happen without visual contact. Interviews almost always involve spoken conversation between two or more parties, although in some instances a "conversation" can happen between two persons who type questions and answers back and forth.

Interviews can range from unstructured, free-wheeling, and open-ended conversations in which there is no predetermined plan with prearranged questions, to highly structured conversations in which specific questions occur in a specified order. They can follow diverse formats; for example, in a ladder interview, a respondent's answers typically guide subsequent questions, with the object being to explore a respondent's subconscious motives. Typically the interviewer has some way of recording the information that is gleaned from the interviewee, often by writing with a pencil and paper, sometimes transcribing with a video or audio recorder, depending on the context, the extent of the information, and the length of the interview. Interviews have a duration in time, in the sense that the interview has a beginning and an ending.

The traditional two-person interview format, sometimes called a one-on-one interview, permits direct questions and followups, which enables an interviewer to better gauge the accuracy of responses. It is a flexible arrangement in the sense that subsequent questions can be tailored to clarify earlier answers. Further, it eliminates any possible distortion by having third parties present.

Face-to-face interviewing makes it easier for people to interact and form a connection, and it helps both the potential employer and the potential hire evaluate whom they might be interacting with. Further, face-to-face interview sessions can be more enjoyable.

Interview (research)

An interview in qualitative research is a conversation where questions are asked to elicit information. The interviewer is usually a professional or paid researcher, sometimes trained, who poses questions to the interviewee, in an alternating series of usually brief questions and answers. They can be contrasted with focus groups in which an interviewer questions a group of people and observes the resulting conversation between interviewees, or surveys which are more anonymous and limit respondents to a range of predetermined answer choices. In phenomenological or ethnographic research, interviews are used to uncover the meanings of central themes in the life world of the subjects from their own point of view.

Paid survey

A paid or incentivized survey is a type of statistical survey where the participants/members are rewarded through an incentive program, generally entry into a sweepstakes program or a small cash reward, for completing one or more surveys.

Participation bias

Participation bias or non-response bias is a phenomenon in which the results of elections, studies, polls, etc. become non-representative because the participants disproportionately possess certain traits which affect the outcome. These traits mean the sample is systematically different from the target population, potentially resulting in biased estimates. For instance, a study found that those who refused to answer a survey on AIDS tended to be "older, attend church more often, are less likely to believe in the confidentiality of surveys, and have lower sexual self disclosure." It may occur due to several factors, as outlined in Deming (1990). Non-response bias can be a problem in longitudinal research due to attrition during the study.

Political forecasting

Political forecasting aims at predicting the outcome of elections.

Respondent

A respondent is a person who is called upon to issue a response to a communication made by another. The term is used in legal contexts, in survey methodology, and in psychological conditioning.

Sampling (statistics)

In statistics, quality assurance, and survey methodology, sampling is the selection of a subset (a statistical sample) of individuals from within a statistical population to estimate characteristics of the whole population. Statisticians attempt to collect samples that are representative of the population in question. Two advantages of sampling are lower cost and faster data collection compared with measuring the entire population.

Each observation measures one or more properties (such as weight, location, colour) of observable bodies distinguished as independent objects or individuals. In survey sampling, weights can be applied to the data to adjust for the sample design, particularly in stratified sampling. Results from probability theory and statistical theory are employed to guide the practice. In business and medical research, sampling is widely used for gathering information about a population. Acceptance sampling is used to determine if a production lot of material meets the governing specifications.
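The weighting idea mentioned above can be made concrete with a small sketch. In stratified sampling, each sampled unit is weighted by N_h / n_h (the number of population units it represents in its stratum); the stratum sizes and sampled values below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Illustrative sketch of a design-weighted estimate from a stratified
# sample. Stratum population sizes N_h are assumed known; each sampled
# unit carries the weight N_h / n_h.

def stratified_mean(strata):
    """strata: list of (population_size, sample_values) pairs.
    Returns the design-weighted estimate of the population mean."""
    total_n = sum(pop for pop, _ in strata)
    weighted_sum = 0.0
    for pop, values in strata:
        weight = pop / len(values)      # each unit represents pop/n_h people
        weighted_sum += weight * sum(values)
    return weighted_sum / total_n

# Hypothetical population: 3,000 urban and 1,000 rural households,
# with sampled incomes in thousands.
urban = (3000, [52, 48, 50, 55])
rural = (1000, [30, 34])
print(stratified_mean([urban, rural]))  # weighted toward the larger urban stratum
```

Note that the unweighted mean of the six observations would over-represent the urban stratum, which was sampled at a higher rate; the weights correct for the sample design.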

Self-report study

A self-report study is a type of survey, questionnaire, or poll in which respondents read the question and select a response by themselves without researcher interference. A self-report is any method which involves asking a participant about their feelings, attitudes, beliefs and so on. Examples of self-reports are questionnaires and interviews; self-reports are often used as a way of gaining participants' responses in observational studies and experiments.

Self-report studies have validity problems. Patients may exaggerate symptoms to make their situation seem worse, or they may under-report the severity or frequency of symptoms to minimize their problems. Patients might also simply be mistaken or misremember the material covered by the survey. Nonetheless, some individuals do answer these self-report surveys accurately.

Social desirability bias

In social science research, social desirability bias is a type of response bias: the tendency of survey respondents to answer questions in a manner that will be viewed favorably by others. It can take the form of over-reporting "good" behavior or under-reporting "bad" or undesirable behavior. This tendency poses a serious problem for research based on self-reports, especially questionnaires. The bias interferes with the interpretation of average tendencies as well as individual differences.

Topics where socially desirable responding (SDR) is of special concern are self-reports of abilities, personality, sexual behavior, and drug use. When confronted with the question "How often do you masturbate?," for example, respondents may be pressured by the societal taboo against masturbation, and either under-report the frequency or avoid answering the question. Therefore, the mean rates of masturbation derived from self-report surveys are likely to be severe underestimates.

When confronted with the question, "Do you use drugs/illicit substances?" the respondent may be influenced by the fact that controlled substances, including the more commonly used marijuana, are generally illegal. Respondents may feel pressured to deny any drug use or rationalize it, e.g. "I only smoke marijuana when my friends are around." The bias can also influence reports of number of sexual partners. In fact, the bias may operate in opposite directions for different subgroups: Whereas men tend to inflate the numbers, women tend to underestimate theirs. In either case, the mean reports from both groups are likely to be distorted by social desirability bias.

Other topics that are sensitive to social desirability bias include:

Self-reported personality traits, which correlate strongly with socially desirable responding

Personal income and earnings, often inflated when low and deflated when high

Feelings of low self-worth and/or powerlessness, often denied

Excretory functions, often approached uncomfortably, if discussed at all

Compliance with medicinal dosing schedules, often inflated

Religion, often either avoided or uncomfortably approached

Patriotism, either inflated or, if denied, done so with a fear of other party's judgment

Bigotry and intolerance, often denied, even if it exists within the responder

Intellectual achievements, often inflated

Physical appearance, either inflated or deflated

Acts of real or imagined physical violence, often denied

Indicators of charity or "benevolence," often inflated

Illegal acts, often denied

Voter turnout, often over-reported

Survey (human research)

In research of human subjects, a survey is a list of questions aimed at extracting specific data from a particular group of people. Surveys may be conducted by phone, mail, via the internet, and sometimes face-to-face on busy street corners or in malls. Surveys are used to increase knowledge in fields such as social research and demography.

Survey research is often used to assess thoughts, opinions, and feelings. Surveys can be specific and limited, or they can have more global, widespread goals. Psychologists and sociologists often use surveys to analyze behavior, while surveys also serve the more pragmatic needs of the media, political candidates, public health officials, professional organizations, and advertising and marketing directors. A survey consists of a predetermined set of questions given to a sample. With a representative sample, that is, one that is representative of the larger population of interest, one can describe the attitudes of the population from which the sample was drawn, compare the attitudes of different populations, and look for changes in attitudes over time. Good sample selection is key because it allows one to generalize the findings from the sample to the population, which is the whole purpose of survey research.

Survey data collection

With the application of probability sampling in the 1930s, surveys became a standard tool for empirical research in social sciences, marketing, and official statistics. The methods involved in survey data collection are any of a number of ways in which data can be collected for a statistical survey. These are methods that are used to collect information from a sample of individuals in a systematic way. First there was the change from traditional paper-and-pencil interviewing (PAPI) to computer-assisted interviewing (CAI). Now, face-to-face surveys (CAPI), telephone surveys (CATI), and mail surveys (CASI, CSAQ) are increasingly replaced by web surveys.

Survey sampling

In statistics, survey sampling describes the process of selecting a sample of elements from a target population to conduct a survey.

The term "survey" may refer to many different types or techniques of observation. In survey sampling it most often involves a questionnaire used to measure the characteristics and/or attitudes of people. Different ways of contacting members of a sample once they have been selected is the subject of survey data collection. The purpose of sampling is to reduce the cost and/or the amount of work that it would take to survey the entire target population. A survey that measures the entire target population is called a census. A sample refers to a group or section of a population from which information is to be obtained

Survey samples can be broadly divided into two types: probability samples and non-probability samples. Probability-based samples implement a sampling plan with specified probabilities (perhaps adapted probabilities specified by an adaptive procedure). Probability-based sampling allows design-based inference about the target population: the inferences are based on a known objective probability distribution that was specified in the study protocol. Inferences from probability-based surveys may still suffer from many types of bias.

Surveys that are not based on probability sampling have greater difficulty measuring their bias or sampling error. Surveys based on non-probability samples often fail to represent the people in the target population. In academic and government survey research, probability sampling is a standard procedure. In the United States, the Office of Management and Budget's "List of Standards for Statistical Surveys" states that federally funded surveys must be performed by:

selecting samples using generally accepted statistical methods (e.g., probabilistic methods that can provide estimates of sampling error). Any use of nonprobability sampling methods (e.g., cut-off or model-based samples) must be justified statistically and be able to measure estimation error.

Random sampling and design-based inference are supplemented by other statistical methods, such as model-assisted sampling and model-based sampling. For example, many surveys have substantial amounts of nonresponse: even though the units are initially chosen with known probabilities, the nonresponse mechanisms are unknown. For surveys with substantial nonresponse, statisticians have proposed statistical models with which the data sets are analyzed.
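The design-based inference described above can be illustrated with a small sketch of the Horvitz-Thompson estimator, a standard estimator for this setting (not something specified in this article): each observed value is inflated by the inverse of its known inclusion probability. The values and probabilities below are hypothetical.

```python
# Sketch of design-based estimation with known inclusion probabilities
# (the Horvitz-Thompson estimator of a population total).

def horvitz_thompson_total(values, inclusion_probs):
    """Estimate a population total: each observed value y_i is
    weighted by 1/pi_i, the inverse of its selection probability."""
    return sum(y / p for y, p in zip(values, inclusion_probs))

# Hypothetical design with two sampling rates: units drawn with
# probability 0.1 each stand in for 10 population units, and units
# drawn with probability 0.5 each stand in for 2.
values = [12.0, 8.0, 20.0]
probs = [0.1, 0.1, 0.5]
print(horvitz_thompson_total(values, probs))  # 12/0.1 + 8/0.1 + 20/0.5 = 240.0
```

Because the inclusion probabilities come from the sampling plan itself, the estimator is unbiased under the design with no assumptions about the population; this is what distinguishes design-based inference from the model-based approaches mentioned above.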

Issues related to survey sampling are discussed in several sources, including Salant and Dillman (1994).

Time-use survey

A time-use survey is a statistical survey which aims to report data on how, on average, people spend their time.

World Association for Public Opinion Research

The World Association for Public Opinion Research (WAPOR) is an international professional association of researchers in the fields of communication and survey research. It is a member organization of the International Social Science Council.


This page is based on a Wikipedia article. Text is available under the CC BY-SA 3.0 license; additional terms may apply.