
Survey methodology

A hybrid field drawing on statistics and the social sciences, survey methodology studies the sampling of individuals from a population and data collection techniques (e.g., questionnaire design) with a view toward making statistical inferences about the population represented by the sample and the constructs represented by the measures (i.e., survey questions) used. Public opinion polls, public health surveys, market research surveys, government surveys and censuses are all examples of quantitative research that use contemporary survey methodology to answer questions about a population. Although censuses do not involve a sample, they do include other aspects of survey methodology, such as questionnaires, interviewers, and nonresponse follow-up techniques. Surveys provide important information for many fields of public information and research, e.g., marketing research, psychology, health and sociology.[1]

A single survey is made of at least a sample (or full population in the case of a census), a method of data collection (e.g., a questionnaire) and individual questions or items that become data that can be analyzed statistically. A single survey may focus on different types of topics such as preferences (e.g., for a presidential candidate), opinions (e.g., should abortion be legal?), behavior (e.g., smoking and alcohol use), or factual information (e.g., income), depending on its purpose. Since survey research is almost always based on a sample of the population, the success of the research depends on the representativeness of the sample with respect to a target population of interest to the researcher. That target population can range from the general population of a given country to specific groups of people within that country, to a membership list of a professional organization, or a list of students enrolled in a school system (see also sampling (statistics) and survey sampling).

Survey methodology as a scientific field seeks to identify principles about sample design, data collection instruments, statistical adjustment of data, data processing, and final data analysis that can create systematic and random survey errors. Survey errors are sometimes analyzed in connection with survey cost; the trade-off is sometimes framed as improving quality within cost constraints, or alternatively, reducing costs for a fixed level of quality. Survey methodology is both a scientific field and a profession, meaning that some professionals in the field focus on studying survey errors empirically while others design surveys to reduce them. For survey designers, the task involves making a large set of decisions about thousands of individual features of a survey in order to improve it.[2]

The most important methodological challenges for a survey methodologist include making decisions on how to:[2]

  • Identify and select potential sample members.
  • Contact sampled individuals and collect data from those who are hard to reach (or reluctant to respond).
  • Evaluate and test questions.
  • Select the mode for posing questions and collecting responses.
  • Train and supervise interviewers (if they are involved).
  • Check data files for accuracy and internal consistency.
  • Adjust survey estimates to correct for identified errors (a minimal weighting sketch follows this list).
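
One common form of such adjustment is post-stratification weighting: respondents are reweighted so that the weighted sample matches known population distributions (for example, from a census) on variables such as region, age or sex. The sketch below is a minimal Python illustration of the idea; the group shares, incomes and variable names are invented for the example and do not come from any real survey.

```python
# Minimal post-stratification sketch: reweight respondents so that the weighted
# sample matches assumed population shares for one grouping variable.
from collections import Counter

def poststratify(respondents, group_key, population_shares):
    """Return one weight per respondent so weighted group shares match the population."""
    counts = Counter(r[group_key] for r in respondents)
    n = len(respondents)
    weights = []
    for r in respondents:
        sample_share = counts[r[group_key]] / n
        weights.append(population_shares[r[group_key]] / sample_share)
    return weights

# Toy data: the sample over-represents the "urban" group (3 of 4 respondents).
sample = [{"region": "urban", "income": 52}, {"region": "urban", "income": 61},
          {"region": "urban", "income": 47}, {"region": "rural", "income": 35}]
shares = {"urban": 0.5, "rural": 0.5}  # assumed known from external data

w = poststratify(sample, "region", shares)
weighted_mean = sum(wi * r["income"] for wi, r in zip(w, sample)) / sum(w)
print(round(weighted_mean, 2))  # 44.17, closer to a 50/50 population than the raw mean of 48.75
```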


Selecting samples

Survey samples can be broadly divided into two types: probability samples and non-probability samples. Stratified sampling is a method of probability sampling in which sub-populations (strata) within an overall population are identified and each is represented in the selected sample in a balanced way (for example, in proportion to its share of the population).
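
As a concrete illustration, the following is a minimal sketch of stratified random sampling with proportional allocation, written in Python. The sampling frame, stratum variable and sizes are invented for the example; real frames come from sources such as address registers or enrolment lists.

```python
# Minimal sketch of stratified random sampling with proportional allocation.
import random
from collections import defaultdict

def stratified_sample(frame, stratum_key, n):
    """Draw about n units, allocating them to strata in proportion to stratum size."""
    strata = defaultdict(list)
    for unit in frame:
        strata[unit[stratum_key]].append(unit)
    total = len(frame)
    sample = []
    for units in strata.values():
        k = round(n * len(units) / total)          # proportional allocation (rounded)
        sample.extend(random.sample(units, min(k, len(units))))
    return sample

# Toy frame: 1,000 students spread over three school types.
frame = ([{"school": "primary", "id": i} for i in range(600)] +
         [{"school": "middle", "id": i} for i in range(300)] +
         [{"school": "high", "id": i} for i in range(100)])
picked = stratified_sample(frame, "school", 100)
print(len(picked))  # 100: roughly 60 primary, 30 middle and 10 high school students
```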

Modes of data collection

There are several ways of administering a survey. The choice between administration modes is influenced by several factors, including 1) costs, 2) coverage of the target population, 3) flexibility of asking questions, 4) respondents' willingness to participate and 5) response accuracy. Different modes produce mode effects that change how respondents answer, and each mode has its own advantages. The most common modes of administration can be summarized as:[3]

  • Telephone
  • Mail (post)
  • Online surveys
  • Personal in-home surveys
  • Personal mall or street intercept surveys
  • Hybrids of the above.

Cross-sectional and longitudinal surveys

There is a distinction between one-time (cross-sectional) surveys, which involve a single questionnaire or interview administered to each sample member, and surveys which repeatedly collect information from the same sample members over time. The latter are known as longitudinal surveys. Longitudinal surveys have considerable analytical advantages, but they are also challenging to implement successfully. Consequently, specialist methods have been developed to select longitudinal samples, to collect data repeatedly, to keep track of sample members over time, to keep respondents motivated to participate, and to process and analyze longitudinal survey data.[4]

Response formats

Usually, a survey consists of a number of questions that the respondent has to answer in a set format. A distinction is made between open-ended and closed-ended questions. An open-ended question asks the respondent to formulate his or her own answer, whereas a closed-ended question has the respondent pick an answer from a given number of options. The response options for a closed-ended question should be exhaustive and mutually exclusive. Four types of response scales for closed-ended questions are distinguished:

  • Dichotomous, where the respondent has two options
  • Nominal-polytomous, where the respondent has more than two unordered options
  • Ordinal-polytomous, where the respondent has more than two ordered options
  • (Bounded) continuous, where the respondent is presented with a continuous scale

A respondent's answer to an open-ended question can be coded into a response scale afterwards,[3] or analysed using more qualitative methods.
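
To make the distinction concrete, the sketch below models a closed-ended question and checks whether an answer is valid for each of the four response scales. This is an illustrative data structure only; the class name, fields and example questions are assumptions, not part of any standard survey software.

```python
# Minimal sketch of closed-ended response formats and answer validation.
from dataclasses import dataclass

@dataclass
class ClosedQuestion:
    text: str
    scale: str             # "dichotomous", "nominal", "ordinal", or "continuous"
    options: list = None    # for dichotomous / nominal-polytomous / ordinal-polytomous
    bounds: tuple = None    # (low, high) for a bounded continuous scale

    def accepts(self, answer):
        if self.scale == "continuous":
            low, high = self.bounds
            return low <= answer <= high
        # Closed-ended options should be exhaustive and mutually exclusive,
        # so a valid answer is exactly one of them.
        return answer in self.options

smoking = ClosedQuestion("Do you smoke?", "dichotomous", options=["yes", "no"])
opinion = ClosedQuestion("Abortion should be legal.", "ordinal",
                         options=["disagree", "neutral", "agree"])
rating = ClosedQuestion("Rate your satisfaction.", "continuous", bounds=(0.0, 10.0))
print(smoking.accepts("yes"), opinion.accepts("maybe"), rating.accepts(7.5))  # True False True
```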

Advantages and disadvantages

Surveys suit many research questions well, but they are often not the best solution. Making a wise decision involves understanding the trade-offs among the different sources of survey error, the costs (and funds available), and the ultimate uses of the statistics to be calculated from the data collected. A general list of advantages and disadvantages of surveys as data collection tools is given below, but it is important to consider the specifics of a given situation to determine whether a survey is the best choice.

Advantages

  • They are relatively easy and inexpensive to administer for the simplest of designs.
  • Simply administering a survey does not require a lot of technical expertise, if quality of the data is not a major concern.
  • If conducted remotely, can reduce or obviate geographical dependence.
  • Useful in describing the characteristics of a large population assuming the sampling is valid.
  • Can be administered remotely via the Web, mobile devices, mail, e-mail, telephone, etc.
  • Efficient at collecting information from a large number of respondents for a fixed cost compared to other methods.
  • Statistical techniques can be applied to the survey data to determine validity, reliability, and statistical significance even when analyzing multiple variables.
  • Many questions can be asked about a given topic giving considerable flexibility to the analysis.
  • Support both between-subjects and within-subjects study designs.
  • A wide range of information can be collected (e.g., attitudes, values, beliefs, and behaviour).
  • Compared to qualitative interviewing, standardized survey questions provide all the participants with a standardized stimulus.

Disadvantages

The validity and reliability (i.e., bias and variance) of survey data may depend on the following:

  • Respondents' motivation, honesty, memory, and ability to respond.
  • Respondents may not be fully aware of their reasons for any given action, making surveys weak methods for things that respondents cannot report consciously and accurately.
  • Structured surveys, particularly those with closed-ended questions, may have low validity when researching affective variables.
  • Self-selection bias. Although the individuals chosen to participate in surveys are usually randomly sampled, errors due to nonresponse may exist (see also chapter 13 of Adèr et al. (2008) for more information on how to deal with nonresponse bias in survey estimates). That is, people who choose to respond to the survey may differ from those who do not respond, thus biasing the estimates; a small simulation after this list illustrates the effect.

For example, people who are rarely at home are more difficult to contact than those who are at home a lot, and are therefore likely to be missed by a face-to-face survey or by a telephone survey that uses only landline numbers.

  • The overall inference is limited by the sampling frame chosen. For example, polls or surveys that are conducted by calling a random sample of publicly available telephone numbers will not include the responses of people with unlisted telephone numbers or with only mobile (cell) phone numbers. Even random-digit-dial sampling frames of landlines have been shown to under-represent certain individuals (and their behaviors), specifically those who only have a cell phone.
  • Question and questionnaire design: Survey answer choices can lead to vague data because at times they are relative only to a personal, abstract notion of "strength of choice". For instance, the choice "moderately agree" may mean different things to different subjects, and to anyone interpreting the data for correlation. Even 'yes' or 'no' answers are problematic because subjects may, for instance, answer "no" if the option "only once" is not available.
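
The simulation below is a minimal illustration of this nonresponse problem. The population size, the share of people who are often at home, their smoking rates and the contact rates are all invented for the example; the point is only that when the chance of responding is related to the answer, the respondent-only estimate is biased.

```python
# Minimal simulation of nonresponse bias: people who are rarely at home are
# harder to contact and, in this invented scenario, smoke more, so the
# respondent-only estimate understates the true smoking rate.
import random

random.seed(1)
population = []
for _ in range(100_000):
    often_home = random.random() < 0.7
    smokes = random.random() < (0.20 if often_home else 0.40)
    population.append((often_home, smokes))

sample = random.sample(population, 2_000)          # a probability sample of the frame
# Assumed contact/response rates: 90% if often at home, 30% otherwise.
respondents = [p for p in sample if random.random() < (0.9 if p[0] else 0.3)]

true_rate = sum(smokes for _, smokes in population) / len(population)
estimate = sum(smokes for _, smokes in respondents) / len(respondents)
print(f"true smoking rate {true_rate:.3f}, respondent-only estimate {estimate:.3f}")
# Expect roughly 0.26 vs 0.22: the estimate is biased downward.
```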

Nonresponse reduction

The following ways have been recommended for reducing nonresponse[5] in telephone and face-to-face surveys:[6]

  • Advance letter. A short letter is sent in advance to inform the sampled respondents about the upcoming survey. The style of the letter should be personalized but not overdone. First, it announces that a phone call will be made, or that an interviewer wants to make an appointment to do the survey face-to-face. Second, the research topic is described. Last, it allows both an expression of the surveyor's appreciation of cooperation and an opening to ask questions about the survey.
  • Training. The interviewers are thoroughly trained in how to ask respondents questions, how to work with computers and how to schedule callbacks to respondents who were not reached.
  • Short introduction. The interviewer should always start with a short introduction about himself or herself, giving his or her name, the institute he or she is working for, the length of the interview and the goal of the interview. It can also be useful to make clear that the interviewer is not selling anything: this has been shown to lead to a slightly higher response rate.[7]
  • Respondent-friendly survey questionnaire. The questions asked must be clear, non-offensive and easy to respond to for the subjects under study.

Brevity is also often cited as increasing response rates. A 1996 literature review found mixed evidence to support this claim for both written and verbal surveys, concluding that other factors may often be more important.[8] A 2010 study by SurveyMonkey looking at 100,000 of the online surveys it hosts found that the response rate dropped by about 3% at 10 questions and about 6% at 20 questions, with the drop-off slowing thereafter (for example, only a 10% reduction at 40 questions).[9] Other studies showed that the quality of responses degraded toward the end of long surveys.[10]

Other methods to increase response rates

  • financial incentives
    • paid in advance
    • paid at completion
  • non-monetary incentives
    • commodity giveaways (pens, notepads)
    • entry into a lottery, draw or contest
    • discount coupons
    • promise of contribution to charity
  • preliminary notification
  • foot-in-the-door techniques – start with a small inconsequential request
  • personalization of the request – address specific individuals
  • follow-up requests – multiple requests
  • emotional appeals
  • bids for sympathy
  • convince respondent that they can make a difference
  • guarantee anonymity
  • legal compulsion (certain government-run surveys)

Interviewer effects

Survey methodologists have devoted much effort to determining the extent to which interviewee responses are affected by physical characteristics of the interviewer. The main interviewer traits that have been demonstrated to influence survey responses are race,[11] gender,[12] and relative body weight (BMI).[13] These interviewer effects are particularly pronounced when questions are related to the interviewer trait. Hence, race of interviewer has been shown to affect responses to measures regarding racial attitudes,[14] interviewer sex responses to questions involving gender issues,[15] and interviewer BMI answers to eating and dieting-related questions.[16] While interviewer effects have been investigated mainly for face-to-face surveys, they have also been shown to exist for interview modes with no visual contact, such as telephone surveys, and in video-enhanced web surveys. The explanation typically provided for interviewer effects is social desirability: survey participants may attempt to project a positive self-image in an effort to conform to the norms they attribute to the interviewer asking the questions.

See also

  • Data Documentation Initiative
  • Enterprise feedback management (EFM)
  • Likert Scale
  • Official statistics
  • Paid survey
  • Quantitative marketing research
  • Questionnaire construction
  • Ratio estimator
  • Social research

Notes

  1. ^ http://whatisasurvey.info/
  2. ^ a b Groves, R.M.; Fowler, F. J.; Couper, M.P.; Lepkowski, J.M.; Singer, E.; Tourangeau, R. (2009). Survey Methodology. New Jersey: John Wiley & Sons. ISBN 978-1-118-21134-2.
  3. ^ a b Mellenbergh, G.J. (2008). Chapter 9: Surveys. In H.J. Adèr & G.J. Mellenbergh (Eds.) (with contributions by D.J. Hand), Advising on Research Methods: A consultant's companion (pp. 183–209). Huizen, The Netherlands: Johannes van Kessel Publishing.
  4. ^ Lynn, P. (2009) (Ed.) Methodology of Longitudinal Surveys. Wiley. ISBN 0-470-01871-2
  5. ^ Lynn, P. (2008) "The problem of non-response", chapter 3, 35-55, in International Handbook of Survey Methodology (ed.s E.de Leeuw, J.Hox & D.Dillman). Erlbaum. ISBN 0-8058-5753-2
  6. ^ Dillman, D.A. (1978) Mail and telephone surveys: The total design method. Wiley. ISBN 0-471-21555-4
  7. ^ De Leeuw, E.D. (2001). "I am not selling anything: Experiments in telephone introductions". Kwantitatieve Methoden, 22, 41–48.
  8. ^ Bogen, Karen (1996). "The effect of questionnaire length on response rates: A review of the literature". Proceedings of the Section on Survey Research Methods (American Statistical Association): 1020–1025. http://www.census.gov/srd/papers/pdf/kb9601.pdf. Retrieved 2013-03-19.
  9. ^ "Does Adding One More Question Impact Survey Completion Rate?". 2010-12-10. http://blog.surveymonkey.com/blog/2010/12/08/survey_questions_and_completion_rates/. Retrieved 2013-03-19.
  10. ^ http://www.research-live.com/news/news-headlines/respondent-engagement-and-survey-length-the-long-and-the-short-of-it/4002430.article
  11. ^ Hill, M.E (2002). "Race of the interviewer and perception of skin color: Evidence from the multi-city study of urban inequality". American Sociological Review 67 (1): 99–108. http://www.jstor.org/stable/3088935.
  12. ^ Flores-Macias, F.; Lawson, C. (2008). "Effects of interviewer gender on survey responses: Findings from a household survey in Mexico". International Journal of Public Opinion Research 20 (1): 100–110. doi:10.1093/ijpor/edn007.
  13. ^ Eisinga, R.; Te Grotenhuis, M.; Larsen, J.K.; Pelzer, B.; Van Strien, T. (2011). "BMI of interviewer effects". International Journal of Public Opinion Research 23 (4): 530–543. doi:10.1093/ijpor/edr026.
  14. ^ Anderson, B.A.; Abramson, B.D. (1988). "The effects of the race of the interviewer on race-related attitudes of black respondents in SRC/CPS national election studies". Public Opinion Quarterly 52 (3): 1–28. doi:10.1086/269108.
  15. ^ Kane, E.W.; Macaulay, L.J. (1993). "Interviewer gender and gender attitudes". Public Opinion Quarterly 57 (1): 1–28. doi:10.1086/269352.
  16. ^ Eisinga, R.; Te Grotenhuis, M.; Larsen, J.K.; Pelzer, B. "Interviewer BMI effects on under- and over-reporting of restrained eating. Evidence from a national Dutch face-to-face survey and a postal follow-up". International Journal of Public Health 57 (3): 643–647. doi:10.1007/s00038-011-0323-z.

References

  • Abramson, J.J. and Abramson, Z.H. (1999). Survey Methods in Community Medicine: Epidemiological Research, Programme Evaluation, Clinical Trials (5th edition). London: Churchill Livingstone/Elsevier Health Sciences. ISBN 0-443-06163-7
  • Groves, R.M. (1989). Survey Errors and Survey Costs Wiley. ISBN 0-471-61171-9
  • Ornstein, M.D. (1998). "Survey Research." Current Sociology 46(4): iii-136.
  • Shaughnessy, J. J., Zechmeister, E. B., & Zechmeister, J. S. (2006). Research Methods in Psychology (7th ed.). McGraw–Hill Higher Education. ISBN 0-07-111655-9 (pp. 143–192)
  • Adèr, H. J., Mellenbergh, G. J., & Hand, D. J. (2008). Advising on research methods: A consultant's companion. Huizen, The Netherlands: Johannes van Kessel Publishing.
  • Dillman, D.A. (1978) Mail and telephone surveys: The total design method. New York: Wiley. ISBN 0-471-21555-4
