
Survey data collection

With the application of probability sampling in the 1930s, surveys became a standard tool for empirical research in the social sciences, marketing and official statistics.[1] Survey data collection methods are the various ways in which information can be gathered systematically from a sample of individuals for a statistical survey. The first major shift was from traditional paper-and-pencil interviewing (PAPI) to computer-assisted interviewing (CAI). Today, face-to-face surveys (CAPI), telephone surveys (CATI) and computer-assisted self-administered surveys (CASI, CSAQ) are increasingly being replaced by web surveys.[2]

Modes of data collection

There are several ways of administering a survey. Within a survey, different methods can be used for different parts. For example, interviewer administration can be used for general topics but self-administration for sensitive topics. The choice between administration modes is influenced by several factors, including 1) costs, 2) coverage of the target population, 3) flexibility of asking questions, 4) respondents’ willingness to participate and 5) response accuracy. Different methods create mode effects that change how respondents answer. The most common modes of administration are listed under the following headings.[3]

Mobile data collection surveys

Mobile data collection, or mobile surveys, is an increasingly popular method of data collection. The survey, form, app or collection tool resides on a mobile device such as a smartphone or tablet. These devices offer innovative ways to gather data regardless of the time and location of the respondent. Beyond high mobile phone penetration, further advantages are quicker response times and the possibility of reaching previously hard-to-reach target groups.[4] In some cases the program is designed for one specific type of data collection, for example the graffiti data collection tool GraffitiMapper, but there is a growing need for mobile data collection apps that can be customized to any industry, project or survey. A mobile device also allows information to be gathered more discreetly where needed: an observation survey can be performed on a smartphone or tablet instead of a clipboard or stack of survey papers. Many mobile data collection programs allow the collector to sync data to a server to ensure the data is not lost, and some can collect data offline and then sync with a server when back in range of a signal, which also supports remote data collection. Mobile data collection and reporting projects have become abundant as mobile use for development takes off.[5]
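The collect-offline-then-sync pattern described above can be pictured as a local queue that is flushed when connectivity returns. A minimal, hypothetical sketch in Python (the `OfflineQueue` class and the `upload` callback are invented for illustration; a real app would persist the queue to device storage and call a server API):

```python
import json

# Hypothetical offline queue: survey responses are appended locally and
# flushed to the server once a connection is available.
class OfflineQueue:
    def __init__(self):
        self.pending = []

    def record(self, response):
        """Recording always succeeds, even with no signal."""
        self.pending.append(response)

    def sync(self, upload, online):
        """Flush pending records via `upload` if `online`; keep them otherwise."""
        if not online:
            return 0
        sent = 0
        while self.pending:
            upload(json.dumps(self.pending[0]))  # serialize one record
            self.pending.pop(0)                  # drop it only after upload
            sent += 1
        return sent

q = OfflineQueue()
q.record({"q1": "yes"})
q.record({"q1": "no"})
print(q.sync(upload=lambda payload: None, online=False))  # 0 (still offline)
print(q.sync(upload=lambda payload: None, online=True))   # 2
```

Popping a record only after its upload succeeds is what guarantees that data is not lost if the signal drops mid-sync.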

Online surveys

Online (Internet) surveys are becoming an essential research tool for a variety of research fields, including marketing, social and official statistics research. According to ESOMAR, online survey research accounted for 20% of global data-collection expenditure in 2006.[1] They offer capabilities beyond those available for any other type of self-administered questionnaire.[6] Online consumer panels are also used extensively for carrying out surveys, but their quality is considered inferior because the panelists are regular contributors and tend to become fatigued.

Advantages of online surveys

  • Web surveys are faster, simpler and cheaper.[2] However, lower costs are not so straightforward in practice, as they are strongly interconnected to errors. Because response rate comparisons to other survey modes are usually not favourable for online surveys, efforts to achieve a higher response rate (e.g. with traditional solicitation methods) may substantially increase costs.[1]
  • The entire data collection period is significantly shortened, as all data can be collected and processed in little more than a month.[2]
  • Interaction between the respondent and the questionnaire is more dynamic compared to e-mail or paper surveys.[6] Online surveys are also less intrusive, and they suffer less from social desirability effects.[2]
  • Complex skip patterns can be implemented in ways that are mostly invisible to the respondent.[6]
  • Pop-up instructions can be provided for individual questions to provide help with questions exactly where assistance is required.[6]
  • Questions with long lists of answer choices can be used to provide immediate coding of answers to certain questions that are usually asked in an open-ended fashion in paper questionnaires.[6]
  • Online surveys can be tailored to the situation (e.g. respondents may be allowed to save a partially completed form, the questionnaire may be preloaded with already available information, etc.).[2]
  • Online questionnaires may be improved by applying usability testing, where usability is measured with reference to the speed with which a task can be performed, the frequency of errors and user satisfaction with the interface.[2]
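The complex skip patterns mentioned above can be expressed as simple branching rules that route each respondent past questions that do not apply to them. A minimal, hypothetical sketch in Python (the question IDs, texts and routing rules are invented for illustration):

```python
# Hypothetical skip-pattern engine: each question names a function that,
# given the answer just recorded, returns the next question to show.
QUESTIONS = {
    "q1": {"text": "Do you own a car?",
           "next": lambda ans: "q2" if ans == "yes" else "q3"},
    "q2": {"text": "How many kilometres do you drive per week?",
           "next": lambda ans: "q3"},
    "q3": {"text": "Do you use public transport?",
           "next": lambda ans: None},  # end of questionnaire
}

def administer(answers):
    """Walk the questionnaire, recording only the questions actually shown."""
    shown, qid = [], "q1"
    while qid is not None:
        shown.append(qid)
        qid = QUESTIONS[qid]["next"](answers[qid])
    return shown

# A respondent without a car never sees q2 -- the skip is invisible to them.
print(administer({"q1": "no", "q3": "yes"}))               # ['q1', 'q3']
print(administer({"q1": "yes", "q2": "200", "q3": "no"}))  # ['q1', 'q2', 'q3']
```

In a paper questionnaire the same routing would need explicit "if yes, go to question 2" instructions, which respondents routinely misread; here the branching happens server-side and is invisible.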

Key methodological issues of online surveys

  • Sampling. The difference between probability samples (where the inclusion probabilities for all units of the target population are known in advance) and non-probability samples (which often require less time and effort but generally do not support statistical inference) is crucial. Probability samples are highly affected by problems of non-coverage (not all members of the general population have Internet access) and frame problems (online survey invitations are most conveniently distributed by e-mail, but there are no e-mail directories of the general population that could be used as a sampling frame). Because coverage and frame problems can significantly affect data quality, they should be adequately reported when disseminating the research results.[1]
  • Invitations to online surveys. Owing to the lack of sampling frames, many online survey invitations are published as a URL link on web sites or in other media, which leads to sample selection bias that is beyond the researcher's control and to non-probability samples. Traditional solicitation modes, such as telephone or mail invitations to web surveys, can help overcome the probability sampling issues of online surveys, but such approaches face dramatically higher costs and questionable effectiveness.[1]
  • Non-response. Online survey response rates are generally low and also vary widely – from less than 1% in enterprise surveys with e-mail invitations to almost 100% in specific membership surveys. Besides refusing participation, breaking off during the survey, or not answering certain questions, several other non-response patterns can be observed in online surveys, such as lurking respondents and combinations of partial and item non-response. Response rates can be increased by offering monetary or other incentives to respondents, by contacting respondents several times (follow-up), and by keeping the questionnaire difficulty as low as possible.[1]
  • Questionnaire design. While modern web questionnaires offer a range of design features (different question types, images, multimedia), the use of such elements should be limited to what is necessary for respondents to understand questions or to stimulate response; it should not affect their answers, as that would lower the validity and reliability of the data. Appropriate questionnaire design can help lower the measurement error that can also arise from the respondents or the survey mode itself (the respondent's motivation, computer literacy, abilities, privacy concerns, etc.).[1]
  • Post-survey adjustments. Various robust procedures have been developed for situations where sampling deviates from probability selection, or where non-coverage and non-response problems arise. Standard statistical inference procedures (e.g. confidence interval calculation and hypothesis testing) still require a probability sample. Actual survey practice, particularly in marketing research and public opinion polling, massively neglects the principles of probability sampling and increasingly presses the statistical profession to specify the conditions under which non-probability samples may work.[1]
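One common post-survey adjustment is post-stratification weighting: respondents are reweighted so that the sample's composition matches known population shares (e.g. from a census). A minimal sketch in Python, with invented population shares and toy respondent data purely for illustration:

```python
# Known population shares by age group (e.g. from a census).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Hypothetical respondents: (age group, answer to a yes/no question as 0/1).
sample = [("18-34", 1), ("18-34", 1), ("35-54", 0),
          ("55+", 0), ("55+", 1), ("55+", 0)]

n = len(sample)
count = {g: sum(1 for grp, _ in sample if grp == g) for g in population_share}

def weight(group):
    """Weight = population share / sample share for the respondent's group."""
    return population_share[group] / (count[group] / n)

# Weighted estimate of the proportion answering "yes".
weighted_mean = sum(weight(g) * y for g, y in sample) / n
print(round(weighted_mean, 3))  # 0.417 (unweighted mean would be 0.5)
```

Here the over-represented 18-34 group (2/6 of the sample but 30% of the population) is weighted down, pulling the estimate below the raw sample mean. Note that such weighting corrects composition, not the self-selection inherent in non-probability samples.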

It is important that uncontrolled variation in how a questionnaire appears is minimized. Web-based survey methods make the construction and delivery of questionnaire instruments relatively easy, but it is difficult to ensure that everyone sees the questionnaire as its designer intended, because of the variability of the software and hardware used by respondents.[8]

Telephone

  • use of interviewers encourages sample persons to respond, leading to higher response rates.[9]
  • interviewers can increase comprehension of questions by answering respondents' questions.
  • fairly cost efficient, depending on local call charge structure
  • good for large national (or international) sampling frames
  • some potential for interviewer bias (e.g. some people may be more willing to discuss a sensitive issue with a female interviewer than with a male one)
  • cannot be used for non-audio information (graphics, demonstrations, taste/smell samples)
  • unreliable for consumer surveys in rural areas where telephone density is low[10]
  • three types:
    • traditional telephone interviews
    • computer-assisted telephone dialing
    • computer-assisted telephone interviewing (CATI)

Mail

  • the questionnaire may be handed to the respondents or mailed to them, but in all cases it is returned to the researcher via mail
  • an advantage is that cost is very low, since bulk postage is cheap in most countries
  • long time delays, often several months, before the surveys are returned and statistical analysis can begin
  • not suitable for issues that may require clarification
  • respondents can answer at their own convenience (allowing them to break up long surveys; also useful if they need to check records to answer a question)
  • no interviewer bias introduced
  • non-response bias can be corrected by extrapolation across waves[11]
  • a large amount of information can be obtained: some mail surveys are as long as 50 pages
  • response rates can be improved by using mail panels
    • members of the panel have agreed to participate
    • panels can be used in longitudinal designs where the same respondents are surveyed several times
  • response rates can be improved by using monetary incentives[12]
  • response rates are affected by the class of mail through which the survey was sent[13]
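The extrapolation-across-waves idea cited above (Armstrong and Overton, 1977) rests on the assumption that people who respond in later mailing waves resemble non-respondents more closely than early responders do. A minimal linear-extrapolation sketch in Python (the wave means and counts are invented for illustration; the original method also includes weighting steps not shown here):

```python
# Successive mailing waves: (wave number, respondents, mean answer on a 1-7 scale).
waves = [
    (1, 500, 5.2),
    (2, 200, 4.8),
    (3, 100, 4.5),
]

# Fit a simple least-squares trend of wave mean against wave number,
# then extrapolate one step beyond the last wave as a non-respondent estimate.
n = len(waves)
xs = [w for w, _, _ in waves]
ys = [m for _, _, m in waves]
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
nonresp_est = y_bar + slope * (4 - x_bar)  # hypothetical "wave 4" = non-respondents
print(round(nonresp_est, 2))  # 4.13
```

The downward trend across waves (5.2, 4.8, 4.5) suggests non-respondents would have answered lower still, so an estimate based only on wave 1 would overstate the population mean.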

Face-to-face

  • suitable for locations where telephone or mail infrastructure is not developed
  • potential for interviewer bias
  • easy to manipulate by completing multiple times to skew results

Mixed-mode surveys

Researchers can combine several of the above methods for data collection. For example, researchers can invite shoppers at malls and send willing participants questionnaires by email. With the introduction of computers to the survey process, survey modes now include combinations of different approaches, or mixed-mode designs. Some of the most common methods are:[14]

  • Computer-assisted personal interviewing (CAPI): The computer displays the questions on screen, the interviewer reads them to the respondent, and then enters the respondent's answers.
  • Audio computer-assisted self-interviewing (audio CASI): The respondent operates the computer; the computer displays the questions on screen and plays recordings of them, and the respondent then enters his/her answers.
  • Computer-assisted telephone interviewing (CATI)
  • Interactive voice response (IVR): The computer plays recordings of the questions to the respondent over the telephone; the respondent then answers by using the telephone keypad or by speaking aloud.
  • Web surveys: The computer administers the questions online.

References

  1. ^ a b c d e f g h Vehovar, V.; Lozar Manfreda, K. (2008). "Overview: Online Surveys". In Fielding, N.; Lee, R. M.; Blank, G. The SAGE Handbook of Online Research Methods. London: SAGE. pp. 177–194. ISBN 978-1-4129-2293-7.
  2. ^ a b c d e f Bethlehem, J.; Biffignandi, S. (2012). Handbook of Web Surveys. Wiley Handbooks in Survey Methodology. 567. New Jersey: John Wiley & Sons. ISBN 978-1-118-12172-6.
  3. ^ Mellenbergh, G.J. (2008). "Surveys". In Adèr, H.J.; Mellenbergh, G.J. Advising on Research Methods: A consultant's companion. Huizen, The Netherlands: Johannes van Kessel Publishing. pp. 183–209. ISBN 978-90-79418-01-5.
  4. ^ "Reaching the Mobile Respondent: Determinants of High-Level Mobile Phone Use Among a High-Coverage Group". Social Science Computer Review. http://ssc.sagepub.com/content/28/3/336.abstract.
  5. ^ "Mobile Phones for Data Collection". http://mobileactive.org/howtos/mobile-phones-data-collection.
  6. ^ a b c d e Dillman, D.A. (2006). Mail and Internet Surveys: The Tailored Design Method (2nd ed.). New Jersey: John Wiley & Sons. ISBN 978-0-470-03856-7.
  7. ^ a b c d Scott, Martin (27 August 2012). "Customer research easier in digital era". USA Today. http://usatoday30.usatoday.com/MONEY/usaedition/2012-08-28-Efficient-Small-Business-Ecommerce_CV_U.htm. Retrieved 20 February 2013.
  8. ^ Fricker, R. D. (2008). "Sampling Methods for Web and E-mail Surveys". In Fielding, N.; Lee, R. M.; Blank, G. The SAGE Handbook of Online Research Methods. London: SAGE. pp. 195–216. ISBN 978-1-4129-2293-7.
  9. ^ Groves, R.M. (1989). Survey Costs and Survey Errors. New York: Wiley. ISBN 978-0-471-67851-9.
  10. ^ "When telephone surveys are not enough". http://www.synovate.com/changeagent/index.php/site/full_story/when_telephone_surveys_are_not_enough/.
  11. ^ J. Scott Armstrong and Terry S. Overton (1977). "Estimating Nonresponse Bias in Mail Surveys". Journal of Marketing Research 14: 396–402. http://marketing.wharton.upenn.edu/ideas/pdf/Armstrong/EstimatingNonresponseBias.pdf.
  12. ^ J. Scott Armstrong (1975). "Monetary Incentives in Mail Surveys". Public Opinion Quarterly 39: 111–116. http://www.forecastingprinciples.com/paperpdf/Monetary%20Incentives.pdf.
  13. ^ J. Scott Armstrong (1990). "Class of Mail Does Affect Response Rates to Mailed Questionnaires: Evidence from Meta-Analysis (with a Reply by Lee Harvey)". Journal of the Market Research Society 32: 469–472. http://www.forecastingprinciples.com/paperpdf/Class%20of%20Mail.pdf.
  14. ^ Groves, R.M.; Fowler, F. J.; Couper, M.P.; Lepkowski, J.M.; Singer, E.; Tourangeau, R. (2009). Survey Methodology. New Jersey: John Wiley & Sons. ISBN 978-1-118-21134-2.


