No data is just as bad as bad data. The digital age has made survey creation easy: anybody can now set up a survey in a matter of minutes, and that is both a blessing and a curse. This ease has made surveys more and more commonplace. As far back as 1998, columnist Arianna Huffington wrote: “It’s no wonder that the mushrooming number of opinion polls, coupled with the outrageous growth of telemarketing calls, have led to a soaring refuse-to-answer rate among people polled”.
In the digital age, we have gone from ‘refusal to participate’ to an outright immunity to surveys. The downside to low response rates, aside from acting as a poor compass for decision making, is that the collected data, if any, is often not representative of the population. There is also the increased possibility of non-response bias. As noted in Nonresponse in Social Science Surveys: A Research Agenda, “nonresponse creates the potential for bias in estimates, in turn affecting survey design, data collection, estimation, and analysis.”
For universities that rely on advanced software and experts to develop surveys, the cost of designing and administering them can spiral quickly. An expert data analyst can usually spot bad data and alert the institution to the futility of acting on it. If an initial attempt to collect data from students on a particular issue is met with low or poor responses, HEIs will likely go back to the drawing board and come up with a better way to get the needed data. This, in turn, leads to more spending on survey redesign, administration and organisation.
Survey fatigue can also result in the common issue of response bias. Response bias occurs when respondents either do not respond truthfully or give answers they feel the researcher wants to hear, favouring certain outcomes. According to a paper by Brian et al., “The response given is a function of both the true response and participant response fatigue”. Because most students are ‘required’ to fill out surveys, many cope with survey fatigue by giving random or favourable answers, in the hope that the survey will be the last of its kind. Response bias born of survey fatigue takes us back to “bad data”, which leads to misguided decisions and, in turn, wasted resources.
All of this ends up having a significant effect on critical decisions about teaching and learning. When those critical decisions are based on bad data, the result is a series of managerial decisions that have no impact on teaching and learning excellence!