- Past community surveys for the City have shown a pattern of under-representing certain racial-ethnic groups, age groups, non-English speakers, and lower-income residents – groups who are more likely to have negative experiences with City services and negative opinions of them. Loose dogs and potholes, for example, are more common in poor neighborhoods. To what extent would the positive ratings diminish if the voices of such residents were properly represented in the survey?
- Of course, the survey vendor’s work may be of spectacular quality, making it easy to outshine the competition. However, the most recent City satisfaction report omitted standard demographic information about the 1,512 city residents who completed the survey. There is no way to know whether the survey respondents accurately reflected the diversity of this community by race, ethnicity, gender or age. This information is considered standard in industry research reports — information commonly used to judge the scientific credibility of survey findings. Why have City staff allowed the omission of this important information from the report?
- Given the positive ratings that the City continues to enjoy from these surveys, it is not surprising that the company conducting them has enjoyed preferred-vendor status for many years. While the survey contract is bid competitively, the same out-of-state vendor has won the contract year after year, even though various local vendors are equally qualified to do the work. Are City leaders and staff concerned that a different vendor would change the positive ratings they enjoy?
Is your multicultural research misleading marketing decisions?
Despite the dramatic growth of multicultural populations in the U.S., many survey companies continue to rely on outdated assumptions and practices when designing and executing surveys in communities that are linguistically and culturally diverse. Following are some of the more problematic practices that may warrant your attention, whether you are a survey practitioner or a buyer of survey research.
Is Mayor Rawlings Hiding Behind Inflated Satisfaction Ratings of Dallas Residents?
The Texas Recipe for Muting the Hispanic Voice in Public Opinion Polls
- The 16,000 households selected as respondents received only an English-language version of the survey.
- The cover letter that was included with the English-language survey was provided only in English, and did not offer respondents any support to complete the survey in another language.
- A question that asked respondents to identify their race-ethnic background provided only one ethnic identifier for Spanish-speaking respondents — “Hispanic” — which could partly explain the under-count of these respondents, because other labels are often preferred over the “Hispanic” option.
- A call center was supposedly set up to receive incoming calls from survey respondents who had questions or needed Spanish-language support. But this call center probably received few calls from Spanish-speaking respondents, since the cover letter did not provide the needed contact information. Moreover, the report did not include a copy of the Spanish-language telephone survey that was supposedly used by the survey vendor’s call center to handle incoming calls from survey respondents.
- The study design required that automated advance calls (or “robo calls”) be made to the selected households prior to the survey mailing. Automated calls are a recognized nuisance from telemarketers and political campaigns, and they are known to depress response rates to legitimate public opinion polls.
- The report indicated that the survey “participation rate” was 34.6 percent – a figure that appears subjectively constructed and is not a rate recognized by the American Association for Public Opinion Research (2011). The overall survey response rate was more likely a much lower 9.7 percent (1,559 completions / 16,000 invited participants) – not surprising given the recognized shortcomings in the methodology.
- While the vendor acknowledged that Hispanic respondents were significantly under-represented and non-Hispanic whites were over-represented, no explanation was provided about the potential causes or consequences of this imbalance.
- The fact that the survey planners ignored a previous warning about the potential flaws in the survey methodology suggests that the poor survey outcomes did not result from just simple carelessness.
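The response-rate arithmetic above can be checked directly. A minimal sketch, assuming the simplest definition (completes divided by all invited households, which is what AAPOR’s minimum response rate RR1 reduces to when every sampled household is counted in the denominator):

```python
# Check of the response-rate figures cited above.
invited = 16_000      # households selected for the survey mailing
completed = 1_559     # completed surveys reported by the vendor

# Simplest definition: completes divided by all invited households.
# (AAPOR's minimum response rate, RR1, reduces to this when every
# sampled household is counted in the denominator.)
response_rate = completed / invited
print(f"Response rate: {response_rate:.1%}")  # Response rate: 9.7%
```

The reported 34.6 percent cannot be reproduced from these two published numbers, which is precisely why the “participation rate” label invites skepticism.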
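One conventional remedy for the kind of imbalance acknowledged by the vendor is post-stratification weighting, in which each group’s responses are re-weighted to match its known population share. The shares below are hypothetical, for illustration only; the report did not publish comparable figures:

```python
# Hypothetical population and sample shares -- illustration only,
# not figures from the vendor's report.
population_share = {"Hispanic": 0.40, "Non-Hispanic white": 0.45, "Other": 0.15}
sample_share     = {"Hispanic": 0.15, "Non-Hispanic white": 0.70, "Other": 0.15}

# Post-stratification weight for each group: population share / sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}
for group, w in weights.items():
    print(f"{group}: weight = {w:.2f}")
```

Weighting can rebalance the totals on paper, but it cannot recover the views of Spanish-speaking residents who never received a usable questionnaire; that is why the methodological flaws matter in the first place.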
To make matters worse, the same survey vendor was awarded a second contract to conduct another public opinion poll of Texas residents using the same flawed methodology. Why are state officials allowing such flawed practices to take place, especially at a time when the state’s population is being heavily impacted by the growing Hispanic population?
- Research firms that compete for opinion polls in the public sector should be required to produce evidence that they have the staff, facilities and past experience to conduct polls in linguistically and culturally diverse communities. If a research firm does not produce a representative sample of such communities in a contracted study, it should not be rewarded with another contract that uses the same flawed methodology.
- The committee members convened by public agencies to evaluate research proposals may not have the expertise to judge these proposals in terms of their adequacy for diverse communities. The inclusion of experts experienced in conducting polls in diverse communities might have prevented the missteps in the Texas A&M studies.
- In the haste to award a contract to the lowest bidder, proposal evaluators do not regularly check the references provided by the different bidders, yet somehow still find a way to rate the bidders’ relevant experience without this information. Prior to contract award, an audit should confirm that references were verified for every vendor that submitted a bid in such competitions.
It is unclear that the State of Texas got the “best value” by selecting the lowest bidder from outside of Texas. Indeed, what is the economic benefit to Texans when a contract is awarded to a non-Texas vendor whose payrolls, taxes and local spending for goods and services will only benefit another state?
National Poll on Arizona’s Immigration Law May Be Misleading
A recent national poll released by the Pew Research Center (5-12-10) reported widespread public support for Arizona’s new immigration law — a resounding 73 percent of the survey respondents! Headlines such as these, reinforced by the scientific credibility of an established polling organization, undoubtedly add more momentum to the call for similar laws in other states.
Is the national mood really that supportive of Arizona’s new immigration law? Not being one to embrace polling results uncritically, I reviewed the study methodology and discovered that the entire survey was conducted in one language: English. Let me explain why this bias seriously limits the usefulness of the poll results.
Having conducted studies of multicultural populations over the past 30 years, I can assure you that two-thirds of Hispanics and 80 percent of Asians prefer to communicate in their native language when provided the choice. When a poll that includes these segments is conducted only in English, the results are predictable: lower response rates, less valid information, and more missing data. More importantly, because these respondents are more likely to be foreign-born, their exclusion from the Pew study has no doubt also inflated the reported level of public support for Arizona’s new law.
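To see how an English-only design can inflate an aggregate figure, consider a back-of-the-envelope calculation. The numbers below are invented for illustration and are not Pew’s figures:

```python
# Invented, illustrative numbers -- not from the Pew study.
share_english_pref = 0.85   # adults reachable by an English-only poll
share_spanish_pref = 0.15   # adults who would respond only in Spanish
support_english = 0.78      # support for the law among the first group
support_spanish = 0.30      # support for the law among the excluded group

# True population support is the share-weighted average of both groups;
# an English-only poll observes only the first group.
true_support = (share_english_pref * support_english
                + share_spanish_pref * support_spanish)
observed = support_english
print(f"True support: {true_support:.0%}, English-only poll: {observed:.0%}")
# True support: 71%, English-only poll: 78%
```

Even a modest excluded minority with markedly different views can shift the headline number by several points.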
One only has to wonder why the Pew Research Center decided to address such a controversial topic in a manner that silenced the very voices that might have shared a different point of view about Arizona’s new immigration law.