Political leaders often embrace positive satisfaction ratings, but should they?
It should come as no surprise that political leaders enjoy quoting positive ratings from surveys of the communities they serve – a sort of badge of honor for their job performance. Former Dallas City Manager A.C. Gonzalez is no exception, citing a recent citizen satisfaction survey of 1,512 Dallas residents that showed a community that, with some exceptions, appeared quite happy with City services. Mayor Mike Rawlings has referenced these positive ratings as well. At the national level, Republican nominee Donald Trump recently pointed to positive student satisfaction ratings to counter allegations of fraud in lawsuits against Trump University. Indeed, positive ratings are like candy to politicians, whether deserved or not.
But how much faith can we place in these satisfaction ratings? Dallas Morning News columnist Robert Wilonsky recently noted the apparent paradox of the City’s continuing high ratings given the multitude of unresolved problems: potholes, loose dogs mauling residents in poor neighborhoods, contracting irregularities, deteriorating air quality, traffic congestion, and a host of other issues. Wilonsky also pointed out that the survey vendor’s report curiously omitted information about the ages of the study respondents. Indeed, the report tells us nothing about satisfaction levels across racial-ethnic, income, age, or other key demographic groups – information that would provide more insight into how well the study sample mirrored Dallas’ diverse population. The City of Dallas is now 41 percent Latino, 24 percent black, 3 percent Asian, and 29 percent white – a diverse community of residents who are entitled to have their voices heard in surveys sponsored by their tax dollars.
While City leaders have no problem embracing citizen satisfaction ratings, the rest of us should be cautious about the results of satisfaction surveys, especially those that consistently show their sponsors in a positive light. In the case of the City of Dallas, there is reason to believe that these ratings are inflated and self-serving:
  • Past community surveys for the City have shown a pattern of under-representing certain racial-ethnic groups, age groups, non-English speakers, and lower-income residents – groups who are more likely to have negative experiences with and opinions of City services. Loose dogs and potholes, for example, are more common in poor neighborhoods. To what extent would the positive ratings diminish if the voices of these residents were properly represented in the survey?
  • The most recent City satisfaction report omitted standard demographic information about the 1,512 city residents who completed the survey. One has no idea whether the respondents accurately reflected the diversity of this community by race, ethnicity, gender, or age. This information is considered standard in industry research reports and is commonly used to judge the scientific credibility of survey findings. Why have City staff allowed its omission from the report?
  • Given the positive ratings that the City continues to enjoy from these surveys, it is not surprising that the company that conducts them has enjoyed preferred-vendor status for many years. While the survey contract is bid competitively, the same out-of-state vendor has won the contract year after year even though various local vendors are equally qualified to do the work. Are City leaders and staff concerned that a different vendor would change the positive ratings they enjoy?
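To make the stakes of the missing demographic profile concrete, consider the kind of check such a profile would allow. The sketch below is hypothetical: the census shares are the figures quoted above, but the respondent counts are invented purely to illustrate how a chi-square goodness-of-fit test would flag a sample skewed away from the city’s actual mix.

```python
# Sketch: would a demographic profile show the sample mirrors the city?
# Census shares are the figures quoted in this column; the respondent
# counts below are hypothetical, invented only to illustrate the check.

census = {"Latino": 0.41, "Black": 0.24, "Asian": 0.03, "White": 0.29}
total = sum(census.values())  # 0.97; the remaining 3% (other groups) is
shares = {g: s / total for g, s in census.items()}  # dropped and renormalized

sample = {"Latino": 430, "Black": 300, "Asian": 40, "White": 742}  # n = 1,512
n = sum(sample.values())

# Pearson chi-square goodness-of-fit statistic:
# sum over groups of (observed - expected)^2 / expected
chi2 = sum((sample[g] - shares[g] * n) ** 2 / (shares[g] * n) for g in sample)

# The 5% critical value for 3 degrees of freedom is 7.815; a larger
# statistic means the sample's mix differs significantly from the city's.
print(f"chi-square = {chi2:.1f} vs. critical value 7.815")
```

With these invented counts the statistic is far above the critical value, which is exactly the kind of red flag a published demographic profile would let any reader raise.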
Community satisfaction ratings provide one measure of the City’s performance in serving a community, but they paint an incomplete picture of actual performance when key groups are omitted or under-represented. City leaders’ fascination with these positive ratings, and with comparisons to other U.S. cities, creates the false impression that everything in Dallas is just peachy. A guided tour of City neighborhoods tells quite a different story.

Clearly, the next City Manager for Dallas, as well as the next Mayor, will have a long list of City-related needs requiring immediate attention. If citizen satisfaction surveys are to continue serving City leaders and staff as a benchmark of performance, some changes will be needed to inspire more confidence in the ratings. First, the public must be given access to a detailed methodology describing the steps used to conduct the study, including the extent of support in languages other than English. This matters because many studies confirm that over half of Latino and Asian adults prefer to communicate in their native language, a preference that improves comprehension and survey participation. Second, the report must provide a detailed demographic profile of the survey respondents – a standard requirement in industry research studies – and perhaps the only evidence that the random selection of City households produced a fair and unbiased representation of the City’s diverse community. Lastly, to remove the appearance of favoritism in vendor selection, City staff should be required to justify the continued choice of one vendor over several years despite the availability of equally qualified alternatives.
Is your multicultural research misleading marketing decisions?

Despite the dramatic growth of multicultural populations in the U.S., many survey companies continue to use outdated assumptions and practices in the design and execution of surveys in communities that are linguistically and culturally diverse. Following are some of the more problematic practices that may warrant your attention, whether you are a survey practitioner or a buyer of survey research.

1. Is your survey team culturally sterile?
If your survey team lacks experience conducting surveys in diverse communities, you may already be dead on arrival. Since most college courses on survey or marketing research do not address the problems likely to occur in culturally diverse communities, mistakes are very likely to happen. An experienced multicultural team member is needed to assess the study’s challenges and resources. Really, how else will you know if something goes wrong?
2.  Are you planning to outsource to foreign companies?
So your firm has decided to outsource its Latino or Asian surveys instead of hiring your own bilingual interviewers. Think twice. If you have ever monitored interviews conducted by foreign survey shops, you are likely to discover several issues that undermine survey quality: language articulation problems and a lack of familiarity with U.S. brands, institutions, and geography. The money you save by outsourcing will not fix the data quality problems that emerge from these studies. Better to use an experienced, U.S.-based research firm with multilingual capabilities that does not outsource to foreign survey shops.
3. Are you forcing one mode of data collection on survey respondents?
Think about it – mail surveys require reading and writing ability; phone surveys require clear speech and hearing; and online surveys require reading ability and Internet access. Forcing one mode of data collection can exclude important segments of consumers and bias your survey results. Increasingly, survey organizations are using mixed-mode methods (i.e., a combination of mail, phone, and online) to remove these recognized limitations, achieving improved demographic representation and better quality data.
4. English-only surveys make little sense in a multicultural America.
Of course, everyone in America should be able to communicate in English, and most do. But our own experience confirms that two-thirds of Latino adults and 7 in 10 Asians prefer a non-English interview when given a choice. The reason is simple: the Latino and Asian populations include large numbers of immigrants who understand their native language better than English – which translates to enhanced comprehension of survey questions, more valid responses, and improved response rates. Without bilingual support, the quality of survey data is increasingly suspect in today’s diverse communities.
5. Are you still screening respondents with outdated race-ethnic labels?
Multicultural persons dislike surveys that classify them with outdated or offensive race-ethnic labels – a practice that can result in immediate termination of the interview, misclassification of respondents, or missing data. Published research by the Pew Research Center, and our own experience, suggests that it is better to offer multiple labels rather than a single label in a question: that is, “Do you consider yourself Black or African American, Hispanic or Latino, Asian or Asian American, white or Anglo American?” Since Latinos and Asians identify more strongly with their country of origin, it is a good idea to record that country or provide a listing of the countries represented by the terms Latino or Asian. The label Caucasian is often used alongside white, but it should be avoided because the Caucasian category also includes Latinos.
6.  Are your survey respondents consistently skewed towards women?
A common problem is that multicultural males are considerably more reluctant than white males to participate in surveys, which often yields data overly influenced by female sentiments and behaviors. The imbalance often stems from poor management of interviewers who dedicate less effort to gaining male cooperation. Rather than improve the data collection practices that create such imbalances, survey analysts typically apply post-stratification weights – even when the imbalances are large – a practice that can distort the survey results. It is always good practice to review both unweighted and weighted survey data to judge the extent of this problem.
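The advice to compare unweighted and weighted figures is easy to act on. The Python sketch below uses entirely hypothetical counts, population shares, and satisfaction rates to show the mechanics: a sample that under-represents men gets reweighted, and the headline estimate shifts.

```python
# Sketch of post-stratification weighting by gender.
# All counts, shares, and satisfaction rates are hypothetical,
# invented only to illustrate the mechanics.

population = {"male": 0.48, "female": 0.52}  # assumed population shares
sample = {"male": 120, "female": 280}        # completed interviews (n = 400)
n = sum(sample.values())

# Weight for each group = population share / sample share
weights = {g: population[g] / (sample[g] / n) for g in sample}

# Hypothetical satisfaction rates observed in each group
satisfied = {"male": 0.50, "female": 0.70}

unweighted = sum(sample[g] * satisfied[g] for g in sample) / n
weighted = sum(sample[g] * weights[g] * satisfied[g] for g in sample) / n

# -> unweighted: 0.640, weighted: 0.604
print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
```

A 3.6-point swing from weighting alone is exactly the kind of gap that a side-by-side review of unweighted and weighted tables would surface; if the gap is large, the fix belongs in fieldwork, not in the weights.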
7.  Online panels are not the solution for locally-focused multicultural studies.
With high anxiety running through the survey industry after the Gallup Organization’s recent $12 million settlement of a class-action lawsuit over FCC telemarketing rules, many survey companies will likely replace their telephone studies with online panels. For nationally focused surveys, online panels may be an adequate way to reach a cross section of multicultural online consumers. For local markets, however, the number of multicultural panel members is often insufficient to complete a survey with a minimum sample of 400 respondents. Worse yet, the majority of multicultural panel members are more acculturated, English-speaking, higher-income individuals – immigrants are scarce on such panels. Online panel companies will have to do a better job of recruiting multicultural consumers. In the meantime, don’t get your hopes too high.
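The 400-respondent floor mentioned above comes from a standard sampling calculation: with a simple random sample of 400 and the most conservative assumption of a 50/50 split, the margin of error at 95 percent confidence is about plus or minus 5 percentage points. A quick sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# For n = 400 and p = 0.5 the margin is about +/-4.9 percentage points,
# the usual benchmark for a credible local-market estimate.
print(f"+/-{margin_of_error(400) * 100:.1f} points")
```

When a panel can only deliver, say, 150 local multicultural respondents, the same formula gives a margin of roughly plus or minus 8 points, which is why undersized local panels are a poor substitute.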
8.  Translators are definitely not the last word on survey questionnaires.
So your questionnaire has just been translated by a certified translator, and you are confident that you are ready to begin the study of multicultural consumers. After a number of interviews, however, you learn that the survey respondents are having difficulty understanding some of the native-language vocabulary, and interviewers are having to “translate on the fly” by substituting more familiar wording – a major problem in multicultural studies. Clearly, the survey team placed undue confidence in the work of the certified translator and did not conduct a pilot study of the translated questionnaire to check its comprehension and relevance among the target respondents. A good pilot study can save you time, money, and headaches.
These tips represent only a partial listing of the many ways in which a survey can misrepresent multicultural communities. Industry recognition of these problems is a first step toward their elimination, although survey practitioners are slow to change their preferred ways of collecting data. Raising the standards for multicultural research will perhaps pick up steam once higher-education institutions require the study of these issues in their research courses and buyers of research demand higher standards from their vendors.

You can reach Dr. Rincón at edward@rinconassoc.com

© Rincón & Associates LLC 2015

Is Mayor Rawlings Hiding Behind Inflated Satisfaction Ratings of Dallas Residents?
“Dallas residents generally say they’re more satisfied than people in many other cities.” 
According to the Dallas Morning News, that was Mayor Rawlings’ response to challenger Marcos Ronquillo during their recent debate at the Belo Mansion, when Mr. Ronquillo challenged the Mayor’s misplaced priorities on the Trinity toll road. As Mr. Ronquillo asserted, it makes little sense to make such an expensive investment of questionable value when the City’s urban core is crumbling – the third-highest poverty rate in the nation, a public school system beset by problems, and thousands of potholes that residents endure daily. But are Dallas residents really more satisfied than people in other cities? A closer look at how these satisfaction ratings are produced should raise some eyebrows.
We are all accustomed to hearing of efforts to inflate performance ratings – colleges leaving out the test scores of athletes, school districts omitting or doctoring the scores of low performers – all efforts to inflate performance and deceive the public. Although less obvious to the public, opinion polling firms also use questionable practices that distort survey results. A review of the survey reports behind the City’s satisfaction ratings shows that the ratings are inflated because the segments of City residents most likely to receive poor services are excluded from the surveys. Curiously, for several years now the City has awarded the satisfaction survey contract to the same company, which uses the same flawed methodology to produce the same inflated ratings. Really makes you wonder. The reports are available to the public for independent review.

Mayor Rawlings, you owe the public an explanation of how these satisfaction ratings are produced. More importantly, you cannot hide behind inflated ratings that have little credibility. The public deserves a more reasoned explanation of your willingness to overlook the City’s crumbling infrastructure while you continue to promote a questionable investment in the Trinity toll road.