What the U.S. Presidential Election taught us about customer experience surveys
If there is one thing that the recent U.S. presidential results taught us, it is that taking time to get to know your customers is crucial.
Failure to get a true read on the audience you are trying to influence can have devastating consequences: your success may rest on a shaky foundation of false assumptions.
On Election Day, pollsters were just shy of universally predicting Clinton the victor. The New York Times gave Clinton an 85% chance of victory; the Huffington Post 98%; the Princeton Election Consortium 99%.
By the end of the evening, as we now know, “pollsters flubbed the 2016 presidential election in seismic fashion.” Polling as a practice and as a profession was dealt a blow from which it will not soon recover.
I found myself very sympathetic to the people who make a living out of gathering and analyzing political data. After all, data-driven planning is a “best” practice. “Shooting from the hip” and “gut feelings” are so, well, yesterday. Yet, clearly, “best” wasn’t good enough.
On Election Night, before the winner was announced, the featured story seemed to be: how could the pollsters have gotten it so wrong? Several theories emerged.
As someone involved in a customer experience transformation based on Voice of the Customer feedback and other data, I couldn’t help but wonder: if the Washington pundits could be so wrong, could my own “pollsters” — the team that runs our Net Promoter System (NPS) program to measure customer sentiment — be just as wrong?
I ran through each theory with an eye toward uncovering similar vulnerabilities in my company’s own Voice of the Customer program.
Yes, vulnerabilities exist in any polling or survey process. To counter them, we need to improve both the quantity and the quality of the feedback.
We can improve the quantity in the hope that larger, more diverse samples are more likely to converge on the truth.
We can improve the quality by, wherever possible, establishing trust and conducting face-to-face meetings. With this combination, we are more likely to obtain the purest, most truthful, most valuable feedback.
I think the election pollsters got it wrong because they failed to gauge voter loyalty. The NPS methodology is based on a single question meant to measure customer loyalty rather than customer satisfaction. Instead of asking “Who are you voting for?”, NPS calls for asking, “How likely are you to recommend this candidate?” The latter question measures loyalty and, I believe, would have flushed out quite a few of those shy Trump supporters.
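For readers unfamiliar with the mechanics, the standard NPS arithmetic behind that single question can be sketched as follows. This is an illustrative helper, not code from the author's program: respondents answer on a 0–10 scale, with 9–10 counted as promoters, 0–6 as detractors, and 7–8 as passives.

```python
def nps(scores):
    """Compute a Net Promoter Score from 0-10 'likelihood to recommend' responses.

    Promoters score 9-10, detractors 0-6, passives 7-8 (passives are counted
    in the total but cancel out of the numerator).
    NPS = % promoters - % detractors, so the score ranges from -100 to +100.
    """
    if not scores:
        raise ValueError("no responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses
print(nps([10, 10, 9, 9, 9, 8, 7, 7, 5, 3]))  # → 30
```

Note that because passives drop out of the numerator, the score rewards strong loyalty rather than mere satisfaction, which is exactly the distinction the question is designed to capture.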
Do you think that surveys yield honest feedback?