One of our customers is a pure-play e-commerce business. They wanted to quantify interest in a radical new service option. To gain competitive advantage, they needed answers quickly — much faster than they could get them through their continuously running intercept survey, which invites 1 in every 100 visitors. To install code for a short, stand-alone survey, however, the research team would typically have had to wait for the next code push, a monthly event. In this case, they had just missed the August push.
The site, however, uses WA(RP)® tags, which effectively transform the website into an always-on research channel. Because WA(RP)® obviates the need for IT involvement, the research team did not need to sync its spot survey to the code-push schedule. In this instance, a 6-question branching survey went up the next day, a Friday. By Monday morning, the research team was looking at a dashboard containing 4,000 responses. Interest in the new service option was overwhelmingly positive.
But this is where the situation becomes interesting. As always, data begets more questions. Seeing the initial data, the CEO wanted to quantify the conversion lift the new service could create. So the survey was modified. This time, the focus was on quantifying the lift from visitors who researched a product on the site but went to a local store (in other words, to a competitor) to buy it. That modified survey went live on Thursday evening. Again, the team had a dashboard waiting for them on Monday morning that allowed them to quantify the potential conversion lift. The CEO made the decision to go forward with the new service that same week, confident that the decision was based on the best sample they could possibly gather: their own site visitors. Speed, sample quality and size, and data relevance make for confident decision-making.
–Roger Beynon, CSO, Usability Sciences