User feedback can be found in unlikely places

In 2013, customer experience firm Walker released a report in which it predicted that by 2020, user experience would be the key differentiator for brands, and price and product would become less important to users when choosing among different digital services.

Well, 2020 is here, and that prediction seems to have been pretty accurate. In today’s highly competitive digital economy, offering your customers a product they actually like to use is key. Unless you have a highly specific, one-of-a-kind application, there’s always another — possibly better — option for your users to switch to if you’re not providing what they need.

The testing phase of the software development life cycle may help find bugs in an application, but it can’t catch everything. To ensure that your users are actually enjoying the time spent in your application, you need to continually gather feedback from those users. 

“Even a program that is coded perfectly and meets the standards of the stakeholders may not work in the way certain users interact with the application,” said Nikhil Koranne, assistant vice president of operations at software development company Chetu.

A 2019 survey from Qualtrics revealed that 81% of respondents planned to increase their focus on customer experience in 2020. The report also showed that 71% of companies with customer experience initiatives saw positive value from those efforts.

By gathering user feedback, development teams can continually improve and fine-tune their applications, giving users the features they request and fixing issues that might not have been discovered during testing.

There are a number of ways that teams can collect user feedback, some of which occur naturally. User reviews and support questions, for example, come in on their own and can serve as feedback. If a team is getting a lot of support questions about a certain feature because users are finding it difficult to use, the team can use that information to determine what needs to be worked on in future releases. “With hundreds of questions a day, we keep a pulse on what people are asking for or where we could make parts of our product easier to understand. We aggregate these support conversations and share common themes to help the product team prioritize,” said Zack Hendlin, VP of Product at OneSignal.

Hendlin also said that his team collects feedback in other forms, such as data analysis, user research sessions, and conversations with customers.

The team analyzes user data, such as where users start an action and where they drop off. “Looking at points where there are big dropoffs in integrating us into their site, viewing delivery statistics, upgrading to a paid plan for more features, and the like allow us to optimize those parts of the user journey,” he said. Hendlin added that heatmapping tools such as Hotjar and analytics tools such as Google Analytics are useful for this type of user data analysis.
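The kind of drop-off analysis Hendlin describes can be sketched as a simple funnel computation. The step names below are hypothetical stand-ins for a user journey, not OneSignal's actual events:

```python
# A minimal sketch of funnel drop-off analysis. Each event records
# which funnel step a user reached; the step names are illustrative.
from collections import Counter

FUNNEL = ["visited_docs", "added_sdk", "viewed_stats", "upgraded_plan"]

def drop_off_rates(events):
    """Return the share of users lost at each transition in the funnel."""
    reached = Counter(step for _, step in events)
    rates = {}
    for prev, curr in zip(FUNNEL, FUNNEL[1:]):
        if reached[prev]:
            rates[f"{prev} -> {curr}"] = 1 - reached[curr] / reached[prev]
    return rates

events = [
    ("u1", "visited_docs"), ("u1", "added_sdk"), ("u1", "viewed_stats"),
    ("u2", "visited_docs"), ("u2", "added_sdk"),
    ("u3", "visited_docs"),
    ("u4", "visited_docs"), ("u4", "added_sdk"), ("u4", "viewed_stats"),
    ("u4", "upgraded_plan"),
]
print(drop_off_rates(events))
```

A spike in any one of these rates points at the part of the journey worth optimizing — the same signal a heatmapping tool surfaces visually.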

Itamar Blauer, an SEO consultant, also said he found Hotjar to be a helpful tool to track user behavior across sites. “The best ways that I found to monitor user experience were through heatmap analyses. I would use tools such as Hotjar to track user behavior across a website, identifying the portions of content they were most attracted to, as well as seeing at which part of the page they would exit.”

User research sessions are sessions in which a select number of users get access to an early version of a new release. According to Hendlin, this process can help answer the following questions: “Is the way we are planning to solve the problem actually solving the problem for them? Is it easy to use without needing explanation? Are there needs or desires they have that we haven’t thought about?”

User research sessions are also referred to as user acceptance testing (UAT), which often occurs in the last phase of the development cycle, explained Chetu’s Koranne.

According to Koranne, UAT is typically handled by the project management team. The team is responsible for setting up the parameters of the testing environment, such as the testing group and script of commands. This team then delivers the results of testing back to the developers. Koranne recommends that beta release participants be selected carefully and thoughtfully. 

“The ideal testing group that project managers are looking for would consist of third-party, real-world users with relevant experience,” said Koranne. “These types of users will be able to maneuver through the programs without any preconceived notions of how the process should work, and approach each action the same way other end-users would operate. Stakeholder testing is important as well, as you want to make sure that the program is running as it was originally proposed, but the real value comes from end-users that the application is being built for. When it comes to what kind of end-users are preferred over others, project managers want those with industry experience in the function the application is being developed for, rather than a completely random sample. However, users from a diverse set of company backgrounds are preferable to ensure that the program is accounting for operational use from a multitude of end-users.”

The final way that Hendlin’s team at OneSignal gathers feedback is by having actual conversations with their customers. By engaging with customers, product teams may learn where there are disconnects between users and the products, and what they can do to fix those.  

“Really understanding users comes from talking to them, observing how they interact with the product, analyzing where they were trying to do something but had a hard time, and seeing where they need to consult documentation or ask our support team,” said Hendlin. “There was a Supreme Court justice, Louis Brandeis, who said ‘There is no such thing as great writing, only great rewriting,’ and working on building a product and improving it is kind of the same way. As you get user feedback and learn more, you try to ‘re-write’ or update parts of the product to make them better.”

Anna Boyarkina, head of product at online whiteboard tool Miro, said that Miro also gathers feedback from a variety of sources, including support tickets, surveys, customer calls, and social media.

Product integration teams
With information coming in from all of these different sources, it’s important to have a process for handling and sorting through it all. Boyarkina explained that at Miro, there is a product integration team tasked with consolidating all of this feedback and then handing it off to the appropriate team. All of their feedback gets put into a customer feedback lake and then tagged. “For instance if it is a support ticket, it is tagged by a support team,” she said. “If it is from the call of the customer, there is a special form which is submitted by a customer success representative or sales representative, which also contains a tag.”
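The lake-and-tag workflow Boyarkina describes can be sketched as a small routing structure. The source names, tags, and team names below are illustrative assumptions, not Miro's actual taxonomy:

```python
# A minimal sketch of a tagged "feedback lake": items from any source
# carry a tag, and a router hands each item to the team owning that tag.
from dataclasses import dataclass, field

@dataclass
class Feedback:
    source: str   # e.g. "support_ticket", "customer_call", "survey"
    tag: str      # applied by the support or customer success rep
    text: str

@dataclass
class FeedbackLake:
    items: list = field(default_factory=list)
    routes: dict = field(default_factory=dict)   # tag -> owning team

    def ingest(self, item: Feedback):
        self.items.append(item)

    def route(self):
        """Group feedback by owning team; untagged routes fall to triage."""
        by_team = {}
        for item in self.items:
            team = self.routes.get(item.tag, "triage")
            by_team.setdefault(team, []).append(item)
        return by_team

lake = FeedbackLake(routes={"billing": "payments team", "boards": "canvas team"})
lake.ingest(Feedback("support_ticket", "billing", "I was charged twice"))
lake.ingest(Feedback("customer_call", "boards", "Templates are hard to find"))
lake.ingest(Feedback("survey", "exports", "PDF export fails"))  # no route -> triage
print({team: len(items) for team, items in lake.route().items()})
```

The point of the structure is that ingestion is uniform across sources, while the tag alone decides which team sees the item.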

Koranne believes that feedback needs to be considered from a cross-functional perspective, because many applications in business don’t just affect a single team. For example, an HCM (human capital management) application might be used across HR, payroll, and IT, so all three of those teams would need to be involved in the process of gathering feedback. “Conversely, the project management/development team would need a cross-functional setup as the feedback given may affect multiple layers of the application,” said Koranne.

According to Koranne, an ideal cross-functional team would consist of a business analyst, tester, and UI/UX developer to address feedback items. 

Prioritizing features
Once the information is with the appropriate team, that team needs to decide what to do with it. At OneSignal, the product team goes through and ranks feature requests on five factors, Hendlin said: frequency, value, financial benefit, strategic importance, and engineering effort.

Frequency is related to how common a request is. For example, if a similar request is coming in hundreds of times, then that feature would be more highly prioritized than a feature that is only causing issues for a handful of users. 

They also look at the impact a feature has on users. For example, a minor change to the UI would rank low, while seamless data syncing would rank high, Hendlin explained.

The next two factors are considerations for the business side of things. The team considers what financial benefit there is to fixing something. In other words, would users be willing to pay for the feature? The team also considers whether a new feature drives new growth opportunities for the company.

Finally, the team looks at how hard the feature would be to build, as well as how much time and effort it would take to build it. 

“Once we weigh these attributes for a feature, we decide what to take on…and just as importantly, what not to,” Hendlin said. 
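The weighing described above can be sketched as a simple score per feature. The equal weights, 1–5 scales, and feature names here are assumptions for illustration, not OneSignal's actual formula:

```python
# A hedged sketch of multi-factor feature ranking. Engineering effort
# counts against a feature; all other factors count for it.
def score(feature):
    """Higher is better; weights are an assumed, tunable starting point."""
    w = {"frequency": 1.0, "value": 1.0, "financial": 1.0,
         "strategic": 1.0, "effort": -1.0}
    return sum(w[k] * feature[k] for k in w)

requests = [
    {"name": "seamless data sync", "frequency": 5, "value": 5,
     "financial": 4, "strategic": 4, "effort": 4},
    {"name": "minor UI tweak", "frequency": 2, "value": 1,
     "financial": 1, "strategic": 1, "effort": 1},
]
ranked = sorted(requests, key=score, reverse=True)
print([f["name"] for f in ranked])  # -> ['seamless data sync', 'minor UI tweak']
```

Deciding "what not to take on" then falls out naturally: anything below a cutoff score stays off the sprint.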

Gathering and implementing feedback is an ongoing process
The team at OneSignal works in weekly sprints, Hendlin explained. Before each sprint, the team meets and determines whether something that came up through user feedback ranks higher than what they had been planning to work on during that sprint. “We try to avoid major changes midway through a sprint, but we will re-prioritize as we learn more from our customers,” said Hendlin. 

Boyarkina’s team also prioritizes the information gathered from customer feedback. She explained that some feedback requires immediate attention: critical issues fall under a 24-hour SLA, so fixes for them are implemented right away. Feature requests, by contrast, get moved into a backlog and discussed.

The product team at Miro gets together on a biweekly basis and is given a report with user insights. On top of that, it holds monthly user insights meetings where it dives into what users are saying and any trends that are occurring.

When considering whether to implement feature requests, there are a few things Miro teams look at. First, they determine whether a feature aligns with their existing product roadmap. They also look at the frequency of a particular request. “If we see that it is something that appears more frequently and it is something that appears really painful, we are taking it into the next development cycle,” said Boyarkina. 

As soon as the team has a prototype of that feature ready, users who requested that feature are invited to participate in a beta for it, Boyarkina explained. Those users are also informed when the feature is actually released. “If we know who requested a certain feature we usually send a ‘hey we released this, you asked for that’ and it’s usually really pleasant for people,” said Boyarkina.  

Challenges of gathering user feedback
One of the obvious challenges of gathering and interpreting user feedback is being able to consolidate and sort through information coming in from different sources. 

“Even when an organization is able to successfully set up the technological capability (not to mention the cultural support) for gathering continuous user feedback, it’s another task entirely to smartly parse that information, synthesize the insights, determine courses of action, and then execute on them,” said Jen Briselli, SVP of experience strategy & service design at Mad*Pow, a design consulting company.

Briselli went on to explain that viewing this as a challenge is a bit of a red herring. “Figuring out the most successful way to procure, interpret, and act on this feedback is less a function of logistics around how, and far more critically a function of internal alignment around why,” said Briselli. 

She believes that companies with the most success around this are the ones in which there is stakeholder buy-in to the idea. “Solving for the logistics of data collection and response, and translation for user requirements and development, all fall more naturally out of the process when leadership has bought in and invested in the outcome. From there, finding the methods that fit existing workflows and building the skill sets necessary for its execution resolve more easily,” she said. 

Mehdi Daoudi, co-founder and CEO of Catchpoint, agrees that a big challenge is the vast amount of data, but he sees this as an opportunity more than a challenge. “I think the fact that we have these different data sources makes the data even more accurate because it allows us to not only connect the dots but validate that the dots are even correct,” said Daoudi. “So I think the biggest challenge is the amount of data, but I think there is an opportunity there just because of its richness as well.”

User sentiment analysis
The process of gathering user feedback should be tied closely to application performance monitoring (APM). According to Catchpoint, many APM and network monitoring tools forget about the “last mile of a digital transaction” — the user.

“Why do you buy expensive monitoring tools if it’s not to catch problems before users are impacted? That’s the whole point,” said Daoudi.

User sentiment analysis is another element of user monitoring. It uses machine learning to sort through all of the incoming feedback and interpret how a user might be feeling. “Because we are in an outcome economy, in an experience economy, adding the voice of the users on top of your monitoring data is critical,” said Daoudi.
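To make the idea concrete, here is a deliberately toy stand-in for the ML classifiers such systems actually use: a word-list scorer that labels each feedback item. The word lists are invented for illustration; a production system would use a trained model, not keyword matching:

```python
# A toy sentiment scorer: count positive vs. negative words per item.
# Real sentiment analysis uses trained ML models; this only shows the
# input/output shape of the task.
POSITIVE = {"love", "great", "easy", "fast"}
NEGATIVE = {"slow", "broken", "confusing", "crash", "down"}

def sentiment(text: str) -> str:
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("love how easy this is"))
print(sentiment("the site is down and broken"))
```

Layered over monitoring data, a stream of such labels lets a team see not just that a metric dipped, but whether users actually felt it.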

As part of this user sentiment analysis, Catchpoint has a free website called WebSee.com, which collects and analyzes user sentiment data. The goal of WebSee is to enable organizations to better respond to service outages. End users can self-report issues they have on various sites, and that data is aggregated and verified by Catchpoint. 

According to Daoudi, user sentiment is a big part of observability. “People talk about observability but what are we observing? Machines? Are we observing users? It’s actually a combination of both these things and we are all trying to make the internet better for our customers and our employees and so observability needs to take into account multiple things including user sentiment.”

6 best practices for integrating design 
According to a recent study from Limina, only 14% of organizations consider themselves to be Design-Integrated companies. Design-Integrated organizations, according to Limina, are those that are “embedding a human-centered design culture into their organizations to gain both exceptional customer experiences as well as business and financial goals.”

“When the entire organization is focused on the needs of the user and the value their product delivers, better products are created and brought to market,” said Jon Fukuda, co-founder and principal of Limina. “Strong alignment among cross-functional teams creates higher efficiency, working together toward the common goal of creating higher quality user-centered products, which leads to cost savings and increased revenue.”

According to Limina, there are three major barriers to becoming Design-Integrated, relating to C-level support, human-centered design culture, and alignment of operations and metrics. The company offered up six best practices that companies should follow if they wish to become a Design-Integrated business:

  • “Embed a human-centered design culture in every corner of the company, starting with the C-suite
  • Establish a common language to drive understanding, mitigate risks, and improve processes
  • Integrate design resources into relevant business functions
  • Capture specific metrics and manage them to bridge organizational divisions and drive business outcomes
  • Create reusable artifacts and repeatable processes
  • Invest in artifacts, then processes, then systems”

Source: SD Times