What is a listening post?
And why is it useful in user research?
When I asked Google, the definition of ‘listening post’ was “a station for intercepting electronic communications”. What I mean when I use the term is any way that a business captures user feedback (feedback that is freely given! I don’t mean shady companies that read your emails!).
Most tech businesses have a multitude of ways of soliciting user feedback, either directly or indirectly, through platforms such as Trust Pilot, asking users to give an NPS rating, or through their customer services department.
Basically, a listening post is any way you can check in and ask: “What are people saying about us?”
Listening posts as a user research tool
As a researcher, any source of user data is a welcome addition to your tool kit. Using feedback from these sources can often alert you and your team to problems occurring on the user journey, or add additional weight to problems you are aware of (which can be useful to leverage if you need stakeholder buy-in to get approval to tackle a specific area next).
Knowing how to collate, review, and report themes from listening posts is a good skill to develop, so I’ll run you through how my research team and I do it.
Caution
Firstly, a quick word of warning. Feedback from these channels should be approached with caution: complaints channels are full of very loud negative voices, and feature requests in NPS comments can wander into “asking for a faster horse” territory (as the famous Henry Ford quote goes).
Being aware of the type of feedback you’re getting, and from whom, should enable you to be objective enough to know how to apply the insight.
Audit
A good place to start is an audit of where user feedback comes from in your business, who ‘owns’ it, and how they process it. It won’t always sit directly with product, design, or research, so that’ll mean going out and making new friends in your organisation (which is never a bad thing!).
If for any reason your business isn’t currently collecting feedback, or doesn’t have much of it, you could instigate setting up some of the following methods.
Here’s a quick list of potential listening posts to look out for:
NPS - usually a pop-up or exit survey asking users to rate on a scale of 0-10 how likely they would be to recommend your product or service, often with a free text field
Trust Pilot - a third party review gathering service allowing users to leave feedback comments and ratings on a wide variety of businesses
App Store/ Play Reviews - reviews left on the Android and iOS app download locations
Customer Services - a department in your business, big or small, that directly faces customers, dealing with their complaints and enquiries
Intercom/ Contact Us - another method of allowing customers to speak to your business
Feedback Button - a button somewhere on your website that allows users to submit feedback
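If you ever need to compute the NPS figure yourself from raw survey responses, the arithmetic is simple: the percentage of promoters (scores of 9-10) minus the percentage of detractors (0-6). A minimal Python sketch:

```python
def nps(ratings):
    """Compute a Net Promoter Score from 0-10 ratings.

    Promoters score 9-10, detractors 0-6 (7-8 are passives and don't
    count either way). NPS ranges from -100 to +100.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))
```

For example, `nps([10, 10, 9, 7, 2])` gives 40: three promoters and one detractor out of five responses.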
Aggregate
The next step to working with listening posts is to find a simple and easy way to aggregate all the insight that comes in through these channels.
In my experience at least, this can range from super simple to really complicated, depending on what services you’re working with. Some tools allow you to connect their output to another product, such as Slack or Airtable, which can be a very simple way of sucking all the data into one location to make reviewing easier. Others, not so much!
There is always a way to work around this though.
For example, whilst working at Zoopla, our customer service team were exposed to product-specific user feedback through calls and emails, but weren’t recording it anywhere. I worked with the Head of CS to develop a simple Google Form that could be easily completed by team members whilst dealing with a customer. I then linked the output from the form to a Slack channel open to everyone in product. This democratised that insight, giving anyone who was interested direct access to the user voice. I checked this channel every morning whilst having my coffee to keep myself abreast of the thoughts and feelings of our users.
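As a rough sketch of what that kind of pipe can look like: a small function that turns one feedback-form response into a Slack message payload. The field names here are hypothetical, and posting the payload is then a single HTTP call to a Slack incoming-webhook URL for your channel.

```python
def format_feedback_message(record: dict) -> dict:
    """Build a Slack incoming-webhook payload (a dict with a 'text' key)
    from one feedback-form response. Field names are illustrative."""
    comment = record.get("comment", "").strip()
    source = record.get("source", "unknown channel")
    area = record.get("product_area", "untagged")
    return {
        "text": (
            f"*New customer feedback* via {source}\n"
            f"> {comment}\n"
            f"Product area: {area}"
        )
    }
```

Sending it would then look something like `urllib.request.urlopen(...)` with the payload JSON-encoded against your webhook URL; the point is that one tiny glue script can democratise a whole feedback channel.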
Review
Once you have audited your listening posts and centralised the data coming in, the next stage is to implement a review process. You’ll need to decide how frequently you review the data, and this will largely depend on the type of listening post. For example, I have found that NPS free text and Feedback Button data can be useful for alerting product teams to bugs that might have slipped through, or other issues critically impacting the user experience. You might therefore decide to keep a closer eye on these areas than, say, Trust Pilot or App Store reviews. Having a live feed you can glance over daily, such as a Slack channel, is a great way to catch those big events as they happen, whilst continuing regular full reviews at less frequent intervals.
You’ll need to decide on a tag or attribute system so that you can keep track of what the feedback relates to, and easily quantify your results. My suggestion would be to tag feedback with the product, stage of the user journey, and user type it’s pertinent to. You might also wish to classify data by whether it’s a complaint, a suggestion, etc. If you have another team recording the data for you (as with my customer services Google Form example), you could get those people to do some of the tagging for you, although be aware this could introduce some bias or skew in the data, and you’ll need to ensure those involved are trained up on your process. Organising your listening post data in this way will allow you to pull useful stats such as: “In April 80% of user feedback comments were from renters who were experiencing difficulty with contacting an estate agent”, and even create useful graphs and tables.
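To make the stats side concrete, here’s a minimal Python sketch (the tag names and records are invented for illustration) that counts tagged feedback and turns the counts into percentage shares:

```python
from collections import Counter

def theme_share(records, key):
    """Return each tag value's share of the feedback as a whole-number
    percentage, e.g. {"renter": 80, "landlord": 20}."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {tag: round(100 * n / total) for tag, n in counts.items()}

# Hypothetical tagged feedback records for one month:
april = [
    {"user_type": "renter", "journey_stage": "contact agent", "class": "complaint"},
    {"user_type": "renter", "journey_stage": "contact agent", "class": "complaint"},
    {"user_type": "renter", "journey_stage": "search", "class": "suggestion"},
    {"user_type": "renter", "journey_stage": "contact agent", "class": "complaint"},
    {"user_type": "landlord", "journey_stage": "listing", "class": "enquiry"},
]
```

With this, `theme_share(april, "user_type")` gives `{"renter": 80, "landlord": 20}`, and the same function works for any tag dimension you’ve recorded, which is exactly the sort of figure that feeds a monthly graph or headline stat.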
One other thing that’s really useful to think about in the review stage, when tagging and classifying listening post data, is what I call ‘half-life’, or data decay: how long the data you have captured will remain relevant before it becomes obsolete. If, for example, you redesign an entire page of your website, or completely change some functionality in your product, any data from before the change may no longer be relevant going forward. You may, however, still want to call on that data if you ever need to reference the ‘before’ state. So my advice is: don’t delete it, just come up with a useful way of tracking half-life, and be observant of ‘old’ insight when looking back over time.
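One lightweight way to track half-life, sketched in Python (the change log and area names are hypothetical): record when each product area last changed significantly, and flag feedback captured before that date as ‘decayed’ rather than deleting it.

```python
from datetime import date

# Hypothetical log of when each product area last changed significantly.
LAST_MAJOR_CHANGE = {
    "search": date(2024, 3, 1),
    "listings": None,  # no major change recorded
}

def is_current(area: str, captured_on: date) -> bool:
    """Feedback captured before an area's last major change has 'decayed':
    keep it for before/after comparisons, but exclude it from live themes."""
    changed = LAST_MAJOR_CHANGE.get(area)
    return changed is None or captured_on >= changed
```

A filter like `[r for r in records if is_current(r["area"], r["date"])]` then gives you the live data set, while the older records stay available for benchmarking the ‘before’ state.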
Reporting
So far so good: you know exactly what’s coming in and from where, and you have a streamlined and efficient way of reviewing and coding your listening post data to make it easy to use and understand. Now, the most important bit: you need to get those insights in front of the people who can make use of them.
A great way to do this is a monthly round-up: a report where you can share those juicy insights and give your team a feel for what users have been saying over the last 30 days.
I’m super passionate about communicating research insights and amplifying the user voice, so I’m really proud of a method of communicating listening post knowledge I developed whilst working for Zoopla, a UK-based property portal. I started producing a monthly magazine, in both digital and print formats, to share with people in product, but also across the business in marketing, sales, finance, HR, and our customer service team (who reported back that they loved seeing the results of their efforts capturing this data for us!). The Z/Insight Magazine (as I called it) contained findings from across the user research department, from all teams and all studies, but specifically featured a section focused on listening post insights - often with quotes directly from free text fields.
Another way to report listening post insights is alongside product-specific research reports. Often, if one of my team is writing up formative research findings, they’ll include quotes from the listening post data set in their extant review section. Listening post data can also be used as a benchmark: as I mentioned with the data ‘half-life’ attribute, in the summative phase of a project you can look back at feedback over a period of time and see how it has changed for better or worse, reporting the before and after user comments.
To recap
Listening posts are any method you have of capturing user feedback, and are a good way of checking in and asking: “What are people saying about us?”
You’ll first need to audit where this feedback comes from.
Then you’ll need to aggregate all that data to a central (and if possible visible) location.
You’ll need to set up a review process so that you can classify and give your data attributes, allowing you to find themes.
And most importantly, find an engaging and relevant way to report all that insight back to the people who need it.
And finally, things to watch out for
Generally, the research team shouldn’t own the responding part of listening post work. It is absolutely essential (in my opinion) that someone takes the time to respond to negative Trust Pilot reviews and reply to emails from your ‘Contact Us’ channel. However, in my experience, it’s best for this responsibility to sit with a team such as customer services.
Why?
Your customer services team are trained to troubleshoot and help customers and users overcome their issues. They are better placed to deal with customer enquiries and respond to complaints, and have the time to do so. As researchers, our skills are focused on reviewing, analysing, and reporting insights. If your team is anything like mine, you’ll already be snowed under with research work, and won’t have the space to take on this responsibility as well.
Although we can make use of customer feedback, I’d advise, if possible, that you don’t take on the responsibility of responding to it.