Authenticity is at the forefront of every consumer buying decision, making reviews a barometer of both a product’s and a company’s trustworthiness. Forty-nine percent of consumers trust reviews as much as personal recommendations from friends and family, a level of confidence with direct revenue implications.
Marketers have made leaving product reviews extremely accessible in order to capitalize on their power, pulling from multiple sources to stock e-commerce sites with first-hand accounts. This gives consumers a variety of ways to share their experiences on a brand’s most visible platform: its website. However, those same channels offer ample opportunities to post illicit, graphic and harmful content that can erode shoppers’ trust.
Without proper guardrails in place, reviews filled with profanity and inaccurate brand representation will overwhelm retail sites and degrade the user experience. Content moderation is necessary to ensure reviews remain helpful and harmless, preserving the seamless experience that retains customer trust.
Careless Reviews Aren’t Just Annoying, They’re Dangerous
Recent research from the American Marketing Association suggests customers who receive an incentive are more likely to write positive reviews. As brands rush to offer those incentives, consumers flock to write reviews regardless of their actual experience, and many of those reviews are careless. Instead of a thoughtfully crafted, honest review, a shopper might copy and paste song lyrics with hidden explicit language or leave a placeholder like “lorem ipsum” just to fill the requirement.
A lack of viable reviews makes consumers question a brand’s ability to complete orders and deliver as promised. Moderation services, however, can pick up on these types of reviews, using artificial intelligence, human moderators or both, depending on the complexity of the threat, filtering out useless filler and ensuring reviews remain a trusted source for consumers.
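As a concrete illustration, here is a minimal Python sketch of how that triage might work. The placeholder patterns, profanity list and thresholds are invented for the example; a real moderation service relies on far larger lexicons and trained classifiers, not hand-written rules like these:

```python
import re

# Illustrative placeholder patterns and profanity list; stand-ins only.
PLACEHOLDER_PATTERNS = [
    re.compile(r"lorem ipsum", re.IGNORECASE),
    re.compile(r"^\s*(test|asdf+|n/?a)\s*$", re.IGNORECASE),
]
PROFANITY = {"damn", "hell"}  # hypothetical list for the example

def triage_review(text: str) -> str:
    """Return 'reject', 'human_review' or 'approve' for one review."""
    # Obvious filler is rejected outright.
    if any(p.search(text) for p in PLACEHOLDER_PATTERNS):
        return "reject"
    words = re.findall(r"[a-z']+", text.lower())
    # Known profanity triggers an automatic rejection.
    if set(words) & PROFANITY:
        return "reject"
    # Very short or highly repetitive text is ambiguous, so a human
    # moderator decides -- the AI-plus-human split described above.
    if len(set(words)) < 3 or len(set(words)) / max(len(words), 1) < 0.5:
        return "human_review"
    return "approve"

print(triage_review("lorem ipsum dolor sit amet"))   # reject
print(triage_review("good good good good"))          # human_review
print(triage_review("Fits well and shipped fast."))  # approve
```

The middle "human_review" outcome is the key design choice: automation clears the unambiguous cases at scale, while borderline content is escalated to a person rather than guessed at.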
It’s important to remember that moderation services aren’t meant to remove critical reviews. Retailers never want to see their brand portrayed in an unfavorable light, but these reviews are necessary for authenticity. Forty-five percent of consumers are suspicious of reviews that are over the top in their praise, so allowing only positive reviews to appear will give shoppers reservations.
The Medium Matters
Brands are moving beyond text to allow consumers to submit video reviews, giving other customers a look at real-life product or service usage. Videos are even more impactful than words. Seventy-seven percent of people who have seen a testimonial in this format say it motivated them to make a purchase. Given that influence, it’s imperative that these videos stay on brand and don’t contain illicit content, whether lurking in the background or intentionally placed.
Video submissions are far more nuanced than typical text reviews. While AI can assist by analyzing video frames and audio, this content type leans more heavily on human moderation, as harmful material may not be as clear-cut as it is in text. Bad actors are always devising new and creative ways to bypass AI. Keeping humans in the loop to review any content the AI flags as suspicious, such as a product being used inappropriately or R-rated or harmful material hidden in the background, ensures all forms of reviews are safe for users.
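To sketch what that human-in-the-loop routing can look like, here is a small Python example. The classifier scores, field names and thresholds are assumptions made for illustration, not any particular vendor’s pipeline:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VideoReview:
    video_id: str
    frame_scores: List[float]  # per-frame probability of harmful imagery
    audio_score: float         # probability from an audio/transcript model

def route_video(review: VideoReview,
                auto_reject: float = 0.95,
                needs_human: float = 0.40) -> str:
    """Route a video review based on its worst AI signal.

    The thresholds are invented for this sketch; real systems tune
    them per category and revisit them as bad actors adapt.
    """
    worst = max(review.frame_scores + [review.audio_score])
    if worst >= auto_reject:
        return "reject"        # unambiguous violations come down immediately
    if worst >= needs_human:
        return "human_review"  # borderline frames go to a moderator queue
    return "approve"

clip = VideoReview("rev-42", frame_scores=[0.05, 0.62, 0.10], audio_score=0.08)
print(route_video(clip))  # human_review: one suspicious frame, a person decides
```

Routing on the single worst signal, rather than an average, reflects the point above: one harmful frame hidden in an otherwise benign clip is enough to warrant a human look.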
Sixty-three percent of consumers expect brands to address inappropriate or harmful content on their pages within an hour of it being posted. If retailers want to provide the best experience possible, they need to be armed with every method of moderation.
Moderation Means Consumer Insights
Moderation can also provide insight into what consumers want. For example, if multiple reviews mention loving the product and wishing it came in more colors, retailers know exactly where to focus future efforts. Likewise, if multiple people report an item running smaller than expected, the size guide can be corrected for future purchases. There are multiple ways to gather keyword data from reviews through moderation, allowing brands to make informed decisions about consumer pain points.
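One simple version of that keyword gathering can be sketched in Python. The theme phrases below are hypothetical examples; production pipelines often use topic modeling or embeddings rather than fixed phrase lists:

```python
import re
from collections import Counter

# Hypothetical themes a retailer might track, keyed to example phrases.
THEMES = {
    "sizing": re.compile(r"runs small|smaller than expected|too tight", re.I),
    "color_requests": re.compile(r"more colors|other colou?rs", re.I),
    "shipping": re.compile(r"arrived late|slow shipping|arrived damaged", re.I),
}

def theme_counts(reviews: list[str]) -> Counter:
    """Count how many reviews mention each tracked theme."""
    counts = Counter()
    for text in reviews:
        for theme, pattern in THEMES.items():
            if pattern.search(text):
                counts[theme] += 1
    return counts

reviews = [
    "Love it, wish it came in more colors!",
    "Runs small, order a size up.",
    "Smaller than expected but great quality.",
]
print(theme_counts(reviews))  # Counter({'sizing': 2, 'color_requests': 1})
```

Even counts this crude show how moderation doubles as a listening channel: two sizing complaints out of three reviews is a clear signal to fix the size guide.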
With customers placing a premium on user-generated content (UGC) like reviews, retailers need to facilitate these channels while ensuring all first-hand accounts are safe for shoppers and accurate to the brand. One in three customers will leave a brand they love after just one bad experience, so reviews that are unmoderated, meaningless or devoid of first-hand insight will result in frustrated customers and money out the door.
The most advantageous form of UGC in retail can easily become destructive to a positive customer experience without the proper guardrails. Moderation services are imperative if retailers want to maintain their brand’s authenticity and trust in a crucial part of the customer journey.
Joshua Buxbaum is the co-founder and chief growth officer at WebPurify, a text, image and video moderation tool.
Joshua Buxbaum is the Co-founder and Chief Growth Officer at WebPurify. With over 17 years of expertise in trust and safety, he fuels WebPurify’s hands-on approach to client relations with a strong commitment to customer satisfaction. He works closely with professionals across numerous industries, helping them maintain a stellar brand reputation and take responsibility for keeping kids safe online. Buxbaum graduated from The George Washington University with a degree in communications. He is a board member and active volunteer for United in Harmony, a nonprofit serving homeless children in Los Angeles.