Why Yelp Is Fighting Fake Reviews with Public Shaming

In its quest to keep its reviews free of bribery and bias, Yelp last week turned to a new weapon: public shaming. The company announced Thursday that businesses found attempting to buy positive reviews will have their pages branded with a consumer advisory informing readers of their chicanery.

The value of social review sites like Yelp (YELP) rests on the notion of the wisdom of crowds: Rather than relying on a single "expert" restaurant critic or hotel reviewer, you draw on the opinions of dozens or hundreds of regular folks who have patronized a given establishment. While there will inevitably be a few outliers on both ends of the spectrum, averaging the reviews should give you a pretty good sense of the place's quality.

But that only works if all the reviewers are honest and objective. If businesses are writing glowing reviews of themselves -- or paying others to do so -- it introduces bias into the system and undermines its credibility. That's why Yelp has taken steps in the past to keep its reviews clean, implementing a filter that removes questionable reviews from the calculation.

But last week's announcement takes that fight to a new level: If Yelp finds evidence that a restaurant, hotel or other establishment has attempted to pay for positive reviews, it will get slapped with the review site equivalent of the scarlet letter. The mark of shame will stay in place for 90 days (assuming Yelp receives no evidence of further review-buying), and will present users with evidence of the business's attempts to solicit positive reviews. Stephanie Ichinose, a spokeswoman for the company, says Yelp employees will investigate by responding to solicitations offering to pay for reviews -- essentially going undercover to catch businesses in the act.

A Growing Problem

While Yelp already had a filtering program in place, it's not hard to see why the company decided that more decisive action was needed to address the growing problem of fake reviews. A recent report by research firm Gartner estimated that by 2014, 10% to 15% of all social media reviews will be fake -- a figure that includes paid Facebook Likes and Twitter follows. Just as important, consumer awareness of the issue is on the rise, says Jenny Sussin, the report's author.

"I think that we've reached a critical mass of people that just distrust social review sites in general," she says. Yelp, in other words, needed to show concerned users that it was addressing the problem beyond its existing filters.

And those existing filters and quality control procedures -- at Yelp and other sites -- just aren't adequately equipped to detect every fake review. A Cornell University study assessing the fake review problem found that human test subjects performed poorly when asked to spot deceptive reviews, so simply having editors hunt for suspicious-looking raves isn't a reliable option.

Myle Ott, the paper's author, has worked to develop an algorithm capable of detecting deceptive reviews based on certain cues found in the text of the review -- as he explains, "When people are lying, it changes the way they use language." But he says that as far as he's aware, no company is currently deploying such an algorithm to police its reviews.
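
Ott's model itself isn't spelled out here, but the general approach he describes -- training a classifier on the word patterns of reviews known to be truthful or deceptive -- can be sketched roughly as follows. This is a minimal illustration using the scikit-learn library; the sample reviews, features and model choice are assumptions made for demonstration, not the Cornell system.

```python
# Rough sketch of a text-cue classifier for deceptive reviews.
# The tiny training set and the model choice below are illustrative
# assumptions, not the system described in the Cornell study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = deceptive, 0 = truthful
reviews = [
    "Absolutely the best hotel I have ever stayed at, perfect in every way!",
    "My husband and I loved every minute, a truly luxurious experience!",
    "Room was clean, the bathroom was small, and parking cost $30 a night.",
    "Check-in took 20 minutes; the bed was comfortable but the hallway was noisy.",
]
labels = [1, 1, 0, 0]

# Word and word-pair frequencies stand in for the language cues Ott describes
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

# Score a new review: estimated probability that it reads as deceptive
print(model.predict_proba(["A flawless stay, everything was amazing!"])[0][1])
```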

Circumstantial Evidence

Without such an algorithm, says Sussin, companies are left with systems that make use of clues outside the text of the review. For instance, she says, Yelp tends to filter out reviews from accounts that look like they were created for the sole purpose of reviewing that establishment.

"Yelp does a lot of things that I recommend consumers do," she says. "If you don't fill out your entire profile when you register, it automatically flags you." She adds that the more reviews you have under your account, the less likely you are to be filtered, as the system will view you as a member in good standing rather than as a suspected dummy account. (Yelp would not disclose the details of its filter, noting that doing so would allow people to more easily circumvent it.)

Another strategy is to determine the source of a review by tracking the reviewer's IP address. If a company detects that reviews from multiple user accounts all originate from a single address, it suggests that someone has set up numerous dummy accounts from a single computer. In announcing its consumer advisory program last week, Yelp also said it plans to start using IP address-tracking to make sure that one person isn't using multiple accounts to influence an establishment's rating.
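
The idea is simple enough to sketch: group a business's reviews by the IP address they were submitted from, and flag any address that appears to sit behind several different accounts. The data layout below is an illustrative assumption, not Yelp's implementation.

```python
# Group reviews of one business by submitting IP address and flag addresses
# used by multiple accounts. Purely a sketch of the concept described above.
from collections import defaultdict

reviews = [
    {"account": "user_a", "ip": "203.0.113.7", "stars": 5},
    {"account": "user_b", "ip": "203.0.113.7", "stars": 5},
    {"account": "user_c", "ip": "203.0.113.7", "stars": 5},
    {"account": "user_d", "ip": "198.51.100.2", "stars": 3},
]

accounts_by_ip = defaultdict(set)
for r in reviews:
    accounts_by_ip[r["ip"]].add(r["account"])

# Several distinct accounts praising the same place from one address is
# the dummy-account pattern the IP tracking is meant to catch.
suspect_ips = {ip for ip, accounts in accounts_by_ip.items() if len(accounts) >= 3}
print(suspect_ips)  # {'203.0.113.7'}
```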

But perhaps the most effective strategy is simply to confirm that the person writing the review has actually visited the establishment or purchased the product he or she is reviewing. On Amazon.com (AMZN), for instance, reviewers who actually purchased a product through the site are tagged with an "Amazon Verified Purchase" label next to their review. Expedia (EXPE), meanwhile, verifies that people who review hotels booked their stays through the site.
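
The mechanics of that kind of verification are straightforward: a review earns the label only if the account behind it has a matching purchase on record. The sketch below illustrates the concept; the field names are assumptions, not Amazon's or Expedia's actual systems.

```python
# Concept sketch of purchase verification in the style of a "Verified
# Purchase" badge. Data structures are illustrative assumptions.
def verified_label(review, orders):
    purchased = any(
        o["account"] == review["account"] and o["product"] == review["product"]
        for o in orders
    )
    return "Verified Purchase" if purchased else ""

orders = [{"account": "user_a", "product": "B00123"}]
review = {"account": "user_a", "product": "B00123", "text": "Works great."}
print(verified_label(review, orders))  # Verified Purchase
```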

But neither Yelp nor social travel reviewer TripAdvisor (TRIP) makes restaurant reservations or books hotel stays -- they're purely review sites. That means anyone can claim to have visited a restaurant or spent a weekend in a hotel, and the sites have no way of confirming they're being truthful.

"In some sense, there's a cost to posting fake reviews -- a tangible cost, if you have to book a room," says Ott. "But since [Yelp and TripAdvisor] can't increase the cost by requiring people to book through them, this [consumer advisory] is raising the reputation cost." The hope is that the shame of having the red flag show up on a business's page will outweigh the half-star bump that it can get by paying for a bunch of glowing reviews. ­­


Indeed, TripAdvisor already has such a system in place -- on one flagged hotel's page, for instance, an advisory warns: "TripAdvisor has grounds to investigate that individuals or entities associated with or having an interest in this property may have attempted to interfere with traveler reviews and/or the popularity index for this property."

False Accusations?

The danger with such a strategy is that innocent businesses could get caught up in the dragnet, and the last thing Yelp wants is to falsely accuse a business of bad practices. And now that Yelp's new system has been publicized, it's possible some businesses might attempt "false flag" operations, in which a restaurant proprietor attempts to frame a rival for soliciting reviews.

Ichinose says that all cases are extensively researched before a consumer advisory is posted, lessening the likelihood of a false accusation. And a spokesperson for TripAdvisor says that the advisories, which are found on "only a fraction of a percent of the hotels" on the site, are only used "in cases where there is firm evidence supporting suspicions of abuse."

Still, Sussin says that the program could backfire if there are cases of innocent businesses getting branded with Yelp's new scarlet letter.

"What if they get it wrong for one business?" she says, suggesting that other review sites will be watching to see how the program fares. "And if it's multiple local businesses who share a similar customer base, and Yelp develops a reputation for falsely flagging businesses, that could be a problem."

Matt Brownell is the consumer and retail reporter for DailyFinance. You can reach him at Matt.Brownell@teamaol.com, and follow him on Twitter at @Brownellorama.

