One human, one account — How Encointer can help tackle review and comment fraud

When trying to form an opinion about a product or service, we often find ourselves gazing at the stars on platforms like Amazon or TripAdvisor to help make up our minds. While user reviews and comments are the backbone of most Web2 Internet platforms, their veracity is being increasingly called into question by the growth of paid comment farms, social bots and natural language AI tools. Encointer offers a unique way for users to tackle the problem and take back control of online discourse.    

 

Backed by a tailwind of glowing five-star reviews, the Shed at Dulwich quickly became the top-rated eatery in London on TripAdvisor. Through a polished website and slick social media operation it promised stylish, contemporary dining experiences tailored to your mood, such as “lust” or “contemplation”. There was only one problem. It didn’t exist.

The restaurant, website and reviews had all been fabricated by Oobah Butler, a London resident on a mission to demonstrate how easy it is to game online review platforms. Together with a few friends, he used fake five-star reviews to manipulate the TripAdvisor ranking for his fictional establishment, before publicly revealing the deception and conducting a round of press interviews highlighting the problem. It was a poacher-turned-gamekeeper case: years earlier when living with his parents and strapped for cash, Butler had worked for an online vendor who paid him £10 per review to write up glowing appraisals of London restaurants he had never been to. Now, he was exposing the charade for what it was.

 

Comment fraud: A problem that affects every aspect of online discourse

Review fraud is by no means limited to the restaurant and hospitality business. On any platform or forum where people can voice their opinions online, doubt is increasingly being cast on the legitimacy of those voices. Often, there are direct monetary benefits for those involved. Private groups on platforms like Facebook and Telegram with thinly veiled titles such as “R**fund Aftr R**vew” act as recruiting grounds for people willing to write five-star reviews on platforms like Amazon, TripAdvisor or Yelp in exchange for free products and/or cash incentives. Review fraudsters typically use numerous alias accounts to avoid arousing suspicion and to separate the illicit activity from their primary identities.

In other cases, the motivations are political. In a 2020 article titled “Bots Are Destroying Political Discourse As We Know It”, privacy specialist and writer Bruce Schneier detailed how politicians and foreign actors were commanding armies of chatbots to distort political interaction online. A 2019 study into the practice found evidence of an international black market for reusable political disinformation bots, noting that hundreds of Twitter accounts which shared alt-right content during the 2016 US presidential election switched focus and began spreading rumors and disinformation about Emmanuel Macron during the 2017 French presidential election.

 

Human voices in danger of being crowded out by AI

Estimates vary as to the scale of comment fraud. Amazon, for example, contended that in 2018, less than 1% of reviews on the site were inauthentic. The fraudulent review detection service Fakespot, however, claimed that 42% of the 720 million Amazon reviews it assessed in 2020 were fake. In political discourse, rumors and misinformation are often spread by a complex interplay of bots, politically motivated actors, and real people who are likely unaware of the origins of the content, making it hard to derive a precise number that captures the full scale of the problem.

In the past, it was relatively easy to spot fake reviews and comments on the basis of their linguistic content. Many different accounts would post almost identical five-star reviews of the same product or service, for instance, suggesting a coordinated bot operation. In other cases, reviews would be very brief, comprising non-specific statements like “excellent” or “great”, without any details about the product or service in question. In the political and social arena, bots sometimes repetitively posted and retweeted a limited range of slogans and hashtags with little nuance or variation. 
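To make that kind of heuristic concrete, here is a toy sketch in Rust of the simplest such check: flag review texts that many distinct accounts have posted almost verbatim. This is illustrative only; real detection systems are far more elaborate, and the types and threshold below are assumptions made for the example.

```rust
use std::collections::{HashMap, HashSet};

// Toy model of a review as a detection system might see it.
struct Review {
    account: String, // the posting account
    text: String,    // the review body
}

/// Returns the normalized review texts that were posted by at least
/// `threshold` distinct accounts, i.e. likely coordinated duplicates.
fn suspicious_duplicates(reviews: &[Review], threshold: usize) -> Vec<String> {
    let mut by_text: HashMap<String, HashSet<&str>> = HashMap::new();
    for r in reviews {
        // Crude normalization: lowercase and collapse whitespace.
        let normalized = r
            .text
            .to_lowercase()
            .split_whitespace()
            .collect::<Vec<_>>()
            .join(" ");
        by_text.entry(normalized).or_default().insert(r.account.as_str());
    }
    by_text
        .into_iter()
        .filter(|(_, accounts)| accounts.len() >= threshold)
        .map(|(text, _)| text)
        .collect()
}
```

As the next paragraph explains, heuristics of this kind collapse as soon as each fake review is worded uniquely by a language model.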

But with the sudden, widespread availability of large language models like ChatGPT and Google Bard, it has become trivial to generate large numbers of linguistically correct and believable reviews and comments at the touch of a button. Practically overnight, many of the most widely used algorithms for fighting review fraud and disinformation, which look for telltale patterns of speech in suspect comments, have become ineffective.

In response, researchers and platforms have looked for ways to flag and ban reviewers who have been recruited through private groups. Amazon even went so far as to sue the administrators of more than 10,000 Facebook groups it alleged were involved in soliciting fake reviews. Ultimately, however, this is a game of regulatory Whack-A-Mole, with groups that are closed down quickly reappearing in a slightly different guise. And while Twitter now charges $8 per month for verification, ostensibly to deter spam accounts, such a fee poses no real barrier to well-financed bad actors.

 

Bringing humanity back to the fore 

In concluding his article about bots and disinformation, Schneier strikes a resigned tone, stating: “We have to recognize the limitations of online political conversation, and again prioritize face-to-face interactions. These are harder to automate, and we know the people we’re talking with are actual people. This would be a cultural shift away from the internet and text, stepping back from social media and comment threads. Today that seems like a completely unrealistic solution.”

 

Indeed, this solution sounds onerous, requiring people to trust only those they can meet in the real world. While the internet promised to create a “digital town square” in a global village, bots seem to be forcing us to consider a radical departure: returning dialogue to physical locations.

 

But what if we could create a nexus between the physical and online worlds, allowing us to know we are interacting with real people in online settings? At Encointer, we are creating a badge for social media platforms, starting with Nostr, that will guarantee that a user controls only a single account, without requiring them to provide any ID or reveal personally identifiable information.

 

So how does it work? Users download the Encointer app, which is currently available for Android and iOS smartphones. In the app, participants in each Encointer community are randomly assigned to periodic small, physical gatherings at a randomly selected location in their local area. When a gathering commences, each participant scans the QR codes of the other attendees to confirm their presence, and these attestations are recorded on the highly secure Kusama blockchain ledger. Because all Encointer gatherings take place at the same time, no one can attend more than one event per cycle, and therefore no one can validate more than one Encointer account.
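To illustrate, the validation rule can be sketched roughly as follows. This is a simplified model, not the actual Encointer pallet code; the account type, quorum parameter and data layout are assumptions made for the example. An account only counts as validated for a cycle if enough distinct peers attested it and it was seen at exactly one gathering.

```rust
use std::collections::{HashMap, HashSet};

type AccountId = String; // hypothetical stand-in for an on-chain account id
type GatheringId = u32;

// One peer-to-peer attestation recorded at a gathering.
struct Attestation {
    cycle: u32,
    gathering: GatheringId,
    attester: AccountId,
    attestee: AccountId,
}

/// An account is validated for `cycle` only if it was attested by at least
/// `quorum` distinct peers and appeared at exactly one gathering that cycle.
fn validated_accounts(
    attestations: &[Attestation],
    cycle: u32,
    quorum: usize,
) -> HashSet<AccountId> {
    let mut gatherings: HashMap<AccountId, HashSet<GatheringId>> = HashMap::new();
    let mut attesters: HashMap<AccountId, HashSet<AccountId>> = HashMap::new();

    for a in attestations.iter().filter(|a| a.cycle == cycle) {
        gatherings.entry(a.attestee.clone()).or_default().insert(a.gathering);
        attesters.entry(a.attestee.clone()).or_default().insert(a.attester.clone());
    }

    attesters
        .into_iter()
        .filter(|(who, peers)| peers.len() >= quorum && gatherings[who].len() == 1)
        .map(|(who, _)| who)
        .collect()
}
```

Because the gatherings are simultaneous, a person physically cannot satisfy the "exactly one gathering" condition for two different accounts in the same cycle.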

By taking advantage of the trusted-execution environments provided by our partner Integritee, we will make it possible for users to create an unlinkable connection between their Encointer account and their social media profiles. The end result will be that people can display a badge on their social media accounts to prove that those accounts are controlled by a unique individual. This proof-of-personhood badge will greatly enhance a person’s credibility when they comment or leave reviews online. Conversely, malicious actors and bots would no longer be able to post multiple reviews or comments under separate aliases.
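As a rough illustration of what a platform could check when rendering such a badge, think of it as a signed claim from a trusted attestation service binding a platform handle to one validated personhood for the current cycle, without revealing which Encointer account is behind it. The structure, field names and verification routine below are hypothetical, not Integritee's or Encointer's actual API, and the signature scheme is left abstract.

```rust
// Hypothetical badge a social platform might receive and verify.
struct PersonhoodBadge {
    platform: String,        // e.g. "nostr"
    handle: String,          // the social media account the badge is bound to
    cycle: u32,              // the cycle in which personhood was validated
    issuer_pubkey: [u8; 32], // public key of the attestation service
    signature: Vec<u8>,      // signature over (platform, handle, cycle)
}

/// Hypothetical verification routine a platform might run. The signature
/// check is injected so the sketch stays independent of any crypto library.
fn badge_is_valid(
    badge: &PersonhoodBadge,
    trusted_issuer: &[u8; 32],
    current_cycle: u32,
    verify_sig: impl Fn(&[u8; 32], &[u8], &[u8]) -> bool,
) -> bool {
    let message = format!("{}|{}|{}", badge.platform, badge.handle, badge.cycle);
    badge.issuer_pubkey == *trusted_issuer // issued by the service we trust
        && badge.cycle == current_cycle    // the personhood proof is fresh
        && verify_sig(&badge.issuer_pubkey, message.as_bytes(), badge.signature.as_slice())
}
```

The key point is that the platform only learns "this handle is backed by exactly one validated person"; it never sees the underlying Encointer account.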

Beyond personhood, Encointer provides a broad indication of the geographic region in which a user account is located. This enables comments to be ranked or filtered by region. For example, users could choose to filter reviews of a restaurant to those written by locals. Platforms may even choose to de-emphasize foreign comments or label them to provide more context for consumers. Similarly, a regional newspaper could prioritize comments written by readers actually living in the area. As each participant needs to attend in-person gatherings to validate their account, their region cannot be obfuscated using a VPN.
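A sketch of how such region-aware filtering might look on the platform side (again, the types and fields are illustrative assumptions, not an existing Encointer API):

```rust
// Toy model of a review carrying the coarse region signal from the badge.
struct Review {
    text: String,
    author_region: Option<String>, // region of the author's Encointer community, if badged
}

/// Keep only reviews written by badged locals of `venue_region`;
/// unbadged or out-of-region reviews could instead be down-ranked or labelled.
fn local_reviews<'a>(reviews: &'a [Review], venue_region: &str) -> Vec<&'a Review> {
    reviews
        .iter()
        .filter(|r| r.author_region.as_deref() == Some(venue_region))
        .collect()
}
```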

While attending occasional gatherings requires some commitment from participants, it is a far less seismic cultural change than moving all discussion offline. Ultimately, by linking the trust of the physical world to online profiles, decentralized proof-of-personhood protocols like Encointer could help humans take back control of online discourse and banish fraudsters and bots to obscurity.

 

Photo by Elin Tabitha on Unsplash