
Human or algorithm? How Encointer can support better AI transparency

The proliferation of AI tools for generating visual and written content means we can no longer assume that what we see online was created by a human. Only a few short years ago, image manipulation was the preserve of those with Photoshop skills; today, someone with no illustration skills can generate almost any image from a few keywords. Natural language generation is now so convincing that around one-third of people cannot tell whether they are talking to a human or a bot.

As AI makes it ever harder to tell legitimate actors from impostors, our daily dealings online risk becoming increasingly murky. A simple proof of personhood such as Encointer’s could make it more difficult for malicious actors to deploy AI impersonators.


Cloned attendees

One scenario where AI could pass itself off as a legitimate human presence is remote meetings. For example, employers have flagged that remote working has enabled some workers to secretly juggle two or more full-time jobs, which can mean attending multiple work meetings at the same time. Certain online meetings, such as shareholder votes, also require individual participation; there, someone could manipulate the outcome by cloning their presence with an AI bot.

Encointer’s proof of personhood could be used to authenticate meeting participants as human. Thanks to blockchain consensus, the process can be made tamper-resistant by introducing a lock that allows the authentication or vote to happen exactly once per participant, at an unpredictable moment during the meeting. Because the attendee cannot know in advance when that single interaction will occur, they must attend the entire session to be sure of authenticating their presence.
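
Here is a minimal sketch of such a lock in Python, assuming Unix-timestamp meeting bounds. The names `MeetingAttestation` and `verify_personhood` are hypothetical illustrations, not Encointer’s actual API:

```python
import secrets

def verify_personhood(proof) -> bool:
    # Stand-in for checking an Encointer personhood attestation on-chain;
    # a real implementation would query the chain.
    return proof is not None

class MeetingAttestation:
    """One-time, randomly timed personhood check per participant (sketch)."""

    def __init__(self, participants, start: int, end: int):
        # Each participant gets a secret challenge time inside the meeting,
        # so nobody can predict when they must be present.
        self.challenge_at = {
            p: start + secrets.randbelow(end - start) for p in participants
        }
        self.attested = set()  # the lock: one successful attestation each

    def try_attest(self, participant, proof, now: int) -> bool:
        if participant in self.attested:
            return False  # the lock forbids a second authentication
        if now < self.challenge_at[participant]:
            return False  # challenge not yet due; keep attending
        if not verify_personhood(proof):
            return False
        self.attested.add(participant)
        return True
```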


Social engineering

Social engineering attacks such as romance scams, where fraudsters pose as a love interest to convince people to hand over money, have become notorious thanks to the success of Netflix documentaries like The Tinder Swindler. AI vastly increases the potential reach of such scams by allowing common tactics, such as spinning stories of financial hardship, to be deployed at scale by bots rather than humans. As natural language models grow more sophisticated, more people are likely to fall prey to such scams.

However, the use of AI in such scams could be curtailed by making proof of personhood the standard for online interaction, much as HTTPS replaced HTTP as the standard for online communication. Encointer could become the foundation for an authentication process in which anyone can challenge the other party to prove their personhood during an online interaction, making it far more difficult for AI agents to sustain a convincing exchange. While such an intervention cannot prevent social engineering attacks altogether, it can stop them from scaling exponentially through the application of AI.
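
To make the idea concrete, here is a minimal challenge-response sketch in Python using the `cryptography` package. It assumes, hypothetically, that a key pair is registered on-chain alongside a valid Encointer personhood attestation; `has_personhood_attestation` is a stand-in for that lookup, not a real API:

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def has_personhood_attestation(public_key) -> bool:
    # Stand-in for looking up a valid Encointer attestation bound to this key.
    return True

def challenge(peer_public_key, sign_fn) -> bool:
    """Ask the other party to prove personhood by signing a fresh nonce."""
    nonce = os.urandom(32)      # fresh randomness prevents replayed proofs
    signature = sign_fn(nonce)  # the challenged party signs the nonce
    try:
        peer_public_key.verify(signature, nonce)
    except InvalidSignature:
        return False
    return has_personhood_attestation(peer_public_key)

# Demo: a key holder with an attestation passes; a bot without the key cannot.
holder = Ed25519PrivateKey.generate()
print(challenge(holder.public_key(), holder.sign))  # True
```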


Astroturfing online reviews

Online reviews have become a go-to method for ascertaining whether a product or service is up to scratch. From Amazon to TripAdvisor, the process is fraught with issues because people try to game the system. Famously, a writer for Vice managed to get his garden shed to the coveted top spot among London restaurants on TripAdvisor simply by posting fake reviews.

AI can easily be used to flood review sites with fake reviews posted by bots, further compounding the issue. Because AI language models keep growing more sophisticated, site operators cannot rely on weeding out fake reviews with language-detection tools that are outdated the moment they ship. Making proof of personhood a baked-in requirement for leaving an online review is a far more robust way of keeping comment bots out.
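
A review gate along these lines could be as simple as the following Python sketch. The pseudonymous `person_id` is assumed to be bound to an Encointer personhood attestation, and `verify_personhood` and `store_review` are hypothetical stand-ins:

```python
def verify_personhood(proof) -> bool:
    return proof is not None  # stand-in for an on-chain attestation check

def store_review(product_id: str, text: str) -> None:
    pass  # stand-in for the review site's storage layer

class ReviewGate:
    """Admit at most one review per verified person per product (sketch)."""

    def __init__(self):
        self.seen = set()

    def submit(self, person_id: str, product_id: str, text: str, proof) -> bool:
        if not verify_personhood(proof):
            return False  # bots without a valid proof are rejected outright
        if (person_id, product_id) in self.seen:
            return False  # even a verified person reviews each product only once
        self.seen.add((person_id, product_id))
        store_review(product_id, text)
        return True
```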


Big problems, simple solutions

These are just a few scenarios where the ability to prove personhood could help to prevent AI from being used for harmful purposes. As with any technological development, there is no putting the genie back in the bottle, and there are no silver-bullet solutions that can address every scenario. Instead, regulators and innovators will need to develop a robust set of rules and tools that can help to improve online transparency and tackle the use of AI in ways that may be unethical.

AI regulation is likely to try to balance the potential of AI to drive innovation and business opportunity against users’ legitimate concerns regarding transparency, safety and accountability. This may involve making certain AI code and trained models open source so that they can be publicly scrutinized, and, in cases where legally relevant decisions are assisted by AI, mandating that a human decision-maker sign off on them.

For example, open-source code should be a minimum requirement in industries and processes where AI informs decisions that affect humans, such as insurance claims or hiring. In such instances, the demand for AI transparency must be balanced against individuals’ data privacy rights. A hybrid of blockchain technology and trusted execution environments could be used to strike that balance.

However, regulation will never be a catch-all solution. We will also need a dependable means of identifying whether the parties to any given transaction are human or AI. Machines like locomotives and chainsaws feature a safety device called a “dead man’s switch,” which stops them from running if the operator becomes unconscious or incapacitated. In an enterprise environment, proof of personhood could be used similarly, preventing an AI from running out of control without human input. If linked with a user account, proof of personhood could also serve as a digital signature for AI-enabled transactions, ensuring legal accountability for any decisions taken.
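
As a rough illustration, a dead man’s switch for an AI agent might look like the following Python sketch, in which the agent may act only while a human operator keeps renewing a personhood-backed heartbeat. All names here are hypothetical, and `verify_personhood` is an illustrative stub:

```python
import time

def verify_personhood(proof) -> bool:
    return proof is not None  # stand-in for an on-chain attestation check

class DeadMansSwitch:
    """Allow an AI agent to act only while a human heartbeat stays fresh (sketch)."""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()

    def heartbeat(self, proof) -> None:
        # Only a heartbeat backed by a valid personhood proof resets the timer.
        if verify_personhood(proof):
            self.last_beat = time.monotonic()

    def may_act(self) -> bool:
        return time.monotonic() - self.last_beat < self.timeout_s
```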

Now that we have tools to verify personhood, doing so should become a standard part of transacting online. Combined with pragmatic regulation that requires firms and individuals to disclose how they are using AI, we can envision a future where humans begin to interact with algorithms more confidently.

