TL;DR: Tokenising human verification could lead to markets which give us real-time information about the strength of different verification methods.

Problem

Sometimes it’s important to know whether you’re dealing with a human or a bot. For example, some web services are happy to give humans a certain amount of access for free, but don’t want to waste resources on scrapers and spammers, because serving bots can be expensive with no reward. The importance of being able to tell the difference grows with the importance of the service: compare spam prevention for a SaaS company with a freemium model versus an online voting system for a national election.

Luckily there are services available which help with this problem. Google’s CAPTCHAs are notorious but prevent many bots from accessing services intended only for humans, and its reCAPTCHA v3 simply returns a “humanity” score to the web service rather than interrupting the user’s interaction. Cloudflare is another popular service which blocks bots. These services are centralised, and there is little transparency into how they function, how their success rate changes over time, or whether certain kinds of bots are able to cheat them reliably.

There are also attempts to provide human verification in a decentralised way. For example, Idena is a blockchain project which gives network participants a series of user-generated “flip” CAPTCHAs within a globally synchronised time window of 2 minutes. Only users who successfully solve the CAPTCHAs are considered humans until the next validation period (the standard of success is determined by network consensus), and because of the short time window it is assumed that no one human can successfully validate more than one account. DemocracyEarth is developing a system which assigns blockchain accounts a “humanity” score based on their social graph and engagement with decentralised autonomous organisations (DAOs).

The problem is that CAPTCHA tests and score-based methods are brittle. Google’s CAPTCHAs have been broken numerous times, and behaviour-based tests run into trouble as more sophisticated AIs learn to simulate the “human” behaviour the test looks for. If your web service relies on any one of these methods to filter out bots, then as soon as the bots find a way to defeat it, your service is left with no protection until a fix is found or you switch to another verification method. Switching is costly, not least because it requires you to research the alternative verification methods available to you, along with their strengths and weaknesses.

Proposal

A human could be verified in the form of a non-fungible token, issued (eg on a blockchain) to that human by a verification service upon completion of some test (eg solving a CAPTCHA or exhibiting the right sort of behaviour). The token would record who issued it, who received it, and when it was issued. When a service wants to check that a user is human, it could ask the user to burn a token representing a verification (perhaps specifying that the token was issued within a certain time frame). If the service already knows which verification services it trusts, it could check that the burned token comes from one of them. The result would be a system much like the one we already have, except that a web service could configure the verification services it trusts as a simple set of trusted blockchain addresses, rather than having to integrate a new system for each verification service.
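As a rough illustration, here is a minimal sketch in TypeScript of what the check might look like on the web service’s side. All of the type names, fields, and addresses below are hypothetical; the actual token format would depend on the chain and the verification services involved.

```typescript
// Minimal sketch (all names and addresses are hypothetical): what a
// verification token might record, and how a web service could check a burn
// against its own list of trusted verification services.

interface PersonhoodToken {
  id: string;
  issuer: string;    // blockchain address of the verification service
  recipient: string; // address the token was originally issued to
  issuedAt: number;  // unix timestamp (ms) of issuance
}

interface BurnEvent {
  token: PersonhoodToken;
  burnedBy: string;  // address that burned the token
  burnedAt: number;  // unix timestamp (ms) of the burn
}

// The verification services this particular web service trusts, configured as
// a simple set of blockchain addresses.
const TRUSTED_ISSUERS = new Set<string>([
  "0x1111...", // placeholder address of a CAPTCHA-based service
  "0x2222...", // placeholder address of a behaviour-based service
]);

// eg only accept tokens issued within the last 24 hours
const MAX_TOKEN_AGE_MS = 24 * 60 * 60 * 1000;

function acceptBurnAsHuman(burn: BurnEvent): boolean {
  const issuedRecently =
    burn.burnedAt - burn.token.issuedAt <= MAX_TOKEN_AGE_MS;
  return TRUSTED_ISSUERS.has(burn.token.issuer) && issuedRecently;
}
```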

However, tokenising human verification would also enable a marketplace for those tokens. For some services it will be important that a token is never transferred before being burned for access rights, because after a transfer there is no assurance that the new holder is not a bot. But if there also exist services without such a strict requirement (perhaps they need some degree of spam prevention but can tolerate a certain level of bot activity), then verified humans could sell their tokens to other users. The value of a token would derive from its ability to grant access to services which accept it as a proof of humanity.
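To make the distinction concrete, a service’s acceptance policy might look something like the following sketch (again with hypothetical types): a strict service rejects any token that has changed hands, while a lenient one accepts any burn.

```typescript
// Sketch (hypothetical fields): a strict acceptance rule for services that
// require the token never to have changed hands before being burned, versus a
// lenient rule for services that tolerate transferred (ie purchased) tokens.

interface TokenHistory {
  issuedTo: string;      // original recipient recorded by the verification service
  burnedBy: string;      // address that ultimately burned the token
  transferCount: number; // number of transfers between issuance and burn
}

// Strict: only the originally verified human may burn the token.
function acceptStrict(history: TokenHistory): boolean {
  return history.transferCount === 0 && history.issuedTo === history.burnedBy;
}

// Lenient: any holder may burn the token; some bot activity is acceptable.
function acceptLenient(_history: TokenHistory): boolean {
  return true;
}
```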

If there is a functioning marketplace for these tokens, then we could expect the market value of a token to act as a strong indicator of the value of the issuing service as a human verification method at the time the token was issued. In fact a web service, without directly knowing anything about particular verification services, could simply ask for tokens to be burned which add up to a given dollar value, and check with a marketplace (eg via an API) whether the user’s burned tokens cross that threshold.
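In that case the service-side logic reduces to a price lookup and a sum. The sketch below assumes a hypothetical marketplace endpoint that reports the current USD price of tokens from a given issuer; no such API exists yet.

```typescript
// Sketch of the threshold check, assuming a hypothetical marketplace API at
// example-marketplace.test that reports the current USD price of tokens from
// a given issuer.

interface BurnedToken {
  id: string;
  issuer: string; // address of the verification service that issued the token
}

// Look up the current market price (in USD) for tokens from one issuer.
async function marketPriceUsd(issuer: string): Promise<number> {
  const res = await fetch(
    `https://example-marketplace.test/price?issuer=${encodeURIComponent(issuer)}`,
  );
  const body = (await res.json()) as { usd: number };
  return body.usd;
}

// Check whether the user's burned tokens add up to the required dollar value.
async function burnsMeetThreshold(
  burns: BurnedToken[],
  thresholdUsd: number,
): Promise<boolean> {
  let totalUsd = 0;
  for (const burn of burns) {
    totalUsd += await marketPriceUsd(burn.issuer);
    if (totalUsd >= thresholdUsd) return true;
  }
  return false;
}
```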

Such market data could also be a valuable source of information for AI security research since we would gain a degree of transparency into the effectiveness of the various approaches to distinguishing humans from bots.

Limitations

  • General problem of usability (how to make the end user experience smooth).
  • General problem of bootstrapping functioning marketplaces.
  • There’s a particularly hard bootstrapping problem if the value of the marketplace relies on web services accessing reliable market data from the marketplace.
  • It’s unclear whether many services would be willing to accept tokens which have been transferred, even though such acceptance is essential for the viability of the marketplace.

Proof of concept

https://github.com/willclarktech/personhood-nft

Comments

I'm pretty sure the concept of defeating spam by making emails cost 1¢ to send is an ancient one - I can't remember where I first encountered it. The hard part seems to be that the difference between "free" and "1¢" has been huge enough to deter most human users. I think we're slowly chipping away at this problem both through microtransaction technology and very slow cultural change.

Yeah, I only really mention spam because it's the one obvious use case I could think of and people are already familiar with it. It seems like spam is mostly solved anyway in many domains, so if that's the only thing this proposal can solve it's not much use.

I'd argue that "personhood" is rarely what these things actually care about - it's just a cheap-to-measure proxy for "likelihood of conversion to sale" or "amount I'd get paid for an ad" or the like. A bot that can enter into contracts and is more likely than a real person to make a purchase would be welcomed, but there are few of them and there's no good test of it.

For actually valuable things, a bot could just pay humans to pass the captcha and all would be well. Shadier bots could man-in-the-middle pretty easily if they just pass through a captcha on their cat picture site.

For implementation, it's worth looking at the OAuth specs and common federated authentication systems that google, facebook and a number of other sites provide - those do NOT assert human-ness, they assert authenticated account identity, but for most uses, that's a better proxy anyway. In cases where it's not, you could build a provider that uses OAuth to assert humanity using whatever verification it likes.


I'd argue that "personhood" is rarely what these things actually care about

This is probably true. Maybe the best use case is actually the opposite of preventing bots: enabling good bots who can't pass CAPTCHAs to access services they need (by paying humans to let them in).