Even if you can't divulge the password, you can still enter it... so if someone is actually in a position to coerce you, they're probably also in a position to make you enter the password for them. (It's damn hard to make an ATM that will give you your money when you want it, but that also makes it impossible for someone to empty your account by waiting for you at the ATM and pointing a gun at you.)
And after skimming the paper, the only thing I could find in response to your point is:
Coercion detection. Since our aim is to prevent users from effectively transmitting the ability to authenticate to others, there remains an attack where an adversary coerces a user to authenticate while they are under adversary control. It is possible to reduce the effectiveness of this technique if the system could detect if the user is under duress. Some behaviors such as timed responses to stimuli may detectably change when the user is under duress. Alternately, we might imagine other modes of detection of duress, including video monitoring, voice stress detection, and skin conductance monitoring [8, 16, 1]. The idea here would be to detect by out-of-band techniques the effects of coercion. Together with in-band detection of altered performance, we may be able to reliably detect coerced users.
Of course, such changes could also be caused by being stressed in general. Even if you could calibrate your model to separate the effects of "being under duress" from "being generally stressed" in a particular subject, I would presume there's too much variability among people to do this reliably for everyone.
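To make the paper's "in-band detection of altered performance" concrete, here is a minimal Python sketch of the kind of check they seem to have in mind, assuming a per-user timing baseline. Every name and threshold here is hypothetical, and, per the objection above, ordinary stress or hurry trips exactly the same signal:

```python
import math
import statistics

def looks_coerced(baseline_ms, session_ms, z_threshold=3.0):
    """Flag a session whose mean response time deviates sharply from a
    user's calibrated baseline. Purely illustrative: stress, hurry, or
    fatigue all produce the same deviation as actual coercion."""
    mu = statistics.mean(baseline_ms)
    sigma = statistics.stdev(baseline_ms)
    if sigma == 0:
        return False  # degenerate baseline; deviation can't be scored
    # z-score of the session mean against the calibrated baseline
    z = abs(statistics.mean(session_ms) - mu) / (sigma / math.sqrt(len(session_ms)))
    return z > z_threshold

# Example: a calm baseline around 450 ms vs. a session at ~600 ms
print(looks_coerced([440, 455, 460, 445, 450], [590, 610, 605]))  # True
```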
Imagine how people would react to an ATM that gave them their money whenever they wanted it - except when they were in a big hurry and really needed the cash now.
(Blind Optimism) They'd learn to meditate!
But then, how do we stop people from being coerced into meditative states... :(
In addition to what Kaj_Sotala said, there is already a much simpler, more reliable way to detect coercion on authentication: distress passwords!
My next step would be to game context-dependent memory to make the memory unavailable under duress.
I've heard of some kind of security system whereby you can enter either the usual password or a “special” one, and if you enter the latter you're granted access but the police are alerted, or something like that.
The extension of that to an ATM might be one which gives fake bills, takes a picture, and alerts the police if the “fake” PIN is entered.
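For illustration, a minimal sketch of what such a duress-PIN check might look like, assuming hashed PIN storage (all names and PINs here are invented; real ATM firmware is of course nothing like this):

```python
import hashlib
from dataclasses import dataclass

def _digest(pin: str) -> str:
    # Illustration only; a real deployment would use salted, slow hashing.
    return hashlib.sha256(pin.encode()).hexdigest()

@dataclass
class PinResult:
    authenticated: bool   # does the machine appear to work normally?
    silent_alarm: bool    # notify police / dispense marked bills

def check_pin(entered: str, real_hash: str, duress_hash: str) -> PinResult:
    if _digest(entered) == real_hash:
        return PinResult(authenticated=True, silent_alarm=False)
    if _digest(entered) == duress_hash:
        # Indistinguishable from success at the attacker's side of the screen.
        return PinResult(authenticated=True, silent_alarm=True)
    return PinResult(authenticated=False, silent_alarm=False)

real = _digest("4821")    # hypothetical PINs
duress = _digest("4822")
print(check_pin("4822", real, duress))  # authenticated=True, silent_alarm=True
```

The whole point of the design is that both success branches look identical at the machine's interface, so the attacker can't tell the alarm was raised.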
For ATMs, the idea is out there, but it has never been implemented. Snopes on this:
The Credit Card Accountability Responsibility and Disclosure Act of 2009 compelled the Federal Trade Commission to provide an analysis of any technology, either then currently available or under development, which would allow a distressed ATM user to send an electronic alert to a law enforcement agency. The following statements were made in the FTC's April 2010 report in response to that requirement:
FTC staff learned that emergency-PIN technologies have never been deployed at any ATMs.
The respondent banks reported that none of their ATMs currently have installed, or have ever had installed, an emergency-PIN system of any sort. The ATM manufacturer Diebold confirms that, to its knowledge, no ATMs have or have had an emergency-PIN system.
I don't know if the idea works in general, but if it works as described, I think it would still be useful even if it doesn't meet this objection. I don't foresee any authentication system which can distinguish between "user wants money" and "user has been blackmailed to say they want money as convincingly as possible and not to trigger any hidden panic buttons", but even so, a password you can't tell someone would still be more secure because:
you're not vulnerable to people ringing you up and asking what your password is "for a security audit", unless they can persuade you to log on to the system for them
Easier to avoid with basic instruction.
you're not vulnerable to being kidnapped and coerced remotely; you have to be coerced wherever the log-on system is
The enemy knows the system; they can copy the login system in your cell.
I think the "stress detector" idea is unlikely to work unless someone works on it specifically to tell the difference between "hurried" and "coerced", but I don't think the system is useless just because it doesn't solve every problem at once.
OTOH, there are downsides to being too secure: you're less likely to be kidnapped, but it's likely to be worse if you ARE.
Indeed, for a recent real-world example, the improvement in systems to make cars harder to steal led directly to the rise of carjacking in the 1990s.
It still means you need to be physically present and in an able condition.
The biggest flaw I can see is that it becomes trivial to forget your password. The system is thus only as secure as the backup system.
I think the intention is to make forgetting your password as hard as forgetting how to ride a bicycle, although the only figure I remember from reading about this yesterday is '2 weeks'.
It's only as valid as identifying someone by how they ride their bicycle: any number of neurological factors, including fatigue, could change how someone enters the planted 'password'.
It's an interesting idea: fighting standard social engineering attempts by hiding the password from the user. In a sense, all the conscious mind gets is "********". The paper is called "Neuroscience Meets Cryptography: Designing Crypto Primitives Secure Against Rubber Hose Attacks". Here is a popular write-up and the paper PDF.
Abstract:
Cryptographic systems often rely on the secrecy of cryptographic keys given to users. Many schemes, however, cannot resist coercion attacks where the user is forcibly asked by an attacker to reveal the key. These attacks, known as rubber hose cryptanalysis, are often the easiest way to defeat cryptography. We present a defense against coercion attacks using the concept of implicit learning from cognitive psychology. Implicit learning refers to learning of patterns without any conscious knowledge of the learned pattern. We use a carefully crafted computer game to plant a secret password in the participant’s brain without the participant having any conscious knowledge of the trained password. While the planted secret can be used for authentication, the participant cannot be coerced into revealing it since he or she has no conscious knowledge of it. We performed a number of user studies using Amazon’s Mechanical Turk to verify that participants can successfully re-authenticate over time and that they are unable to reconstruct or even recognize short fragments of the planted secret.
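To make the mechanism concrete, here is a loose Python sketch of the scheme the abstract describes (not the authors' code): training plants a random sequence over the game's six response keys, and authentication checks for a performance advantage on the planted sequence versus a fresh control sequence. The sequence structure, margin, and game interface are all simplified assumptions:

```python
import random

KEYS = "sdfjkl"  # six response keys, as in the paper's rhythm-game interface

def make_sequence(length: int = 30) -> str:
    """Generate a random key sequence (simplified: the paper imposes
    more structure, e.g. balanced key frequencies)."""
    seq = [random.choice(KEYS)]
    while len(seq) < length:
        nxt = random.choice(KEYS)
        if nxt != seq[-1]:  # avoid immediate repeats
            seq.append(nxt)
    return "".join(seq)

def authenticates(hit_rate_on, planted: str, margin: float = 0.05) -> bool:
    """hit_rate_on(seq) is a stand-in for playing a game round on `seq`
    and returning the fraction of items intercepted correctly. A trained
    user shows a reliable advantage on the planted sequence; an untrained
    user (or a coerced stand-in) shows none."""
    control = make_sequence(len(planted))
    return hit_rate_on(planted) - hit_rate_on(control) > margin

# Toy demo: a "trained" user who intercepts 80% of planted items
# but only 70% of anything else.
planted = make_sequence()
print(authenticates(lambda s: 0.8 if s == planted else 0.7, planted))  # True
```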
While this approach does nothing against man-in-the-middle attacks, it can probably be evolved into a unique digital signature some day. Cheaper than a retinal scan or a fingerprint, and does not require client-side hardware.