Human-centered security is distinct from other human-centered design challenges because security risk is heterogeneous, the harm is stochastic, and the connection between risk and harm is sometimes invisible.
Why is usable security more than usability, and why is it sometimes not aligned with traditional usability? In part because individuals rarely want to perform security per se: security is not the individual's goal. In fact, security is usually orthogonal, and often in opposition, to their actual goal. To address this we strive for translucent security.
Translucent security is not simply usable security, and it is not default security. Translucent security is cooperative security based on risk communication, with computers and humans as partners: people understand the context in which they work, and security engineers understand the risk. Security communication is risk communication, and risk communication is best when it is context dependent, designed for the task, the risk, and the person.
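The idea of tailoring communication to the task, the risk, and the person can be sketched in code. This is a minimal illustration, not a real warning system: the context fields, risk levels, and message wording are all hypothetical choices made for the example.

```python
from dataclasses import dataclass

@dataclass
class Context:
    task: str        # what the person is trying to accomplish
    risk: str        # "low", "medium", or "high" -- the engineer's assessment
    expertise: str   # "novice" or "expert" -- how to phrase the communication

def risk_message(ctx: Context) -> str:
    """Choose a communication tailored to the task, the risk, and the person."""
    if ctx.risk == "low":
        # Translucent: stay out of the way when the risk does not warrant interruption.
        return ""
    detail = ("the certificate chain could not be validated"
              if ctx.expertise == "expert"
              else "this site cannot prove its identity")
    urgency = "Stop: " if ctx.risk == "high" else "Caution: "
    return f"{urgency}while {ctx.task}, {detail}."

# Same technical event, two different communications for two different people.
novice = Context(task="paying a bill", risk="high", expertise="novice")
expert = Context(task="paying a bill", risk="high", expertise="expert")
print(risk_message(novice))
print(risk_message(expert))
```

The point of the sketch is the branching, not the particular strings: the same underlying security event yields different communications, and sometimes none at all, depending on context.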
Since individuals must trust their machines to function, risk communication itself may undermine the value of the networked interaction. For the individual, distinct technical problems are all understood under the single rubric of online risk (e.g., privacy, security).
The stochastic nature of security (and of risks in general) makes common usability approaches inapplicable. Making clear the connection between actions and consequences is particularly difficult in security and privacy. Risk is inherently probabilistic: there may be no consequences; consequences may be delayed; consequences may prove catastrophic. And even when the consequences can be determined, the action-risk-consequence information may be overwhelming.
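This stochastic action-consequence structure can be made concrete with a small simulation. The probabilities and loss values below are illustrative assumptions, not empirical estimates; the sketch only shows why feedback from any single action teaches the user almost nothing.

```python
import random

def simulate_outcome(rng: random.Random) -> float:
    """One risky action: usually no consequence, occasionally a small
    (possibly delayed) cost, rarely a catastrophic loss.
    All numbers are illustrative, not real risk estimates."""
    r = rng.random()
    if r < 0.90:
        return 0.0        # most of the time: nothing happens at all
    if r < 0.99:
        return 10.0       # occasionally: a small cost
    return 10_000.0       # rarely: a catastrophic loss

rng = random.Random(1)
losses = [simulate_outcome(rng) for _ in range(100_000)]
print(f"no consequence: {losses.count(0.0) / len(losses):.0%} of actions")
print(f"mean loss per action: {sum(losses) / len(losses):.1f}")
```

In roughly nine actions out of ten nothing happens, yet the rare catastrophic tail dominates the expected loss, so a user who judges risk from personal experience will systematically underestimate it.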
Fundamentally, security information is about risk and threats, and such communication is most often unwelcome. Increasing unwelcome interaction is not a goal of usable design, yet warnings must be part of security.
☎ 1 (812) 856-1865
⌂ Informatics West, 901 E. 10th Street, Bloomington, IN 47408