NDSS 2015 – USEC Programme
USEC Workshop Overview
Many aspects of information security combine technical and human factors. If a highly secure system is unusable, users will try to circumvent the system or move entirely to less secure but more usable systems. Problems with usability are a major contributor to many high-profile security failures today.
However, usable security is not well-aligned with traditional usability for three reasons. First, security is rarely the desired goal of the individual. In fact, security is usually orthogonal and often in opposition to the actual goal. Second, security information is about risk and threats. Such communication is most often unwelcome. Increasing unwelcome interaction is not a goal of usable design. Third, since individuals must trust their machines to implement their desired tasks, risk communication itself may undermine the value of the networked interaction. For the individual, discrete technical problems are all understood under the rubric of online security (e.g., privacy from third parties' use of personally identifiable information, malware). A broader conception of both security and usability is therefore needed for usable security.
USEC 2015 aims to bring together researchers already engaged in this interdisciplinary effort with other computer science researchers in areas such as visualization, artificial intelligence and theoretical computer science as well as researchers from other domains such as economics or psychology.
Sunday, February 8
| Time | Event |
|---|---|
| 8:45 AM | Opening Remarks: welcome message from the chair, Jens Grossklags |
| 8:50 AM | Session 1: Access Control and Authentication |
| 10:30 AM | Session 2: Passwords and 2-Factor Authentication |
| 11:45 AM | Session 3: Design Approaches for Security |
| 1:30 PM | Session 4: Security in Organizations and Decentralized Settings |
| 3:15 PM | Session 5: Fraud, Security Threats and Countermeasures |
| 4:50 PM | Keynote by Matthew Smith |
| 5:40 PM | Closing Remarks from the chair, Jens Grossklags |
“They brought in the horrible key ring thing!” Analysing the Usability of Two-Factor Authentication in UK Online Banking
To prevent password breaches and guessing attacks, banks increasingly turn to two-factor authentication (2FA), requiring users to present at least one more factor, such as a one-time password generated by a hardware token or received via SMS, besides a password. We can expect some solutions – especially those adding a token – to create extra work for users, but little research has investigated usability, user acceptance, and perceived security of deployed 2FA. This paper presents an in-depth study of 2FA usability with 21 UK online banking customers, 16 of whom had accounts with more than one bank. We collected a rich set of qualitative and quantitative data through two rounds of semi-structured interviews, and an authentication diary over an average of 11 days. Our participants reported a wide range of usability issues, especially with the use of hardware tokens, showing that the mental and physical workload involved shapes how they use online banking. Key targets for improvement are (i) reducing the number of authentication steps, and (ii) removing features that do not add any security but negatively affect the user experience.
Kat Krol, Eleni Philippou, Emiliano De Cristofaro, M. Angela Sasse
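As background on the one-time passwords mentioned in the abstract: hardware tokens and authenticator apps commonly generate codes with the HMAC-based HOTP and TOTP standards (RFC 4226 and RFC 6238). A minimal sketch of those standards in general, not of the specific bank-issued schemes the paper studies:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based variant (RFC 6238): the counter is the current time window."""
    return hotp(secret, int(time.time()) // step, digits)
```

With the RFC 4226 test secret `b"12345678901234567890"`, counter 0 yields the code `755224`, matching the specification's test vectors.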
Web services are increasingly adopting auxiliary authentication mechanisms to supplement the security provided by conventional password verification. In the domain of social network based web-services, Facebook has pioneered the use of social authentication as an auxiliary authentication mechanism. If Facebook detects a user login under suspicious circumstances, then users are asked to verify information about their friends (in addition to verifying their passwords). However, recent work has shown that Facebook’s social authentication is insecure. In this work-in-progress, we propose to rethink the design of social authentication. Our key insight is that online social network (OSN) operators are privy to large amounts of private data generated by users, including information about users’ online interactions. Based on this insight, we architect a system for social authentication that asks users to verify information about their social contacts and their interactions. Our system leverages information protected by privacy policies of OSNs to resist attacks, such as questions based on private user interactions including exchanging messages and poking social contacts. We implemented our system prototype as a Facebook application, and performed a preliminary user study to evaluate feasibility of the approach. Our initial experiments have been encouraging; we find that users have high rates of recall for information generated in the context of OSN interactions. Overall, our work provides a promising new direction for the secure and usable deployment of social authentication.
Sakshi Jain, Neil Zhenqiang Gong, Sreya Basuroy, Juan Lang, Dawn Song, Prateek Mittal
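The abstract outlines challenges built from users' private OSN interactions. The authors' actual design is not detailed here, but the general idea can be sketched as follows; the question wording, data format, and `build_challenge` helper are illustrative assumptions, not the paper's implementation:

```python
import random
from collections import Counter

def build_challenge(interactions, all_contacts, num_decoys=3, rng=random):
    """Build one multiple-choice social-authentication question from a
    user's private interaction log (one entry per message or poke).
    The correct answer is the most-contacted friend; decoys are drawn
    from contacts the user did NOT interact with, so an attacker who
    only sees public friend lists gains no advantage."""
    counts = Counter(interactions)
    answer, _ = counts.most_common(1)[0]
    decoy_pool = [c for c in all_contacts if c not in counts]
    options = rng.sample(decoy_pool, num_decoys) + [answer]
    rng.shuffle(options)
    return {"question": "With which of these friends did you exchange the most messages?",
            "options": options,
            "answer": answer}
```

Because the decoys are contacts the user never interacted with, the legitimate user can answer from memory of private interactions while guessing attacks based on public data are resisted, which is the paper's key insight.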
While biometrics have long been promoted as the future of authentication, the recent introduction of Android face unlock and iPhone fingerprint unlock are among the first large scale deployments of biometrics for consumers. In a 10-participant, within-subjects lab study and a 198-participant online survey, we investigated the usability of these schemes, along with users’ experiences, attitudes, and adoption decisions. Participants in our lab study found both face unlock and fingerprint unlock easy to use in typical scenarios. The notable exception was that face unlock was completely unusable in a dark room. Most participants preferred fingerprint unlock over face unlock or a PIN. In our survey, most fingerprint unlock users perceived it as more secure and convenient than a PIN. In contrast, face unlock users had mixed experiences, and many had stopped using it. We conclude with design recommendations for biometric authentication on smartphones.
Chandrasekhar Bhagavatula, Blase Ur, Kevin Iacovino, Su Mon Kywe, Lorrie Faith Cranor, Marios Savvides
Password schemes based on selecting locations in an online map are an emerging topic in user authentication research. GeoPass is the most promising such scheme, as it provides satisfactory resilience against online guessing and showed high memorability (97%) for a single location-password. No multiple-password interference study, however, has been conducted to see if GeoPass or any other location-based password scheme is suitable for real-world deployment, where users have to remember multiple passwords. In this paper, we report the results of two separate multiple-password studies on GeoPass, each conducted over the span of three weeks. In the first study, we aim to understand the effects of interference on the GeoPass scheme, where we found that users remembered location-passwords in less than 70% of login sessions, with 41.5% of login failures due to interference effects. Through a detailed analysis, we identify why interference occurs for location-passwords, and based on our findings, we propose to leverage mental stories to address the interference issue. We then perform a second interference study on the modified GeoPass scheme to test the efficacy of our approach, where we found that the login success rate was greater than 97% and only 3.4% of login attempts failed because of interference effects.
Mahdi Nasrullah Al-Ameen, Matthew Wright
The username-password pair is still a prevalent form of online authentication, yet attacks exploiting weak password habits are on the rise. The security community's main response in practice is to invest more in educating users. This approach suggests that the long-held assumption that user ignorance is the cause of inadequate password behavior still has many adherents. Although several research studies have pointed to other, more likely causes, practice continues to perpetuate the same solution mindset of increasing end-user education, even though user behavior has not improved dramatically over the last decade despite these efforts. This research work therefore explores the hypothesis that knowledge of good password habits is a necessary but not, by itself, sufficient condition for safe password behavior. We do so by studying the password habits of the very people advocating for more end-user education. To investigate this hypothesis, we conducted a survey targeting an audience of IT professionals with good knowledge of security. The survey results show that cognitive knowledge of password security does not always materialize into practical and secure password practices. We anticipate that confronting IT professionals with their own password practices, which fail to adhere to what they preach to end users, will motivate them to let go of the long-held assumption that more education is the solution. This would further support the points made by other studies explaining the rationale behind the inadequate password habits of end users.
Ijlal Loutfi, Audun Josang
Completely Automated Public Turing tests to tell Computers and Humans Apart (captchas) are challenge-response tests used as a security mechanism on the web to distinguish human users from automated programs. While captchas are often necessary to stop abuse of resources, most existing schemes are intended for traditional desktop computing environments rather than for mobile device usage. In this paper we present a comparative user study of nine captcha schemes on smartphones to assess whether alternative input mechanisms help improve the usability of captchas on smartphones, and to evaluate the usability of modified schemes intended to be more suitable for smartphones. The results show that although participants find virtual keyboards on smartphones prone to errors, they prefer them as an input mechanism over the alternatives. We also found that the content of the challenge is highly relevant in users' perceptions when it comes to captchas on smartphones. Based on our experiences, we offer a set of ten specific recommendations for the implementation of captchas on smartphones.
Gerardo Reynaga, Sonia Chiasson, Paul van Oorschot
In this short paper, we explore the advantages of using Participatory Design (PD) to improve security-related user interfaces. We describe a PD method that we applied to actively involve users in creating new SSL warning messages. Supported by a designer, participants tapped into their experiences with existing warnings and created improved dialogs in workshop sessions. The process resulted in a set of diverse new warnings, showing multiple directions that the design of this warning can take. Applying PD lets participants engage more with the subject matter and thus create nuanced designs. Overall, our exploration suggests that PD can provide a suitable, versatile, and simple set of methods that support the creation of design ideas for security-related user interfaces. Users are empowered to critically appraise and adapt, on their own, the security measures they encounter in everyday life.
Susanne Weber, Marian Harbach, Matthew Smith
There is ongoing interest in utilising user experiences associated with security and privacy to better inform system design and development. However, there are few studies demonstrating how, together, security and usability design techniques can help in the design of secure systems; such studies provide practical examples and lessons learned that practitioners and researchers can use to inform best practice, and underpin future research. This paper describes a three-year study where security and usability techniques were used in a research and development project to develop webinos — a secure, cross-platform software environment for web applications. Because they value innovation over both security and usability, research and development projects are a particularly difficult context of study. We describe the difficulties faced in applying these security and usability techniques, the approaches taken to overcome them, and lessons that can be learned by others trying to build usability and security into software systems.
Shamal Faily, John Lyle, Ivan Flechais, Andrew Simpson
Current approaches to information security focus on deploying security mechanisms, creating policies, and communicating those to employees. Little consideration is given to how policies and mechanisms affect trust relationships in an organization, and in turn security behavior. Our analysis of 208 in-depth interviews with employees in two large multinational organizations found two trust relationships: between the organization and its employees (organization-employee trust), and between employees (inter-employee trust). When security interferes with employees' ability to complete work tasks, they rely on inter-employee trust to overcome those obstacles (e.g. sharing a password with a colleague who is locked out of a system and urgently needs access). Thus, non-compliance is a collaborative action, which develops inter-employee trust further, as employees now become “partners in crime”. The existence of these two relationships also presents employees with a clear dilemma: either try to comply with cumbersome security (and honor organization-employee trust) or help their colleagues by violating security (preserving inter-employee trust). We conclude that designers of security policies and mechanisms need to support both types of trust, and discuss how to leverage trust to achieve effective security protection. This can enhance organizational cooperation to tackle security challenges and provide motivation for employees to behave securely, while also reducing the need for expensive physical and technical security mechanisms.
Iacovos Kirlappos, Angela Sasse
User errors while performing security-critical tasks can lead to undesirable or even disastrous consequences. One major factor influencing mistakes and failures is the complexity of such tasks, which has been studied extensively in prior research. Another important issue, which has hardly received any attention, is the impact of both accidental and intended distractions on users performing security-critical tasks. In particular, it is unclear whether, and to what extent, unexpected sensory cues (e.g., auditory or visual) can influence user behavior and/or trigger mistakes. Better understanding of the effects of intended distractions will help clarify their role in adversarial models. As part of the research effort described in this paper, we administered a range of naturally occurring – yet unexpected – sounds while study participants attempted to perform a security-critical task. We found that, although these auditory cues lowered participants' failure rates, they had no discernible effect on their task completion times. To this end, we overview some relevant literature that explains these somewhat counter-intuitive findings. Conducting a thorough and meaningful study on user errors requires a large number of participants, since errors are typically infrequent and should not be instigated more than once per subject. To reduce the effort of running numerous subjects, we developed a novel experimental setup that was fully automated and unattended. We discuss our experience with this setup and highlight the pros and cons of generalizing its usage.
Tyler Kaczmarek, Alfred Kobsa, Robert Sy, Gene Tsudik
Bitcoin users are directly or indirectly forced to deal with public key cryptography, which has a number of security and usability challenges that differ from the password-based authentication underlying most online banking services. Users must ensure that keys are simultaneously accessible, resistant to digital theft and resilient to loss. In this paper, we contribute an evaluation framework for comparing Bitcoin key management approaches, and conduct a broad usability evaluation of six representative Bitcoin clients. We find that Bitcoin shares many of the fundamental challenges of key management known from other domains, but that Bitcoin may present a unique opportunity to rethink key management for end users.
Shayan Eskandari, David Barrera, Elizabeth Stobert, Jeremy Clark
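One of the key-management tensions the abstract mentions, resilience to loss, is commonly addressed by deterministic (HD) wallets, in which every key derives from a single backed-up seed. A simplified illustration in the spirit of BIP-32; this is not a compatible implementation (real BIP-32 derivation works over secp256k1 and propagates chain codes through a key hierarchy):

```python
import hashlib
import hmac

def derive_child_key(seed: bytes, index: int) -> bytes:
    """Simplified deterministic key derivation: every key is derived
    from one master seed, so backing up the seed once backs up all
    present and future keys. Illustrative only, NOT BIP-32 compatible."""
    master = hmac.new(b"Bitcoin seed", seed, hashlib.sha512).digest()
    key, chain = master[:32], master[32:]
    child = hmac.new(chain, key + index.to_bytes(4, "big"),
                     hashlib.sha512).digest()
    return child[:32]
```

The design point this illustrates is the trade-off the paper evaluates: deterministic derivation makes keys resilient to loss (one backup suffices), but it concentrates risk in the seed, which must then be kept resistant to digital theft.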
We review empirical studies that evaluate the resilience of various PIN entry methods against human shoulder surfers. Conducting such studies is challenging because adversaries are not available for study and must be simulated in one way or another. We were interested to find out whether there is a common standard for how these experiments are designed and reported. In the course of our research we noticed that subtle design decisions might have a crucial effect on the validity and the interpretation of the outcomes. Getting these details right is particularly important if the number of participants or trials is relatively low. One example is the decision to let simulated adversaries enter their guesses using the method under study. If the method produces input errors, then correct guesses may not be counted as such, which leads to an underestimation of risk. We noticed several issues of this kind and distilled a set of recommendations that we believe should be followed to assure that studies of this kind are comparable and that their results can be interpreted well.
Oliver Wiese, Volker Roth
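The underestimation effect described in the abstract, where correct guesses go uncounted because of input errors, can be made concrete with a small Monte Carlo sketch; the independent-error model here is an illustrative assumption, not taken from the paper:

```python
import random

def observed_success_rate(true_guess_rate, input_error_rate,
                          trials=100_000, rng=random):
    """Simulate the reporting bias: an adversary's correct guess is
    only RECORDED as correct if it is also entered without an input
    error, so the measured rate understates the true risk by roughly
    a factor of (1 - input_error_rate)."""
    recorded = sum(
        1 for _ in range(trials)
        if rng.random() < true_guess_rate        # adversary guessed right
        and rng.random() >= input_error_rate     # ...and entered it cleanly
    )
    return recorded / trials
```

For example, a true guessing rate of 50% combined with a 20% input-error rate is recorded as roughly 40%, which is exactly the underestimation the authors warn about.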
Before installing an app, Android users are shown a permissions list that displays the resources available to that app. Users can review the permissions list and decide to install the app if they trust the app with their information. However, this information is accessible not only to the app provider but may also be available to third-party ad libraries included in the app, of which users are unaware. In this paper, we propose a novel icon-based privacy threat representation as an alternative to the permissions list that shows privacy threats to users from both app providers and associated ad libraries. Our approach considers users' privacy in terms of three granules: location, identity and query. Our proposed interface aims to educate users about which particular app providers and third parties have access to their privacy granules. We obtained user feedback on our technique in two user surveys (n = 137; 294), one each for testing the icons and the icon-based privacy threat display. We present our findings for ease of use and effectiveness of the novel privacy threat interface and further evaluate its impact on users' installation decisions.
Anand Paturi, Patrick Kelley, Subhasish Mazumdar
Phishing is a prevalent issue on today's Internet. Previous approaches to counter phishing do not draw on a crucial factor to combat the threat – the users themselves. We believe user education about the dangers of the Internet is a further key strategy to combat phishing. For this reason, we developed an Android app, a game called NoPhish, which educates the user in the detection of phishing URLs. It is crucial to evaluate NoPhish with respect to its effectiveness and the users' knowledge retention. Therefore, we conducted a lab study as well as a retention study (five months later). The outcomes of the studies show that NoPhish helps users make better decisions with regard to the legitimacy of URLs immediately after playing NoPhish as well as after some time has passed. The focus of this paper is on the description and the evaluation of both studies. This includes findings regarding those types of URLs that are most difficult to decide on, as well as ideas to further improve NoPhish.
Gamze Canova, Melanie Volkamer, Clemens Bergmann, Benjamin Reinheimer
We describe a common but poorly known type of fraud – so-called liar buyer fraud – and explain why traditional anti-fraud technology has failed to curb this problem. We then introduce a counter-intuitive technique based on user interface modification to address liar-buyer fraud, and report results of experiments suggesting that our technique has the potential to dramatically reduce fraud losses. We used a combination of role playing and questionnaires to determine the behavior and opinions of about 1700 subjects, and found that our proposed technique results in a statistically significant reduction of fraud rates for both men and women in an experimental setting. Our approach has not yet been tested on real e-commerce traffic, but appears sufficiently promising to do so. Our findings also suggest that men are more willing to lie and defraud than women; but perhaps more interestingly, our analysis shows that the technique we introduce makes men as honest as women.
Markus Jakobsson, Hossein Siadati, Mayank Dhiman
Many aspects of information security combine technical and human factors. If a highly secure system is unusable, users will try to circumvent the system or migrate entirely to less secure but more usable systems. Problems with usability are a major contributor to many recent high-profile security failures. The research domain of usable security and privacy addresses these issues. However, the main focus of researchers in this field has been on the “non-expert” end user. After placing this issue in the context of current research, the presenter will argue that greater attention needs to be paid to the human aspects of system security and to the administrators and developers involved in it. The talk will use TLS as an example to illustrate usable security and privacy issues across all levels and for all actors involved in the system.