Mobile Privacy

The services offered by mobile apps are useful, but these apps can also be privacy invasive, meaning that they compile and share more information than is needed for the task the app performs. For example, researchers at the University of California, Berkeley analyzed 940 apps and found that one third of them were over-privileged, meaning that these apps requested permissions for resources far beyond what was required for their functionality. Such over-permissioning creates risks to both user security and privacy. These risks exist even in apps for the most vulnerable users, such as those designed for children. Currently, the Android and iOS privacy ecosystems are grounded in permissions that control access to sensitive resources. These systems combine permission manifests at the time of app selection, resource access warnings at first use, and per-resource controls. Yet the controls provided in the form of permissions have proven insufficient to address privacy concerns and to prevent the selection of exploitative apps.
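The idea of an over-privileged app can be made concrete: an app is over-privileged when the permissions it requests exceed those its functionality actually requires. The sketch below illustrates that set comparison; the permission names and the flashlight example are hypothetical illustrations, not data from the Berkeley study.

```python
# Minimal sketch: flag the permissions an app requests beyond what its
# functionality requires. Permission names here are hypothetical.

def overprivileged(requested: set[str], required: set[str]) -> set[str]:
    """Return the permissions requested beyond those required."""
    return requested - required

# A flashlight app plausibly needs only the camera permission (for the
# flash LED), yet such apps often also request location and contacts.
requested = {"CAMERA", "ACCESS_FINE_LOCATION", "READ_CONTACTS"}
required = {"CAMERA"}

extra = overprivileged(requested, required)
print(sorted(extra))  # → ['ACCESS_FINE_LOCATION', 'READ_CONTACTS']
```

In practice the hard part is determining the `required` set, which is why large-scale analyses like the Berkeley study rely on mapping API calls to the permissions they actually need.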

Researchers Shakthidhar Gopavaram, Omkar Bhide, and L. Jean Camp from the School of Informatics, Computing, and Engineering at Indiana University combined insights from previous work in mobile privacy and risk communication to design an alternative mechanism that provides actionable and usable information to support privacy-aware decision making. They proposed a new intervention that augments the current permissions model with simple icons and sounds. Building on previous work by Prashanth Rajivan et al., they designed the icons to provide beneficial framing for protecting privacy. They also added aural cues to provide feedback as a form of priming. Specifically, they used audio snippets of people cheering and booing: the cheers were played when a participant selected an app with a high privacy rating, and the boos were played when they selected an app with a low privacy rating.
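The aural-cue logic described above can be sketched simply: when a participant selects an app, the simulator compares the app's aggregate privacy rating to a threshold and plays the corresponding clip. The rating scale, threshold, and file names below are assumptions for illustration, not details of the study's actual implementation.

```python
# Minimal sketch of the audio-priming cue, assuming privacy ratings on a
# 1-5 scale with a midpoint threshold. Clip file names are hypothetical.

CHEER_CLIP = "cheer.wav"  # played for privacy-protective selections
JEER_CLIP = "boo.wav"     # played for privacy-invasive selections

def cue_for_selection(privacy_rating: float, threshold: float = 3.0) -> str:
    """Return the audio clip to play when an app is selected."""
    return CHEER_CLIP if privacy_rating >= threshold else JEER_CLIP

print(cue_for_selection(4.5))  # → cheer.wav
print(cue_for_selection(1.5))  # → boo.wav
```

The point of such immediate positive or negative feedback is priming: it rewards privacy-protective choices at the moment of selection rather than relying on the user to read a permissions manifest.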

To measure the change in behavior caused by the proposed intervention, they conducted an experiment in which they presented participants with two categories of apps, with eight apps in each category, and asked them to install four apps from each category. The participants were divided into four groups: Control Group, Lock Group, Sound Group, and Lock & Sound Group. The participants in the Control Group were provided with a version of the interactive Android Play Store simulator that was an exact simulation of Google's Play Store. In other words, the participants in the Control Group were not provided with the aggregate privacy ratings, but they had access to the permissions manifest. The participants in the Lock Group were provided with the aggregate privacy rating for each app using the visual indicator shown in Figure 1. The participants in the Sound Group heard cheers or jeers based on the app's privacy rating but were not provided with a visual representation of the aggregate privacy rating. Finally, the participants in the Lock & Sound Group were provided with both visual and audio feedback.

The results from the experiment showed that when participants were presented with both visual indicators (positively framed padlocks) and aural cues (cheers and jeers), they made privacy-based app choices; i.e., individuals chose apps with a higher privacy rating over apps with a higher app rating. This was a significant change in behavior compared to the Control Group, where participants made app decisions primarily based on app rating. Hence, the inclusion of aggregate ratings and multimedia priming for privacy offers promise for supporting more informed decision-making in online app stores. For more information, see the technical report at IU.

Figure 1 - Visual cues used to communicate privacy risk

Figure 2 - Illustration of audio feedback

Figure 3 - Screenshot of the Play Store simulator augmented with privacy ratings