People of ACM - Jean Camp

March 10, 2022

How would you describe your specific area of focus within the broader information security field?
My focus is on individual autonomy and safety. Considering security as autonomy and safety requires attention to diverse populations, edge conditions, usability, and the policy context. As a result, my technical contributions are influenced by my awareness of the policy context, and my policy contributions are informed by the human and technical context.

Security allocates risk. Security and privacy technologies can support human autonomy or can enforce control and vulnerability. The worst case is when people lose autonomy and experience risks they cannot avoid, resulting in harms they cannot mitigate.

After earning a degree in electrical engineering, your first job was at the Catawba Nuclear Station in South Carolina. What was your role at Catawba and how did that first job influence your career path?
That job gave me lived experience of what it means to be an engineer who centers safety – my own, my coworkers’, and the community’s. It is not enough to practice due care; you also must document it and have people engaged as regulatory authorities with the expertise to track, evaluate, and translate that safety. There is nothing like an unplanned outage of a 3.5 megawatt generating facility caused by a single blown fuse to leave a lasting impression about systems thinking!

One of your core projects currently is risk communication, which uses mental models to inform security. What are the key goals of this work?
Supporting human decision-making means communicating risks and the implications of people’s choices in a manner that aligns with their understanding. Some of the work I do that is not characterized as ‘mental models’ still aligns with this. For example, we have an open-source anti-phishing toolbar that produces extremely high levels of phishing detection accuracy without additional education or training. Yet it breaks the advertising model of the web, so it is not aligned with the current web environment.

Mental models can be applied to every user, not just the less technical ones. For example, we developed a system to enable developers to more easily generate the access control files required to comply with the Internet Engineering Task Force (IETF) Manufacturer Usage Description (MUD) specification, which describes a device’s expected network behavior in order to prevent the recruitment of IoT devices into botnets. This helper application, called the MUD Visualizer, was targeted at security and network engineers. We found that, with the Visualizer, network engineers could be as accurate as those with security expertise, so network systems developers need not also be security engineers for the description to be accurate. Developers are people, too.
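To give a sense of what such an access control description looks like, here is a minimal sketch that hand-builds a MUD-style file as JSON. This is only an illustration, not the MUD Visualizer itself: the device URL, ACL names, and controller address are hypothetical, and the field names follow the general shape of the IETF MUD specification (RFC 8520) rather than any particular manufacturer’s file.

```python
# Illustrative sketch only: a minimal MUD-style description, hand-built as a
# Python dict and serialized to JSON. Field names follow the general shape of
# RFC 8520; the URLs and ACL details are hypothetical, and this is not the
# MUD Visualizer discussed above.
import json
from datetime import datetime, timezone

mud_description = {
    "ietf-mud:mud": {
        "mud-version": 1,
        # Hypothetical URL where the manufacturer would host this file.
        "mud-url": "https://example.com/iot-lightbulb.json",
        "last-update": datetime.now(timezone.utc).isoformat(),
        "is-supported": True,
        "systeminfo": "Example IoT light bulb",
        # Policies reference named ACLs for traffic from and to the device.
        "from-device-policy": {
            "access-lists": {"access-list": [{"name": "from-lightbulb"}]}
        },
        "to-device-policy": {
            "access-lists": {"access-list": [{"name": "to-lightbulb"}]}
        },
    },
    "ietf-access-control-list:acls": {
        "acl": [
            {
                "name": "from-lightbulb",
                "type": "ipv4-acl-type",
                "aces": {
                    "ace": [
                        {
                            "name": "allow-vendor-cloud",
                            # The device may only talk to its vendor's cloud
                            # controller; the enforcing network denies the
                            # rest, which is how MUD limits a device's
                            # usefulness to a botnet.
                            "matches": {
                                "ietf-mud:mud": {
                                    "controller": "https://controller.example.com"
                                }
                            },
                            "actions": {"forwarding": "accept"},
                        }
                    ]
                },
            }
        ]
    },
}

print(json.dumps(mud_description, indent=2))
```

Writing these files by hand for every device and traffic direction is exactly the tedious, error-prone step the Visualizer was built to help engineers get right.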

In your recent work, “Criteria and Analysis for Human-Centered Browser Fingerprinting Countermeasures,” you and your co-authors Vafa Andalibi and Erfan Sadeqi Azer published a suite of algorithms for people to protect themselves with different models of privacy. What considerations should computer scientists and policy makers keep in mind when thinking about privacy?
In that work, we argued that different requirements for privacy require different technical implementations. The need to avoid tracking without being identified as someone seeking privacy is a reasonable requirement, and right now fingerprint-defeating technologies often effectively announce their use to the server. In that case you want a believable fingerprint with the largest possible anonymity set. However, if you do not mind the server knowing that you reject tracking, or do not care about the size of the anonymity set, then that equally reasonable requirement calls for a different implementation.
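As a toy illustration of that anonymity-set trade-off (not one of the countermeasures from the paper), the sketch below uses an invented population of fingerprints to contrast two strategies: blending into the most common fingerprint versus randomizing attributes on every visit.

```python
# Toy illustration of the anonymity-set trade-off; the fingerprints and
# population counts are invented, and this is not an algorithm from the paper.
from collections import Counter
import random

# Hypothetical observed browser fingerprints in some population of users.
population = (
    ["chrome-109|win10|1920x1080|en-US"] * 6000
    + ["safari-16|macos13|2560x1600|en-US"] * 2500
    + ["firefox-110|linux|1920x1080|en-GB"] * 1500
)
counts = Counter(population)

# Strategy 1: blend in by presenting the most common fingerprint.
# The anonymity set is everyone who genuinely shares that fingerprint.
common_fp, common_count = counts.most_common(1)[0]
print(f"Blend-in fingerprint: {common_fp!r}, anonymity set ~ {common_count}")

# Strategy 2: randomize attributes on every visit. Each value is likely
# unique (an anonymity set of one) and, because it keeps changing, it
# effectively announces to the server that a countermeasure is in use.
random_fp = f"ua-{random.randint(0, 10**9)}|screen-{random.choice([800, 1024, 4096])}"
print(f"Randomized fingerprint: {random_fp!r}, anonymity set ~ 1 (visibly spoofed)")
```

Which strategy is “right” depends entirely on whose threat model you are serving, which is the point of offering different implementations.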

If you apply this idea to COVID, for example, having one tracing app is going to be less optimal for varying privacy requirements than having different apps with different options. In our work, “Privacy in Crisis: Participants’ Privacy Preferences for Health and Marketing Data during a Pandemic,” we explored risk perceptions about data use during the COVID pandemic. When balancing beneficial public health initiatives against the importance of digital privacy, it is critical to allow people with different threat models to engage without forcing a single definition of privacy.

What is another policy issue related to information security that will take center stage in the near future?
Clearly, we will continue to have to struggle for human autonomy and individual empowerment. When key escrow was first proposed, in a model that combined software and hardware controls, there was extensive discussion about building computers with deliberately broken cryptography. Now we have seen the need for policy action on Right to Repair and, at the same time, abusive threats and prosecutions under the Digital Millennium Copyright Act (DMCA). Please note that the reason security researchers are able to resist intimidation via the DMCA is that professional societies are committed to obtaining an exemption to the DMCA for computer security research. The ACM TPC was first among equals in this effort!

Just as with physical risks, the right to make unwise decisions is something we will continue to struggle with. Everything from fireworks regulations to the regulation of foods made at home varies across the states. Yet these are inherently local decisions with local impacts. There is a continuing struggle to understand how to balance this across a global network.

Jean Camp is a Professor at Indiana University, where she directs the Center for Security and Privacy in Informatics, Computing and Engineering. Her research focuses on the intersection of human and technical trust, leveraging economic models and human-centered design to create safe, secure systems. Camp’s books include “Trust and Risk in Internet Commerce” and “Economics of Information Security.”

She is a member of the ACM US Technology Policy Committee (USTPC). Camp was recently named an ACM Fellow for contributions to computer security and e-crime measures.