6 May 2023

Safer Together: Inclusive Cybersecurity

Camille Stewart Gloster, Samantha Ravich

“To err is human,” observed Alexander Pope, poet of the Enlightenment, in his 1711 poem An Essay on Criticism. Pope understood that error is intrinsic to human nature, and that civilization would be stronger if it recognized and mitigated such fallibility rather than dismissing it. The world of cybersecurity would be wise to relearn Pope’s observation.

In today’s world, a vast, globally connected, digital platform is the foundation of an individual’s ability to participate in society and prosper. From ensuring one’s shelter, food, utilities, communications, education, medical care, and income to casting a vote, driving a car, or receiving government benefits, internet-enabled technology is indispensable to modern life. At the same time, this digital network is extraordinarily vulnerable to the actions of any one person. This is not hyperbole or a theoretical risk. It is a reality that shows itself every day.

In 2014, an IBM Security Services report concluded that human error was a contributing factor in 95 percent of all cyber incidents.1 Human error today can have serious national economic and security consequences.

In the spring of 2020, a large-scale spear-phishing campaign tricked numerous employees of U.S. defense and aerospace contractors into opening emails disguised as job offers. Malware infected their devices, and North Korean hackers exfiltrated defense technology.2 A single person’s error could have allowed a hostile foreign government to gain critical intelligence on the F-22 fighter jet program. The targets of this cyberattack were experienced technicians and engineers, well versed in operational security and aware they could be under foreign surveillance. And even they erred.

Cyberattacks are growing in volume and intensity. Governments and companies are spending billions of dollars to create and deploy new layers of cybersecurity technology. But too little attention and too few resources are devoted to understanding the human element in this equation. Employees will yearn for better jobs and continue opening emails that suggest such an opportunity is in the offing. People, tired of the dozens of passwords they are forced to create, will continue to select weak passwords they can remember.

Conventional wisdom holds that the user is the weakest link in cybersecurity. But perhaps this is incorrect. Perhaps the problem lies with the security community failing users by not accounting for the people at the center of this work. The security community may be assuming an ideal, hyper-informed user, or at least modeling a homogenous user population.3 This paper posits that the lack of life-experience diversity in the developer and technical communities results in too little consideration of users, and of how they interact with technology, when cybersecurity protocols are built.4 Developers may be creating technology by themselves, for themselves, while users who click on an email containing malware or who select and reuse weak passwords are blamed for ignorance or laziness. This equation must change.

The reality is that cybersecurity best practices are not meeting the population where they are: where and how they live, how they understand and interact with technology in their lives and lifestyles, and even how they participate in their communities. The security technology community promotes multifactor authentication as the solution to many cybersecurity ills, for example, but fails to recognize that users in poorer communities may not be able to afford multiple devices or may share a single cell phone with friends or family members. A better understanding of the user community, including how different users interact with technology, their risk tolerance, their perception of the threat, and their level of trust or distrust in, and historical interaction with, various institutions, can help drive a cultural change toward more positive cybersecurity behaviors and reduce national cyber risk.

What follows is a bold challenge to the cybersecurity community: “inclusive cybersecurity” must become the norm if society is to give itself a fighting chance to protect its networks going forward. The paper begins with an exploration of the relationship between cultural identity and cybersecurity. And while Camille Stewart Gloster brings both qualitative and quantitative research to bear on this exploration, she acknowledges that it barely scratches the surface of what needs to be studied. Still, the findings are robust enough to start building a framework for more inclusive cybersecurity. To repurpose for cybersecurity Benjamin Franklin’s remark at the signing of the Declaration of Independence: “We must all hang together, or, most assuredly, we shall all hang separately.”
Executive Summary

As the frequency and severity of cyberattacks grow, countermeasures have proliferated to protect against digital extortion, disinformation, fraud, and other malign activities. These countermeasures are critical for protecting everything from national security systems to consumer devices. The technology deployed to counter these threats is often rooted in a strictly technical understanding of the challenge.

Ignoring the individuals who use these systems, or assuming all individuals interact with security measures in the same way, creates new vulnerabilities. The individual is not “the weakest link in security.” Rather, the systems we have created have failed to account for the range of human behavior. Numerous studies have confirmed that cultural identity and cultural norms can influence an individual’s risk posture, including their level of trust in authoritative experts and their willingness to adopt new security protocols. Gender, ethnicity, age, education level, and economic status may all affect how, and to what extent, an individual adheres to cybersecurity best practices.

This paper posits that a user-centric, inclusive focus can significantly enhance cybersecurity. If cyberspace comprises technology, people, and processes, then cybersecurity and risk mitigation must consider the sum of those elements. Because technology use varies with cultural context, understanding those differences is essential to building effective user-centric security programs that take a holistic view of the threat and opportunity landscape. Accounting for the economic, ethnic, and cultural backgrounds of individuals and communities is necessary to properly combat cyber threats.

This paper probes several questions surrounding the relationship between cultural identity and cybersecurity, drawing upon two original, small-scale studies conducted by the author. The first was a survey of participants roughly representative of the U.S. population, which aimed to assess trends in cybersecurity behavior based on identity attributes. The study asked participants a series of demographic questions and ran regression analyses to identify trends relating to cybersecurity. It found differences in cybersecurity practices based on age, sex, and education level, with some limited differences based on race.

The second study collected anecdotal information about participants’ cyber behaviors and explored how those behaviors were influenced by participants’ cultural identities and experiences. This diary study found indications of connections among cybersecurity behavior, risk tolerance, and cultural identity.

With these insights, the paper proposes that user-centric solutions allowing for a socio-technical view of the threat landscape could improve cybersecurity. It explores five principles upon which to build an inclusive cybersecurity framework, focused on the perils of groupthink and the necessity of diversity and inclusion, individual-centrism, and clear communication. The paper concludes with examples of how applying these principles could produce better outcomes than traditional, technology-centric solutions.
