By: George Kamis
Recently, hackers stole 614 GB of highly sensitive data related to a U.S. Navy project called “Sea Dragon.” The data was stored on a naval contractor’s unclassified network. The incident is still under investigation, although it is reasonable to assume that someone’s credentials were compromised somewhere along the way to initiate the attack. The Sea Dragon incident is only one example of external hackers taking advantage of employees’ information. Indeed, according to the 2017 Verizon Data Breach Investigations Report, 81 percent of data breaches involved hijacked user credentials that attackers used to gain access to internal systems and data. It is a troubling trend that organizations, including government agencies and contractors, have struggled to defend against while trying to balance stronger security with the freedom employees need to accomplish their missions.
Risk-adaptive > “all or nothing”
Defending against these types of attacks usually involves an “all or nothing” approach in which every user is considered a risk and subjected to the same security measures across the board. But a user whose account information may have been stolen by a bad actor is far different from someone innocently sitting in a cubicle diligently analyzing the day’s reports. The all-or-nothing approach can unfairly penalize workers who are simply trying to do their jobs. Shutting these workers out of the network can force them to seek workarounds, stifle productivity and cause workplace frustration. It can also create friction between IT and an agency’s workforce and generate additional security alerts for security teams to manage.
A risk-adaptive approach is a better way forward in managing defense security, especially when considering compromised credentials. It combines data and people to establish specific, human-centered, automated and adaptive security responses based on how individual users interact with data and systems.
How a risk-adaptive approach works
A risk-adaptive approach to security examines users’ interactions with data and systems. Analytics engines such as user and entity behavior analytics (UEBA) tools identify when, why and how employees access data, and examine that information in the context of each employee’s “normal” behavior. This allows them to distinguish activities that pose an actual threat from those that are innocuous.
Suppose a defense contractor supporting the Navy is creating underwater topographical maps. She regularly logs into three different systems and accesses cloud-based data sets about changes in submarine technology. Over time, these patterns establish a baseline understanding of this employee’s “normal” behavior in the system.
Suddenly this employee’s behavior changes. Her credentials are used during off hours and from a remote location overseas. They are used to request information on projects and programs outside her normal responsibilities, from a sensitive server she does not ordinarily access. These actions may signal a real-time anomaly in the contractor’s behavior, indicating that her credentials may no longer be in her control and that the data in the system is at risk. The system can automatically deny access to this account without impacting the rest of the team, immediately protecting the employee, the contractor and the program from data compromise.
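To make this concrete, here is a minimal sketch in Python of a baseline-versus-anomaly check for a single user. The names (AccessEvent, UserBaseline, DENY_THRESHOLD) and the simple attribute-counting logic are illustrative assumptions, not any vendor’s UEBA implementation.

```python
# Minimal sketch: learn a per-user baseline, then deny only out-of-pattern sessions.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AccessEvent:
    user: str
    hour: int        # 0-23, local time of the access
    location: str    # coarse geolocation, e.g. country code
    resource: str    # system or data set being accessed

@dataclass
class UserBaseline:
    """Rolling profile of what 'normal' looks like for one user."""
    usual_hours: set = field(default_factory=set)
    usual_locations: set = field(default_factory=set)
    usual_resources: set = field(default_factory=set)

    def learn(self, event: AccessEvent) -> None:
        self.usual_hours.add(event.hour)
        self.usual_locations.add(event.location)
        self.usual_resources.add(event.resource)

    def anomaly_score(self, event: AccessEvent) -> int:
        """Count how many attributes of this event fall outside the baseline."""
        return sum([
            event.hour not in self.usual_hours,
            event.location not in self.usual_locations,
            event.resource not in self.usual_resources,
        ])

DENY_THRESHOLD = 2  # illustrative cutoff: two or more out-of-baseline attributes

def decide(baseline: UserBaseline, event: AccessEvent) -> str:
    """Allow in-pattern activity; deny only the anomalous session."""
    if baseline.anomaly_score(event) >= DENY_THRESHOLD:
        return "deny"      # credentials may be compromised; block this session only
    baseline.learn(event)  # normal activity refines the baseline
    return "allow"

# The contractor's normal daytime access vs. an off-hours overseas attempt.
baseline = UserBaseline()
for hour in (9, 10, 14):
    decide(baseline, AccessEvent("contractor", hour, "US", "bathymetry-maps"))
print(decide(baseline, AccessEvent("contractor", 3, "overseas", "sensitive-server")))  # -> deny
```

Note that the deny decision applies to the anomalous session only; teammates whose activity stays within their own baselines continue working uninterrupted.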
Scoring the risk
The focus on the individual, rather than the group, is important. That is why risk-adaptive monitoring involves assigning “risk scores” to each user based on their proximity to data.
For instance, a project lead might receive a higher score than an administrative aide. This does not mean the project lead is a security threat. It simply indicates that the person’s access to valuable data makes them a high-value target for attackers looking for a way in the door.
Risk scores can rise or fall depending on a number of factors. A user’s role and responsibilities may change, necessitating a risk score adjustment. Or the user may start to exhibit questionable behavioral patterns, as our topographical-map contractor did, in which case the risk score may increase. The point is that each user is treated individually, keeping a closer eye on where threats are most likely to manifest.
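As a rough illustration, the following Python sketch derives a score from two factors, proximity to sensitive data and recent anomalous behavior. The weights and factor names are assumptions for demonstration, not a published scoring model.

```python
# Illustrative per-user risk scoring; weights and categories are assumptions.
SENSITIVITY_WEIGHT = {"public": 1, "internal": 3, "sensitive": 5}

def risk_score(data_sensitivity: str, anomalous_events_30d: int) -> int:
    """Combine proximity to valuable data with recent behavioral anomalies."""
    proximity = SENSITIVITY_WEIGHT[data_sensitivity]
    behavior = 2 * anomalous_events_30d  # each recent anomaly raises the score
    return proximity + behavior

# A project lead with sensitive-data access starts higher than an administrative aide,
# and the contractor's score jumps after the off-hours anomalies.
print(risk_score("sensitive", 0))  # project lead, no anomalies  -> 5
print(risk_score("internal", 0))   # administrative aide         -> 3
print(risk_score("sensitive", 3))  # contractor after anomalies  -> 11
```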
Training machines and humans
Once baselines and risk scores have been established, risk-adaptive systems automate responses to security alerts, from the mundane to the serious. The system prioritizes alerts based on individuals’ risk scores, along with the contextual behavior that contributes to those scores. This filters the alerts and allows security teams to understand which ones require action beyond the automated responses that have already been proportionately applied.
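A simple way to picture that prioritization is to sort the alert queue by the risk score of the user involved, with the automated response that was already taken recorded alongside each alert. The field names and sample alerts below are hypothetical.

```python
# Sketch of ordering security alerts so analysts see the highest-risk users first.
# Field names and sample data are illustrative only.
from typing import List, NamedTuple

class Alert(NamedTuple):
    user: str
    user_risk_score: int
    context: str        # short description of the triggering behavior
    auto_response: str  # action the system already took proportionately

def prioritize(alerts: List[Alert]) -> List[Alert]:
    """Highest user risk score first; analysts work from the top of the queue."""
    return sorted(alerts, key=lambda a: a.user_risk_score, reverse=True)

queue = prioritize([
    Alert("aide",       3,  "bulk download of internal reports",    "logged"),
    Alert("contractor", 11, "off-hours access to sensitive server", "access denied"),
    Alert("lead",       5,  "login from new device",                "step-up authentication"),
])
for alert in queue:
    print(alert.user, alert.user_risk_score, alert.auto_response)
```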
In a risk-adaptive approach, data and people work in tandem to provide ongoing security in an ever-changing defensive landscape. Legacy systems no longer provide the type of security needed to protect valuable assets and resources. Focusing on user behavioral patterns through a risk-adaptive approach results in more effective security and productivity — without creating unnecessary friction between IT and an agency’s workforce.
George Kamis is chief technology officer for global governments and critical infrastructure at information security company Forcepoint. He works closely with information assurance industry leaders, government executives and the Forcepoint executive management team to help guide their long-term technology strategy and keep it aligned with federal and industry requirements.