Experts address often-overlooked aspects of cloud security and advise companies looking to improve their security posture.
2021 RSA CONFERENCE – Enterprise cloud adoption offers a slew of advantages and opportunities, both for companies and for the attackers who threaten them, along with new threats and obstacles. Even seasoned users of cloud technology and services can benefit from some additional security training.
Given the year that preceded this year's all-virtual RSA Conference, when the COVID-19 pandemic made companies increasingly reliant on cloud services and forced them to protect fully remote teams, it's no wonder that cloud security was a hot topic. Speakers looked at the holes that are often ignored and provided practical advice on minimizing the risks.
According to Matthew Chiodi, Palo Alto Networks' chief security officer for public cloud, identity and access management (IAM) in the cloud is one of these blind spots. A generic cloud account may have two roles and six assigned policies, but in practice, determining what someone can and can't do is usually far more complicated.
"It's normally hundreds of roles and maybe even thousands of policies," Chiodi said of most development accounts he's seen. "Understanding what we call net effective permissions becomes extremely difficult." As businesses use more cloud accounts, the problem gets even worse.
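A minimal sketch helps show why net effective permissions get hard to reason about. The evaluator below is not the real AWS policy engine (it ignores wildcards, resources, and conditions), but it captures the core rule that makes manual review of hundreds of attached policies painful: an explicit Deny anywhere overrides every Allow.

```python
def is_allowed(action, policies):
    """Evaluate an action against simplified policy documents.

    Each policy is a list of statements like
    {"Effect": "Allow"|"Deny", "Action": ["s3:GetObject", ...]}.
    AWS semantics in miniature: explicit Deny wins, then Allow,
    otherwise implicit deny.
    """
    allowed = False
    for policy in policies:
        for stmt in policy:
            if action in stmt["Action"]:
                if stmt["Effect"] == "Deny":
                    return False  # explicit deny always wins
                if stmt["Effect"] == "Allow":
                    allowed = True
    return allowed

# Two policies attached to the same role; the answer depends on both.
role_policies = [
    [{"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"]}],
    [{"Effect": "Deny", "Action": ["s3:PutObject"]}],
]

print(is_allowed("s3:GetObject", role_policies))  # True
print(is_allowed("s3:PutObject", role_policies))  # False
```

With two short policies the outcome is obvious; with hundreds of roles and thousands of policies, only tooling can answer the question reliably.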
Palo Alto Networks gathered "a huge, massive data collection" of publicly accessible GitHub data to better understand how widespread the problem is: 283,751 files and 145,623 repos, from which the researchers were able to derive 68,361 role names and 32,987 possible cloud accounts. They used a combination of the 500 most popular role names and verified cloud account lists to find potential misconfigurations.
Based on what they discovered, these misconfigurations may have given them access to thousands of EC2 snapshots, hundreds of S3 buckets, and a plethora of KMS keys and RDS snapshots.
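The article doesn't publish the researchers' tooling, but the basic extraction step can be sketched: IAM role ARNs embedded in public code reveal both an account ID and a role name. The regex and sample snippet below are illustrative assumptions, not the actual methodology.

```python
import re

# An IAM role ARN has the form arn:aws:iam::<12-digit-account-id>:role/<name>.
# Finding these in public text yields candidate (account, role) pairs.
ARN_RE = re.compile(r"arn:aws:iam::(\d{12}):role/([\w+=,.@-]+)")

def extract_roles(text):
    """Return (account_id, role_name) pairs found in a blob of text."""
    return ARN_RE.findall(text)

# Hypothetical leaked config line; the account ID and role name are made up.
snippet = 'role_arn = "arn:aws:iam::123456789012:role/deploy-admin"'
print(extract_roles(snippet))  # [('123456789012', 'deploy-admin')]
```

Cross-referencing harvested role names against known account IDs is what lets an attacker (or researcher) test for roles that can be assumed from outside the account.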
"It's almost certainly much worse than a compromised cloud host when you have a compromised cloud account due to one of these forms of misconfigurations," Chiodi said.
An attacker who can compromise a single host can exploit a bug and gain access to data, but if network segmentation is used, their options are restricted. Patches and multi-factor authentication wouldn't help in this case because "[an attacker] can weave through all of that stuff when you have an identity-based misconfiguration at the CSP level," according to the researchers.
The Risks of Infrastructure-as-Code
According to Chiodi, infrastructure-as-code (IaC), a method of managing and provisioning infrastructure using code rather than manual processes, is "very blossoming" for most businesses. Although this approach has advantages for security teams, it also has drawbacks.
Palo Alto Networks combed through nearly a million IaC templates on GitHub. The researchers discovered that 42% of AWS CloudFormation template users have at least one insecure configuration and that over 75% of cloud workloads expose SSH. Cloud storage logging is turned off in 60% of cases, and encryption at the database layer is fully disabled in 43% of organizations configuring cloud-native databases via IaC.
"We discovered that when companies use infrastructure-as-code to establish external and even internal security boundaries, they expose sensitive ports like SSH directly to the Internet 76 percent of the time," he said.
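As a rough illustration of the kind of static check behind these findings, the sketch below flags CloudFormation security groups that open SSH (port 22) to the entire Internet. The template, resource name, and check logic are invented simplifications; production scanners such as Checkov or cfn-nag apply far more rules.

```python
def find_open_ssh(template):
    """Return names of security groups exposing port 22 to 0.0.0.0/0."""
    findings = []
    for name, res in template.get("Resources", {}).items():
        if res.get("Type") != "AWS::EC2::SecurityGroup":
            continue
        for rule in res["Properties"].get("SecurityGroupIngress", []):
            if (rule.get("CidrIp") == "0.0.0.0/0"
                    and rule.get("FromPort", 0) <= 22 <= rule.get("ToPort", 0)):
                findings.append(name)
    return findings

# A minimal CloudFormation template (as a parsed dict) with the exact
# misconfiguration the researchers found in 76% of boundary definitions.
template = {
    "Resources": {
        "BastionSG": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "SecurityGroupIngress": [
                    {"IpProtocol": "tcp", "FromPort": 22,
                     "ToPort": 22, "CidrIp": "0.0.0.0/0"}
                ]
            },
        }
    }
}
print(find_open_ssh(template))  # ['BastionSG']
```

Because IaC templates are plain data, checks like this can run in CI before the insecure boundary ever reaches a cloud account.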
The numbers were lower for Terraform, which lets companies use multi-cloud IaC templates across all major cloud service providers, but "consistent inconsistencies" remained. More than 20% of all Terraform configuration files had at least one insecure configuration; access logging for S3 buckets was disabled in 67%, and object versioning was disabled in more than half.
More companies have a "converged perimeter," which Splunk's Rod Soto defines as environments with assets both behind an Internet gateway and in the cloud, such as DevOps and ITOps environments. In these settings, there are many attacker tactics, techniques, and procedures (TTPs) to be aware of.
The creation of temporary or permanent keys is one example. "We've seen cases where developers had root keys in an AWS environment, which is pretty bad," Soto explained. "You should never offer root access; instead, you should uphold the principle of least privilege and separation of duties... Once you have a root key, you can do whatever you want and take control," he added.
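In contrast to handing a developer root keys, least privilege means granting only the specific actions on the specific resources an application needs. The policy document below is an invented example (the bucket name and prefix are hypothetical), shown to make the contrast concrete.

```python
import json

# Illustrative only: a narrowly scoped policy a developer could be granted
# instead of root credentials. The resource ARN is a made-up example.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # only the two S3 actions the app actually performs
            "Action": ["s3:GetObject", "s3:PutObject"],
            # scoped to a single bucket prefix, not "Resource": "*"
            "Resource": "arn:aws:s3:::example-app-bucket/uploads/*",
        }
    ],
}

print(json.dumps(least_privilege_policy, indent=2))
```

A compromised key holding this policy lets an attacker touch one prefix of one bucket; a compromised root key lets them take over the account.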
Spilling Secrets in the Cloud
While most security professionals are aware that accidental data exposure is a common cloud security problem, many don't notice when it occurs. Jose Hernandez, principal security researcher, and Rod Soto, chief security research engineer, both of Splunk, talked about how corporate secrets end up exposed in public repositories.
Credentials are everywhere in today's environments: SSH key pairs, Slack tokens, IAM secrets, SAML tokens, API keys for AWS, GCP, and Azure, and so on. When credentials aren't properly secured and are left exposed, most commonly in a public repository (Bitbucket, GitLab, GitHub, Amazon S3, and open databases are the most popular places where application secrets turn up), it's a common risk scenario.
"If you're an intruder looking for someone who has embedded credentials that can be reused, either through omission or negligence, these will be your sources of leaked credentials," Soto said, adding that these can help attackers pivot between endpoints and the cloud.
According to Splunk, 276,165 organizations have leaked secrets on GitHub. GCP service account tokens were the most commonly leaked, appearing in 34% of instances, followed by passwords embedded in URLs (30%) and AWS API keys (12.7%). Hernandez said that once a leaked secret was discovered, it took an average of 52 days for it to be removed from the GitHub project.
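The secret types Splunk describes are findable with simple pattern matching, which is roughly how open-source scanners work. The patterns below are simplified assumptions covering three of the categories mentioned; real tools such as gitleaks or truffleHog use many more signatures plus entropy analysis.

```python
import re

# Simplified signatures for a few common secret types.
PATTERNS = {
    "aws_api_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "slack_token": re.compile(r"\bxox[baprs]-[0-9A-Za-z-]{10,}\b"),
    "password_in_url": re.compile(r"[a-z][a-z0-9+.-]*://[^/\s:]+:[^@\s]+@"),
}

def scan(text):
    """Return the names of secret types found in a blob of text."""
    return sorted(name for name, rx in PATTERNS.items() if rx.search(text))

# Hypothetical leaky config file contents; the credentials are fake.
leaky = ("db = 'postgres://admin:hunter2@db.internal:5432/app'\n"
         "key = 'AKIAABCDEFGHIJKLMNOP'")
print(scan(leaky))  # ['aws_api_key', 'password_in_url']
```

Running a check like this as a pre-commit hook catches the credential before it ever reaches a public repository, which beats a 52-day average cleanup window.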
Attack Detection & Defensive Strategies
It's no secret that detecting malicious activity in the cloud is harder, a point Alfie Champion, a cybersecurity expert with F-Secure Consulting, made in an RSAC talk on attack detection: relatively few cloud behaviors are inherently malicious, which makes separating benign activity from attacks more difficult.
"When it comes to cloud detection, context is becoming increasingly important," Champion said. "With so much API activity going on, knowing an event, the meaning behind it, and the context in which it occurs can be critical to developing high-fidelity detections."
One of the most common mistakes companies make during cloud migration is aggregating telemetry without context. When an analyst performs an investigation, there's no way of knowing which account a log belongs to, and no way for them to pivot into that account to understand what's going on. "What is bad in one account can be good in another," he said, adding that "you need that background to find that out."
Many people overlook authentication logs and management interfaces, which link on-premises and cloud environments. Larger companies are more likely to manage several cloud accounts in a federated manner, he said, and these logs would enable them to make explicit connections between the activities they observe.
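Champion's point about context can be sketched as a simple enrichment step: tag each raw event with which account it came from and what that account is for, so the same API call can be judged differently per account. The account IDs and inventory mapping below are invented for illustration.

```python
# Hypothetical account inventory an org might maintain.
ACCOUNT_CONTEXT = {
    "111111111111": {"name": "prod-payments", "env": "prod"},
    "222222222222": {"name": "dev-sandbox", "env": "dev"},
}

def enrich(event):
    """Attach account name and environment to a raw cloud event."""
    ctx = ACCOUNT_CONTEXT.get(event["account_id"],
                              {"name": "unknown", "env": "unknown"})
    return {**event, **ctx}

event = {"account_id": "111111111111", "api_call": "iam:CreateAccessKey"}
print(enrich(event))
# {'account_id': '111111111111', 'api_call': 'iam:CreateAccessKey',
#  'name': 'prod-payments', 'env': 'prod'}
```

An `iam:CreateAccessKey` call might be routine noise in `dev-sandbox` but well worth an alert in `prod-payments`; without the enrichment, the analyst can't tell the two apart.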
It's worth remembering that each major cloud provider has its own approach to logging and threat detection, so administrators will need to take extra steps to ensure they're getting the information they need. In his RSAC talk, Brandon Evans, a senior security engineer with Zoom Video Communications, noted that flow logging shows where traffic comes from, where it goes, and how much data is transferred, which can point to potentially malicious activity, but it isn't enabled by default.
"Flow logging is not enabled by default on any of the big three cloud providers," he said, adding that customers must explicitly opt in and specify a log retention policy. According to him, AWS, Azure, and GCP differ in how long it takes to enable logs and start receiving them, as well as in overall log retention times, command-line support, and logging of blocked ingress traffic.
According to Evans, businesses should make sure they are collecting cloud API and network flow logs from each cloud provider they use. When they discover flaws in cloud technology and configuration, they can work with engineering over the long run to harden permissions and apply the principle of least privilege.
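Once flow logging is enabled, the records are plain text that's easy to analyze. The sketch below parses AWS VPC Flow Log records in their default field order and flags unusually large accepted transfers; the log lines, addresses, and the 10 MB threshold are illustrative assumptions, not a detection rule from the talk.

```python
# Default AWS VPC Flow Log field order (version 2 records).
FIELDS = ("version account_id interface_id srcaddr dstaddr srcport dstport "
          "protocol packets bytes start end action log_status").split()

def large_transfers(lines, threshold=10_000_000):
    """Flag accepted flows that moved more than `threshold` bytes."""
    flagged = []
    for line in lines:
        rec = dict(zip(FIELDS, line.split()))
        if rec["action"] == "ACCEPT" and int(rec["bytes"]) > threshold:
            flagged.append((rec["srcaddr"], rec["dstaddr"], int(rec["bytes"])))
    return flagged

# Fabricated sample records: one large outbound transfer, one routine SSH flow.
logs = [
    "2 123456789012 eni-abc123 10.0.0.5 203.0.113.9 443 49152 6 12000 "
    "52000000 1620000000 1620000600 ACCEPT OK",
    "2 123456789012 eni-abc123 10.0.0.5 10.0.0.9 22 49153 6 40 "
    "4000 1620000000 1620000600 ACCEPT OK",
]
print(large_transfers(logs))  # [('10.0.0.5', '203.0.113.9', 52000000)]
```

This is exactly the "how much data went where" signal Evans describes: the logs don't show packet contents, but a 52 MB flow to an unfamiliar external address is worth a look.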
It's beneficial for companies to create a "cloud detection stack," which can assist in ingesting the appropriate logs and appropriately presenting them. In his conversation with Champion, Nick Jones, a senior security consultant with F-Secure, noted that while the industry likes to talk about a "single pane of glass" for this activity, he believes it is "useful, but maybe not important."
"The real issue here is that attacks rarely occur in isolation in a single environment," he said. "An intruder would most likely attempt to pivot or shift laterally from your on-premises estate into the cloud, or vice versa, or between two environments."
As a result, analysts would need to look at logs from one data source and pivot into the next, he added. Jones proposed prioritizing Control Plane audit logs such as CloudTrail and Audit Log for visibility of all administrative activities, even though there are many data sources to deal with. Storage access logs, function executions, and KMS key access are all critical service-specific logs because they display access and use of specific resources and facilities.
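Jones's prioritization can be expressed as a simple triage filter over control-plane audit events: surface the service-specific sources he calls critical (KMS key usage, storage access, function executions) first. The event shapes and the priority set below are simplified assumptions modeled on CloudTrail records.

```python
# Event sources Jones singles out as high-value, per the talk; the exact
# set an org uses would depend on its own threat model.
HIGH_PRIORITY = {"kms.amazonaws.com", "s3.amazonaws.com",
                 "lambda.amazonaws.com"}

def triage(events):
    """Return only the events from high-priority service sources."""
    return [e for e in events if e["eventSource"] in HIGH_PRIORITY]

# Fabricated CloudTrail-shaped events.
events = [
    {"eventSource": "kms.amazonaws.com", "eventName": "Decrypt"},
    {"eventSource": "ec2.amazonaws.com", "eventName": "DescribeInstances"},
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject"},
]
for e in triage(events):
    print(e["eventName"])
```

In practice this runs inside the SIEM rather than a script, but the principle is the same: start from the logs that show access to sensitive resources, then widen out.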
According to Champion, it's never too early to threat model and test offensive scenarios. What would an intruder do to steal one of your assets? How might they get around your security measures? He suggested defining the company's sensitive data, thinking about attackers' goals, end targets, and likely starting points, and then prioritizing attack routes accordingly.