Your Cloud Is Probably Misconfigured (And That's a Bigger Deal Than You Think)

Most cloud breaches aren't caused by elite hackers - they're caused by a checkbox someone forgot to tick. Here's what goes wrong and how to fix it.


Here’s a stat that should make you uncomfortable: according to IBM’s X-Force Threat Intelligence Index, roughly 23% of cloud security incidents stem from misconfigurations. Not zero-days. Not nation-state APTs. Just… settings that were left wrong.

And Gartner’s been saying for years that through 2025, 99% of cloud security failures would be the customer’s fault. Not AWS’s. Not Azure’s. Yours.

The shared responsibility thing nobody reads

Every major cloud provider operates on a shared responsibility model. AWS secures the infrastructure - the physical data centers, the hypervisors, the network backbone. You secure everything you put on that infrastructure - your data, your IAM policies, your security groups, your encryption settings.

The problem? Most orgs assume “we’re on AWS” means “AWS handles security.” It doesn’t. Capital One learned this the hard way in 2019 when a misconfigured WAF led to the exposure of 106 million customer records. AWS wasn’t at fault - the breach happened entirely within Capital One’s responsibility zone. An over-provisioned IAM role, a WAF vulnerable to SSRF, and weak detection. $80M in fines and a $190M class-action settlement later, the lesson was clear.

That was 2019. You’d think we’d have learned by now.

The usual suspects

Datadog’s 2025 State of Cloud Security report found that 59% of AWS IAM users have active access keys older than a year. Half of those keys haven’t even been used in 90+ days. Long-lived creds are the single most common documented cause of public cloud breaches.

Here’s what we see over and over again:

Public S3 buckets. AWS made Block Public Access the default for new buckets in 2023, but legacy buckets are still out there, wide open. Trend Micro flagged public bucket access as its most frequently triggered S3 misconfiguration check, with a severity rating of “very high.”

Overly permissive IAM. Policies with Action: * and Resource: * because someone needed quick access six months ago and never scoped it down. One compromised credential with admin-level access and you’ve handed the keys to your entire environment.

Wide-open security groups. Inbound 0.0.0.0/0 on port 22 or 3389. That’s SSH or RDP open to the entire internet. We still find this regularly during assessments.

No encryption at rest. Unencrypted EBS volumes, RDS instances, S3 objects. If a snapshot leaks or a backup gets compromised, that data is plaintext.
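These checks are simple enough to automate yourself. Here’s a minimal sketch in Python - the input shapes are simplified for illustration (real AWS API responses are more deeply nested), but the logic mirrors the first two suspects above:

```python
ADMIN_PORTS = {22, 3389}  # SSH and RDP

def find_open_admin_ports(security_group_rules):
    """Flag inbound rules exposing SSH/RDP to the whole internet."""
    findings = []
    for rule in security_group_rules:
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] in ADMIN_PORTS:
            findings.append(f"port {rule['port']} open to the internet")
    return findings

def is_overly_permissive(policy_statement):
    """Flag IAM statements allowing every action on every resource."""
    actions = policy_statement.get("Action", [])
    resources = policy_statement.get("Resource", [])
    # IAM policies allow either a string or a list here
    if isinstance(actions, str):
        actions = [actions]
    if isinstance(resources, str):
        resources = [resources]
    return "*" in actions and "*" in resources

# The classic "someone needed quick access six months ago" policy
stmt = {"Effect": "Allow", "Action": "*", "Resource": "*"}
print(is_overly_permissive(stmt))  # True
```

In practice you’d feed these functions data pulled from your provider’s APIs - the point is that none of this requires expensive tooling to detect.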

This isn’t theoretical. In 2023 alone, Toyota exposed 260,000 customer records through a misconfigured cloud environment. Microsoft’s AI research team accidentally leaked 38TB of internal data via a bad Azure SAS token. These aren’t small shops - they have massive security teams and still got tripped up.

What you can actually do about it

The good news: most of this is fixable, and you don’t need a massive budget.

Start with visibility. You can’t fix what you can’t see. Tools like Prowler (open-source, supports AWS/Azure/GCP/K8s) will scan your environment against CIS benchmarks, PCI-DSS, HIPAA, SOC 2, and more in about 15 minutes. ScoutSuite is another solid open-source option for multi-cloud audits.

Kill long-lived credentials. Switch to IAM roles with temporary creds wherever possible. If you absolutely need access keys, rotate them regularly and monitor for unused ones. AWS IAM Access Analyzer can help identify over-provisioned policies and external resource sharing.
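The “old or idle” test is easy to express in code. A sketch, assuming you feed it key metadata of the kind `aws iam list-access-keys` and `get-access-key-last-used` return (the dict keys here are simplified placeholders, not the actual API field names):

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=365)   # the Datadog "older than a year" threshold
MAX_IDLE = timedelta(days=90)   # the "unused in 90+ days" threshold

def flag_stale_keys(keys, now=None):
    """Return IDs of keys over a year old or unused for 90+ days."""
    now = now or datetime.now(timezone.utc)
    stale = []
    for key in keys:
        too_old = now - key["created"] > MAX_AGE
        last_used = key.get("last_used")
        idle = last_used is None or now - last_used > MAX_IDLE
        if too_old or idle:
            stale.append(key["id"])
    return stale
```

Run something like this on a schedule and alert on the output - a stale key you know about is a key you can rotate or delete.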

Lock down at the account level. Enable S3 Block Public Access at the account level, not just per-bucket. Use service control policies (SCPs) through AWS Organizations as guardrails so even well-meaning devs can’t accidentally open things up.
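As a sketch of what such a guardrail looks like: an SCP with an explicit Deny overrides any IAM Allow in member accounts, so even an admin role can’t loosen the setting. The structure below is standard SCP JSON (shown as a Python dict for easy serialization); the S3 action names are real, but verify the policy against your own Organizations setup before deploying it:

```python
import json

# SCP that blocks changes to S3 Block Public Access settings
# in member accounts. Deny in an SCP beats any IAM Allow.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDisablingS3BlockPublicAccess",
        "Effect": "Deny",
        "Action": [
            "s3:PutAccountPublicAccessBlock",
            "s3:PutBucketPublicAccessBlock",
        ],
        "Resource": "*",
    }],
}
print(json.dumps(scp, indent=2))
```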

Shift left. If you’re using Terraform or CloudFormation, run policy-as-code tools like Checkov or tfsec in your CI/CD pipeline. Catch the misconfiguration before it ever reaches production.
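A minimal CI job for this might look like the following - a sketch assuming GitHub Actions and Terraform code under `infra/` (adjust names and paths to your repo; the Checkov flags shown are its standard `-d` directory and `--framework` options):

```yaml
# Fail the build when Checkov finds Terraform misconfigurations.
name: iac-scan
on: [pull_request]
jobs:
  checkov:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install checkov
      - run: checkov -d infra/ --framework terraform
```

The key design choice is running this on pull requests, not on a schedule: a failing check blocks the merge, so the misconfiguration never exists in production to begin with.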

Use what your provider gives you. AWS Config, Security Hub, Azure Defender, GCP Security Command Center - these aren’t perfect, but they’re built in and they’re a solid starting point.

The real issue

Cloud misconfigurations aren’t a technology problem. They’re an ops problem. Teams move fast, skip reviews, copy-paste policies from Stack Overflow, and forget to circle back. The average cloud account has 43 misconfigurations according to recent industry data. Multiply that across a multi-account setup and you’ve got a minefield.

The fix isn’t buying another tool - it’s building the habit of reviewing configs, auditing access, and treating your cloud environment with the same rigor you’d give a physical data center.

If you’re not sure where your blind spots are, that’s exactly what we help with. Let’s take a look together.

Ready to secure your digital future?

Let's discuss how Libre Labs can help protect and transform your business.