Securing the Cloud

Part Three – Fixing the gap in the Shared Responsibility Model

August 9, 2020

This is the third in our Ionburst blog series examining the challenges and benefits of Cloud object storage. It considers data privacy and security concerns, highlights examples of Cloud object storage leaks, and describes how a new generation of solutions such as Ionburst apply security, resiliency and privacy technologies to alleviate these Cloud-specific challenges, giving organisations comprehensive control over their Cloud storage security and recovery.

This short series covers:

  1. Setting the scene: Data privacy implications for Cloud object storage and the Shared Responsibility Model;
  2. The problem of how Cloud object storage leaks happen: A look at data compromise in the Cloud and examples of how and why it occurs, and the extreme impacts that can result;
  3. A Cloud-era solution: How Ionburst uniquely plugs the gap in the Cloud data privacy model by enabling customised organisational control over data privacy and security.

In our second blog we introduced how Cloud storage typically works, using Amazon S3 as an example. We then reviewed recent examples of public Cloud data breaches, highlighting their scale and impact, and the data they exposed.

The symptoms and causes of the critical Cloud data storage problem

Let’s recap on the problem at hand. The latest research reports that 12.3 billion Cloud data records were stolen in 2019, a four-fold increase on the previous year. More than two-thirds of these, some 8.5 billion records, were compromised or stolen through internal error or misconfigured access policies.

Spoiler alert – The problem is not the underlying vulnerability of Cloud platforms.

This problem exists because of a fundamental lack of awareness of the customer’s responsibilities under the Shared Responsibility Model. Perhaps more worryingly, it highlights a skills gap in managing customer data stored in the Cloud.

This gap may be due to a failure to appreciate that historic, outsourced organisational IT is not the same as Cloud-migrated IT. Data privacy regulations wouldn’t have it any other way in any case: you can move who does the work, but not who owns and is responsible for the data.

Our second blog highlighted examples of breaches that exposed this gap for organisations using AWS’s S3 storage platform. That’s not to say S3 itself is vulnerable; rather, as the global market leader, it’s the most targeted.

For its part, AWS has developed features for customers to manage public access settings for S3 buckets, so there are processes to help control access to S3 buckets and to prevent data breaches caused by security misconfigurations.
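For illustration, those settings can be applied in a few lines. Below is a minimal sketch using Python and boto3, with a hypothetical bucket name:

```python
# A minimal sketch: enabling S3 Block Public Access on an existing
# bucket with boto3. The bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="example-sensitive-data",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject requests that set public ACLs
        "IgnorePublicAcls": True,       # treat any existing public ACLs as private
        "BlockPublicPolicy": True,      # reject bucket policies that allow public access
        "RestrictPublicBuckets": True,  # limit public-policy buckets to the owner and AWS services
    },
)
```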

However, security researchers such as UpGuard report that breaches continue to escalate. IBM and the Ponemon Institute support this in their latest global data breach statistics, which report that the average breach now costs $4m for Cloud-migrated organisations. Gartner research reports that 90% of migrations to public Cloud inadvertently expose sensitive data, and that 99% of failures are down to poor Cloud migration or misconfiguration.

The statistics are therefore overwhelming…

Shared Responsibility Model (SRM) aside, many believe Cloud providers such as AWS should do more.

This is akin to the need for privacy and, at the same time, a desire for Facebook – it’s a dichotomy that’s virtually impossible to reconcile.

AWS, for example, provides private S3 storage buckets by default. So how can hackers and researchers, both white-hat and black-hat, regularly expose open buckets?

Some security researchers, like UpGuard, argue that AWS, and likely others, have made it far too easy for S3 users to misconfigure buckets to make them open and publicly accessible over the Internet. They conclude AWS should design and architect better security around S3. Specifically, they focus on AWS’s authenticated user permissions.

The Cloud is not a direct image of the on-premise world, but access security is a critical area to get right nevertheless.

In practice, AWS’s access security is flexible enough to grant virtually any AWS user powerful rights, depending on how a customer configures access. This has created confusion between user access, access control list (ACL) permissions, and the bucket policies that govern data objects within an S3 bucket.
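To make the distinction concrete, the sketch below (with hypothetical bucket and key names) shows the two mechanisms operating at different levels: a bucket policy is evaluated for requests against the bucket as a whole, while an object ACL can independently open up a single object.

```python
# A sketch of the two overlapping permission mechanisms.
# Bucket and key names are hypothetical.
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-sensitive-data"

# 1. A bucket policy is evaluated for every request against the bucket.
#    This one denies any request made without TLS.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))

# 2. An object ACL is evaluated per object, independently of the policy above.
#    Unless Block Public Access is enabled, this one call makes a single
#    object world-readable even though the bucket policy grants nothing.
s3.put_object_acl(Bucket=bucket, Key="report.csv", ACL="public-read")
```

It is exactly this kind of interaction – a tight bucket policy quietly undermined by one permissive object ACL – that catches teams out.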

Treating all AWS users with a one-size-fits-all approach is challenging without highly skilled and capable information security professionals. Perhaps it’s misunderstood. Perhaps it’s underestimated. Perhaps people really do want security and open access at the same time, and assume that in the Cloud it’s someone else’s problem… Bottom line: Cloud customers are responsible for the security, privacy and resilience of their data, irrespective of where and how it’s stored, and by whom.

Regulations such as GDPR, HIPAA, HITECH and CCPA care not how well data was secured, or even whether it was encrypted, once it has been breached. That’s why they demand notification of all breaches of customer or personally identifiable information. That’s why they impose fines and public reprimands. The impact is simply not worth poor oversight or slopey shoulders when it comes to protecting customer or sensitive data.

It seems very strange that any organisation would enable public access to an S3 bucket used to store ultra-sensitive data objects. Owners are assigned full control and have enormous flexibility, so they should specify ACLs accordingly.

We therefore don’t agree that AWS or any other Cloud provider should do more. Under the SRM, it isn’t their responsibility to protect an organisation’s data.

A cursory review of AWS documentation highlights the pitfalls of various actions. There are warnings in place such as:

“Warning - When you grant access to the Authenticated Users group, any AWS authenticated user in the world can access your resource.” - https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
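The grant that warning describes takes a single API call to create, which is precisely the researchers’ point. A sketch of the anti-pattern, with a hypothetical bucket name:

```python
# ANTI-PATTERN, shown only to illustrate the warning above.
# Granting READ to the AuthenticatedUsers group lets ANY AWS account
# in the world list this bucket, not just accounts in your organisation.
# The bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_acl(
    Bucket="example-sensitive-data",
    GrantRead='uri="http://acs.amazonaws.com/groups/global/AuthenticatedUsers"',
)
```

Notably, the Block Public Access settings shown earlier will reject this call outright – that guardrail exists precisely to stop it.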

Traditional file and other storage systems have historically enabled access to directories (buckets, in S3 parlance) and, at a more granular level, to specific files (objects) within directories. These are loosely equivalent to the ACLs and bucket policies that are causing concern.

In our second blog we covered ACLs and bucket policies. In particular, we reviewed how they interact, why they need to be understood fully to get the most from their flexibility, and how (in our opinion) they should be used.

Our view is that AWS has provided useful flexibility to any organisation migrating to the Cloud, where the emphasis shifts from infrastructure to the data itself, compared with in-house IT operations.

For its part, AWS does not create public S3 buckets by default. As data breaches have escalated, it has reviewed its safeguards and certainly enhanced its processes and documentation, with visible warnings related to public access.

So, why are IBM and Ponemon reporting breaches at staggering levels?

It may be because of the “we want privacy, but we also want Facebook” syndrome. Strong security and complete freedom are massively difficult to balance. Let’s face it, the Cloud world still confounds organisations. “Migration made simple” doesn’t yet exist for the Cloud – and if it does, the research on migration issues suggests we’re not heeding it.

A key challenge of the Cloud is how to deliver data protection, privacy, resilience and recovery for object stores. In the on-premise world these have established solution sets, but the Cloud brings the need for a common solution at the object level, given the volume of unstructured data churned out every second by the always-on, work-anywhere, Cloud-connected employees of organisations. It’s not a challenge we’ve had to address before now, and the world of remote working generates massive silos of unstructured data that add to it. How do we protect it all, and keep it private at the appropriate level, when we can’t tell what’s genuinely critical from what’s today’s shopping list – and either may be sitting in an open bucket?

Fundamentally, we need a more cohesive and comprehensive way to manage how we protect and secure data objects within those stores and buckets.

The Cloud vendors’ “Shared Responsibility Model” may as well be the Holy Bible: we all know it’s there, but there are myriad ways people interpret it, and few of us really know how.

Solutions start with recognising symptoms and identifying the causes

What is clear is that there is a gap in the SRM.

The world of data – its ownership, who gets access, its sovereignty, residency and regulatory dimensions – is the domain of the organisations that store it, in the Cloud or elsewhere.

As long as there is a lack of understanding and due care in configuring Cloud stores such as S3 buckets, there will be breaches. We all want flexibility and functionality. When we get them, we need to recognise that the flexibility provided demands greater care in how we configure, permission, manage, flag, persist and otherwise organise our Cloud data.

Irrespective of whether Cloud vendors such as AWS have a product design or architecture problem, research confirms hackers are running wild and data privacy regulators are issuing fines like never before. And that’s before we get into Schrems II and the Privacy Shield confusion.

What’s needed is not more Cloud provider tooling such as AWS CloudTrail or S3 Inspector. We believe a more cohesive, secure-by-default storage solution is required – one that eliminates the risk of Cloud breaches at the data-object level and keeps data private and recoverable to legitimate actors.

That’s why we developed Ionburst and its patent-pending innovations in data privacy and sovereignty. By removing all customer context from data, rendering it completely anonymous and useless to hackers, we can close the gap in the SRM without the need for considerable organisational investment in skills and infrastructure.

Add resilience and on-demand recovery at the object level, and we have the bones of the secure, resilient storage platform the Cloud needs to support society-wide home working in the post-COVID, always-on, work-anywhere world.

Solving the problem’s cause with Ionburst

Ionburst addresses the two fundamental areas we have covered in this blog series: first, it closes the gap in the SRM; second, it fixes some of the specific issues we’ve highlighted so far.

  1. Fixing the Shared Responsibility Model Gap

To fix the gap in the SRM, Ionburst acts as a bridge between the underlying Cloud provider(s) and customers storing data in the Cloud.

Cloud providers remain responsible for the underlying infrastructure, security and global availability of the Cloud services hosting Ionburst. As the new service provider, Ionburst also takes responsibility for the security, resilience and availability of the services providing the data storage. This effectively doubles up on, and enhances, Cloud data protection and recovery.

Further, Ionburst assumes some responsibility from the customer, ensuring the privacy, protection, resilience and recovery of all customer data.

Now, customers using Ionburst to store and protect their Cloud data are responsible for only three areas, as the illustrative sketch after this list shows.

  • First, integration – customers are responsible for ensuring their application or service integrates with Ionburst securely.
  • Second, access management – customers are responsible for their access credentials to the Ionburst API.
  • Finally, and most importantly, customers retain ownership of all data stored by Ionburst. Always.
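To give a feel for the first two of these responsibilities, here is a purely illustrative sketch of what an integration could look like. It is not Ionburst’s actual API – the client class, endpoint, credential names and party identifier below are all hypothetical.

```python
# Purely illustrative - NOT Ionburst's real API. The client class,
# endpoint, credential names and party identifier are all hypothetical.
import os


class IonburstClient:
    """Hypothetical client wrapping an Ionburst-style object API."""

    def __init__(self, endpoint: str, key_id: str, secret: str, party: str):
        # Responsibility 2: the customer manages these API credentials.
        self.endpoint, self.key_id, self.secret = endpoint, key_id, secret
        # Data is scoped to a single party; credentials cannot cross parties.
        self.party = party

    def put(self, object_id: str, data: bytes) -> None:
        # Responsibility 1: the customer integrates securely, e.g. always
        # calling the API over TLS. (Transport code omitted from this sketch.)
        raise NotImplementedError("illustrative sketch only")

    def get(self, object_id: str) -> bytes:
        raise NotImplementedError("illustrative sketch only")


client = IonburstClient(
    endpoint="https://api.example-ionburst-deployment.net",  # hypothetical
    key_id=os.environ["IONBURST_KEY_ID"],    # hypothetical variable names
    secret=os.environ["IONBURST_SECRET"],
    party="payroll-prod",                    # hypothetical party identifier
)
```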
  2. Fixing the Root Causes

When it comes to fundamental service access, while Ionburst’s API is ‘publicly’ available, we have designed Ionburst as a store for private/sensitive data.

It does not, therefore, have any concept of open, public or unauthenticated access – by design.

By removing the ability for data to be exposed in this way, Ionburst negates a major security concern facing data stored in the Cloud today. For environments or organisations with strict requirements on network access or security, Ionburst can leverage existing Cloud technology to allow API access over internal Cloud provider networks, ensuring data is never transmitted externally.
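As one concrete example of that pattern on AWS – and it is an assumption on our part that an Ionburst deployment would be published this way – an interface VPC endpoint (AWS PrivateLink) keeps API traffic on the provider’s internal network. All identifiers below are placeholders.

```python
# A sketch that assumes, hypothetically, an Ionburst deployment published
# via AWS PrivateLink. An interface VPC endpoint keeps API calls on the
# Cloud provider's internal network. All identifiers are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.vpce.eu-west-2.vpce-svc-0example1234567890",  # hypothetical
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
```

Equivalent private-connectivity constructs exist on the other major Cloud platforms.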

When Ionburst stores data, it is logically separated by what we call a ‘party’. A party is analogous to a bucket or container in today’s Cloud or object storage terms.

It’s the logical construct within which data is associated and stored. Parties are inherently flexible, and can represent different environments, applications or customers.

Most importantly, the Ionburst party model enforces segmented access to data – access credentials and data stored within a party cannot be shared with another party, ever.

Security is often thought of in terms of CIA – Confidentiality, Integrity and Availability. Existing solutions and technologies enable one or more of these concepts to varying degrees.

Ionburst goes further by combining them all in a single service. Its greatest strength is its ability to be tailored to risk appetite and user choice, providing the level of security each unique use case demands.

Data privacy is maintained for all data objects and their persisted fragments, both in transit and at rest, removing any residency limitations imposed by corporate governance or regulators.

Ionburst is designed for more than just filling the gap in the SRM. It exists to help the transition to a post-Cloud EDGE/IoT world, where data and code will need to be readily available to their respective owners.

That future has been brought forward somewhat by the COVID-19 pandemic affecting us all. It’s essential that data moves, and that the compute effort moves with it, in a manner that renders the Cloud a massively distributed network of compute, storage, memory, code and secure components.

The need for both architecture and design to accommodate non-technology paradigms such as data privacy, data residency and data sovereignty is vital.

The very definition of data is changing.

Metadata is becoming a security principle rather than a classification. The logical link between data and its owners needs to advance if data security in the Cloud and EDGE/IoT domains of a 5G world is to keep hackers at bay while preserving on-demand data availability for owners.

Who actually has access to my data? How many copies are there? Where are they? What’s the attack vector of my data? More fundamentally these days, what’s its financial and carbon cost?

These are all dimensions an ‘ultra-secure’ Cloud storage platform needs in order to persist agile data and code wherever the consuming data owner or process requires.

These are the dimensions Ionburst provides.