Authorization for a Multi-Cloud System
This is a project design I am currently working on, which consumes SPIFFE (Secure Production Identity Framework For Everyone) bootstrapped trust and identification to provide authorization in a dynamically scaling, heterogeneous system. It is inspired by Mr. Prabath Siriwardena of WSO2 and carried out under the supervision of Prof. Gihan Dias of the University of Moratuwa. An enterprise system running across multiple clouds, as in a hybrid cloud, is an obvious example that will benefit from this. The objective is to open doors for SPIFFE-based systems to co-exist with the rest of the systems with minimal effort, without compromising on security, while having an authorization solution based on SPIFFE.
What is SPIFFE?
In brief, it is a trust bootstrapping and identification framework, submitted as a standard and accepted by the CNCF (Cloud Native Computing Foundation) [1]. As of now, this standard has two main implementations: SPIRE, and Istio [2], a platform that supports a service mesh architecture using SPIFFE for its identification aspects. These implementations take care of much of the complexity involved in trust bootstrapping and identification across heterogeneous systems. More details can be found at the spiffe.io site.
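To make the identification part concrete, below is a minimal sketch of how a workload can fetch its X.509 SVID, and with it its SPIFFE ID, over the SPIFFE Workload API, using the go-spiffe v2 library. The agent socket path is an assumption here; it depends on how the SPIRE agent is deployed.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx := context.Background()

	// Fetch this workload's X.509 SVID from the local SPIRE agent.
	// The socket path is an assumption; it varies with the agent's configuration.
	svid, err := workloadapi.FetchX509SVID(ctx,
		workloadapi.WithAddr("unix:///tmp/spire-agent/public/api.sock"))
	if err != nil {
		log.Fatalf("unable to fetch SVID: %v", err)
	}

	// The SPIFFE ID (e.g. spiffe://localdomain/us-west/data) is carried
	// as a URI SAN in the certificate.
	fmt.Printf("SPIFFE ID: %s\n", svid.ID)
}
```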
Why OAuth 2.0?
OAuth 2.0 is currently the most widely used standard in the API security domain, used for access delegation and authorization in the workloads world as well. While SPIFFE is an emerging standard, OAuth 2.0 has been around for a while, and we can say most enterprise systems have adopted it. Hence if we can blend these two standards, we get the best of both worlds: the interoperability provided by OAuth 2.0, and the dynamic trust bootstrapping and identification capabilities of SPIFFE.
How Does the Design Work?
Please note that the SPIRE server in the diagram below can be any implementation that supports the SPIFFE standard.
– We assume an enterprise system that consists of workloads residing in two clouds; here we have assumed AWS and GCP. Imagine this as a currently running system in GCP, with workloads secured based on OAuth 2.0 scopes; any other workload that wants to consume them must present a valid access token with the relevant scopes.
– The part of the system running in the AWS cloud can be imagined as newly designed to run as part of a multi-cloud system. It makes use of the SPIFFE standard to uniquely identify workloads across multiple clouds.
– As part of this SPIFFE-based trust bootstrapping and identification, each workload receives an X.509 certificate signed by the SPIRE server, bearing its identifier, referred to as the SPIFFE ID.
e.g. spiffe://localdomain/us-west/data (this is included as a SAN) [3]
– Here OAuth 2.0 comes into the picture. We depend on the capability of the authorization server to issue an OAuth 2.0 access token under the client credentials grant type. This is done according to the OAuth 2.0 Mutual-TLS (MTLS) specification, which is currently at the draft stage [4].
There are a few special things happening here:
- The MTLS connection is created based on the workload's SPIRE-signed key pair. Hence the authorization server and the SPIRE server are assumed to have pre-established trust.
- As the workload creates the MTLS connection with the authorization server, the server dynamically creates an OAuth 2.0 client on the fly, generates the OAuth 2.0 secrets, and issues a token (a client-side sketch of this request appears after this list). At this point, the authorization server should perform several validations before issuing these:
- The certificate is validated first; then its content is read, along with the SPIFFE ID carried in the SAN.
- Just looking at the SPIFFE ID and issuing a token will not suffice for the enterprise use case.
- Hence we provide the capability of attaching scopes to these tokens based on a policy defined in the authorization server using OPA (Open Policy Agent), which is flexible enough to express RBAC, ABAC, or XACML-like complex policies [5]. This policy can consume additionally available data when making its decisions (a sketch of such an OPA query appears after this list).
- After the validation is complete, the authorization server issues a self-contained access token, including the scopes, expiry time, etc., which is sent to the AWS workload to be presented when calling the GCP workloads.
- GCP workloads do not require any additional functions here, other than using their existing mechanisms to validate the OAuth 2.0 token and derive any useful information that comes with it (a sketch of such a validation appears below).
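To make the token request step concrete, here is a minimal client-side sketch in Go, using the go-spiffe v2 library to build the MTLS connection from the SPIRE-issued key pair. The authorization server's token endpoint URL and SPIFFE ID are assumptions for illustration; the dynamic client creation happens on the server side and is not visible here.

```go
package main

import (
	"context"
	"io"
	"log"
	"net/http"
	"net/url"
	"strings"

	"github.com/spiffe/go-spiffe/v2/spiffeid"
	"github.com/spiffe/go-spiffe/v2/spiffetls/tlsconfig"
	"github.com/spiffe/go-spiffe/v2/workloadapi"
)

func main() {
	ctx := context.Background()

	// X509Source keeps the workload's SVID and trust bundle up to date via
	// the local agent (located through the SPIFFE_ENDPOINT_SOCKET variable).
	source, err := workloadapi.NewX509Source(ctx)
	if err != nil {
		log.Fatalf("unable to create X509Source: %v", err)
	}
	defer source.Close()

	// Assumed SPIFFE ID of the authorization server, used to authenticate it.
	authServerID := spiffeid.RequireFromString("spiffe://localdomain/auth-server")
	client := &http.Client{
		Transport: &http.Transport{
			// MTLS based on the SPIRE-signed key pair of the workload.
			TLSClientConfig: tlsconfig.MTLSClientConfig(source, source,
				tlsconfig.AuthorizeID(authServerID)),
		},
	}

	// Client credentials grant over MTLS (draft-ietf-oauth-mtls) [4].
	form := url.Values{"grant_type": {"client_credentials"}}
	resp, err := client.Post("https://auth.example.com/oauth2/token", // assumed endpoint
		"application/x-www-form-urlencoded", strings.NewReader(form.Encode()))
	if err != nil {
		log.Fatalf("token request failed: %v", err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	log.Printf("token response: %s", body)
}
```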
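On the authorization server side, the scope decision can be delegated to OPA through its REST API, which accepts a POST to /v1/data/&lt;policy path&gt; with an input document. The policy path and input shape below are hypothetical and depend entirely on how the Rego policy is organized:

```go
package authz

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// opaInput is the data handed to the policy: the SPIFFE ID read from the
// validated certificate's SAN. A real policy may consume more context.
type opaInput struct {
	SpiffeID string `json:"spiffe_id"`
}

// ScopesFor asks OPA which scopes may be attached to the token.
// The policy path "tokens/scopes" is a hypothetical example.
func ScopesFor(opaURL, spiffeID string) ([]string, error) {
	payload, err := json.Marshal(map[string]opaInput{"input": {SpiffeID: spiffeID}})
	if err != nil {
		return nil, err
	}

	resp, err := http.Post(opaURL+"/v1/data/tokens/scopes",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var out struct {
		Result []string `json:"result"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, fmt.Errorf("decoding OPA response: %w", err)
	}
	return out.Result, nil
}
```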
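Finally, on the GCP side, checking the self-contained token is ordinary JWT validation. The sketch below uses the golang-jwt library and assumes an RSA-signed token carrying a space-delimited scope claim; the required scope name is hypothetical:

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/golang-jwt/jwt/v5"
)

// validateToken verifies the token signature against the authorization
// server's public key and checks that the required scope was granted.
func validateToken(tokenString string, publicKey interface{}) error {
	token, err := jwt.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
		// Reject unexpected signing algorithms.
		if _, ok := t.Method.(*jwt.SigningMethodRSA); !ok {
			return nil, fmt.Errorf("unexpected signing method: %v", t.Header["alg"])
		}
		return publicKey, nil
	})
	if err != nil { // covers signature and expiry failures
		return err
	}

	claims, ok := token.Claims.(jwt.MapClaims)
	if !ok {
		return fmt.Errorf("unexpected claims type")
	}

	// Scopes are commonly carried as a space-delimited "scope" claim;
	// "read:data" is a hypothetical required scope.
	scope, _ := claims["scope"].(string)
	for _, s := range strings.Split(scope, " ") {
		if s == "read:data" {
			return nil
		}
	}
	return fmt.Errorf("required scope not granted")
}

func main() {
	// Wire validateToken into the GCP workload's existing request filter.
	log.Println(validateToken("<token>", nil)) // placeholder invocation
}
```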
Hope this explains the scenario well. I am going to name this solution Dvaara, indicating the opening of more doors and controlled access. :)
We are open to any feedback and suggestions.
[1] – https://www.cncf.io/blog/2018/03/29/cncf-to-host-the-spiffe-project/
[2] – https://istio.io/docs/concepts/security/#istio-security-vs-spiffe
[3] – A sample SVID https://gist.github.com/Pushpalanka/b70d5057154eb3c34d651e6a4d8f46ee#file-svid-cert
[4] – https://tools.ietf.org/html/draft-ietf-oauth-mtls-12
[5] – https://www.openpolicyagent.org/docs/comparison-to-other-systems.html
Cheers!