Why Containerizing Legacy Identity Infrastructure Matters

Containers are a proven and ubiquitous virtualization technology that is here to stay. Learn how they can help you modernize your IAM.

In recent years, medium-to-large organizations have had to undergo significant transformation to stay competitive. One of the more ambitious lines of action has been to improve operational efficiency by "going cloud": leveraging applications, platforms, and infrastructure as a service.

Legacy identity infrastructure reminds me of the “elephant in the room”. You know it’s there, it takes up a lot of space, and it takes a lot of effort to deal with. On a day-to-day level you’d like to just avoid it, but tiptoeing around it and constantly making accommodations just to keep it functional is becoming less and less feasible. Most legacy identity infrastructure is hosted on premises or in private clouds, running on bare-metal servers or on hardware virtualization platforms such as VMware vSphere. Package that ponderous (but necessary) software, together with only the operating system (OS) libraries and dependencies it requires, into one lightweight executable that runs consistently on any infrastructure, and you’ve got a “container”. More portable than a virtual machine (VM) and more resource-efficient, containers are a proven and ubiquitous technology, and a great way to deal with that infrastructure elephant and ease down the path to modernizing your IAM.

But let's start at the beginning. In this post I'll explore some typical approaches toward incremental migration to the cloud, and how those fall short within the context of migrating identity and authentication services.

"Lift-and-shift" Business Applications

Business applications are coupled with legacy identity infrastructure such as Active Directory and products approaching their end-of-life (EOL). Although it’s possible to address the former using migration tooling offered by the cloud vendor (for instance, Azure provides tooling for Active Directory), this does not work for the EOL products. Adopting a "lift-and-shift" approach for business applications, with the expectation that they will play nicely with a more modern identity infrastructure, is a recipe for failure. The IT ecosystems upon which they rely no longer exist or, if they do, no longer integrate cleanly with those business applications. So how can we move our business applications to a modern cloud environment successfully?

"Lift-and-shift" Legacy Identity Infrastructure

Since we cannot simply expect our business applications to work in the new environment, one idea is to "lift-and-shift" our legacy identity infrastructure to the cloud as-is, so that everything matches our previous settings while also operating beyond the perimeter. In practice, this translates to setting up several virtual machines in the chosen infrastructure-as-a-service (IaaS) platform, such as AWS or Azure, to host a mirror of our legacy identity infrastructure. This is not as easy as it may seem, since we'll have to create snapshot images of the existing VMs (if we're virtualizing) and port them to whatever hypervisor the cloud service provider uses. And even once the identity infrastructure is running in the cloud, its configuration will be impacted, since changes in the environment cannot be fully isolated from it. Typically, those configuration changes are made through some sort of administration UI and/or through configuration descriptors, which leaves little room for automation.

Once everything is up and running, how can we make the work of provisioning a functioning identity infrastructure repeatable and reproducible, so we can take advantage of automation?

Embracing Infrastructure as Code

Let's have a look at the Wikipedia definition for Infrastructure as code:

Infrastructure as code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. The IT infrastructure managed by this process comprises both physical equipment, such as bare-metal servers, as well as virtual machines, and associated configuration resources.

This is typically implemented by encoding infrastructure provisioning actions as scripts, configuration files, or both. There are many high-quality open-source tools to use for this, such as Packer, Terraform, and Ansible, to name a few. By taking this approach, you'll be able to roll out identity infrastructure settings gradually, quickly, and consistently.
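
As a minimal sketch of that idea, the script below (Python, assuming Terraform is installed and a hypothetical ./identity-infra directory holds the .tf definitions for the identity components) drives provisioning from a repeatable script instead of a console session:

    import subprocess

    def run(cmd, cwd="identity-infra"):
        """Run a command in the Terraform working directory and fail loudly on error."""
        print(f"$ {' '.join(cmd)}")
        subprocess.run(cmd, cwd=cwd, check=True)

    def provision():
        run(["terraform", "init"])                 # download providers and modules
        run(["terraform", "plan", "-out=tfplan"])  # preview the changes
        run(["terraform", "apply", "tfplan"])      # apply the reviewed plan

    if __name__ == "__main__":
        provision()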

Although most organizations already take advantage of this method for their line-of-business IT, applying it to legacy identity infrastructure poses some challenges:

  • The identity infrastructure was deployed manually (likely through a GUI) at least 10 to 15 years ago, before infrastructure as code was a thing. The detailed setup steps originally applied are therefore no longer documented and have to be reverse engineered, then captured as code (a minimal illustration follows this list).
  • There are many moving pieces: middleware, authentication and authorization services, and directories, to name a few; each has a significant footprint and is tightly coupled to the others.
  • Although most setup actions can be performed from the command-line interface (CLI), some cannot, forcing us to deal with low-level product APIs, which adds complexity.
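
As a minimal illustration of the first challenge, the sketch below (Python, with entirely hypothetical setting names) keeps reverse-engineered settings under version control and writes them out as a configuration descriptor at provisioning time:

    import json
    from pathlib import Path

    # Hypothetical settings recovered from a manually configured policy server.
    POLICY_SERVER_SETTINGS = {
        "max_socket_count": 128,
        "session_timeout_minutes": 30,
        "enable_audit_log": True,
    }

    def write_descriptor(path: str = "policy-server.json") -> None:
        """Render the reverse-engineered settings as a configuration descriptor."""
        Path(path).write_text(json.dumps(POLICY_SERVER_SETTINGS, indent=2))
        print(f"Wrote {path}")

    if __name__ == "__main__":
        write_descriptor()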

Using Remote Virtual Machines (VMs) alone may not be good enough

Most organizations with a relatively complex IT ecosystem will likely migrate their on-premises VMs to cloud-based ones, since the match is one-to-one. However, with legacy identity products, more than one VM will typically be required. For instance, in a typical CA SiteMinder implementation there will be a VM for the Secure Proxy Server (SPS), one for the Policy Server, one for the Policy Store, and one for the Admin UI. Assuming identity services are expected to be highly available and scalable, at least one more VM per component will need to be allocated.

Even with an IaC approach, building VM images, then spinning them up and maintaining them, will drive up your cloud infrastructure bill, not to mention making the IAM infrastructure difficult for software developers and DevOps personnel to consume. Operating separate staging and development environments will multiply your maintenance burden and costs. Environments other than production will be used only occasionally and may sit idle for long stretches; yet because the infrastructure is still allocated, the full price will have to be paid.

So, how can we reduce our cloud infrastructure footprint and lower the associated costs? How can we increase agility to make our legacy identity services more easily available to consumers like developers and DevOps teams?

Shrinking the elephant: containers to the rescue

Container technology is considered a significant leap forward. It improves efficiency and portability while reducing the overhead of traditional virtualization. Docker containers, specifically, are ubiquitous. Since all major public cloud IaaS providers support Docker, organizations can reduce costs and complexity while letting the cloud provider handle the infrastructure. Instead of treading ever so lightly around the ungainly obstacles of elephantine legacy apps, we can use Docker to containerize them so they are easily transported to an environment where they can be useful.
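
As a sketch of what containerizing one of those legacy components might look like, assuming the Docker SDK for Python (pip install docker) and a hypothetical ./legacy-policy-server directory whose Dockerfile packages the product binaries together with the OS libraries they need:

    import docker

    client = docker.from_env()

    # Build the image once; the resulting artifact runs the same way on a laptop,
    # an on-premises VM, or any cloud IaaS that supports Docker.
    image, _ = client.images.build(
        path="legacy-policy-server",
        tag="iam/policy-server:legacy",
    )
    print(f"Built {image.tags}")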

What does this buy us in terms of our IAM migration to the cloud?

First off, a single virtual machine can run any number of containers. So we could allocate one VM to run all of our staging, testing, and development environments and a second one for production. Instead of dedicating a VM to each IAM service, we simply use containers, optimizing resources and lowering costs because the containers share resources (such as the operating system kernel) with the host.
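
As a hypothetical sketch of one host carrying several environments, reusing the image tag from the earlier build (the container port and environment variable are made up for illustration):

    import docker

    client = docker.from_env()

    # One host, three environments: each container gets its own name and host port.
    for env, host_port in {"dev": 8441, "test": 8442, "staging": 8443}.items():
        client.containers.run(
            "iam/policy-server:legacy",
            detach=True,
            name=f"policy-server-{env}",
            ports={"44441/tcp": host_port},   # hypothetical listener port
            environment={"IAM_ENV": env},     # hypothetical environment switch
        )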

Secondly, development and DevOps teams can now run a replica of a full-blown legacy identity infrastructure either on their preferred cloud or even on their own workstations.

Thirdly, since spinning up a Docker container is lightning fast (it can take less than one second!), once we're done with, for example, testing a single sign-on (SSO) integration with the legacy authentication services running in the cloud, we can just go ahead and tear down the environment. That means no invoices for unused capacity, as well as more resources available for other containerized IAM services.
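
A minimal sketch of that ephemeral pattern, again assuming the Docker SDK for Python and a hypothetical pytest suite for the SSO integration:

    import subprocess
    import docker

    client = docker.from_env()

    # Start the containerized authentication service just for this test run.
    container = client.containers.run(
        "iam/policy-server:legacy",
        detach=True,
        name="sso-under-test",
        ports={"44441/tcp": 44441},
    )
    try:
        subprocess.run(["pytest", "tests/sso_integration"], check=True)
    finally:
        container.remove(force=True)  # tear down whether the tests pass or fail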

Finally, your legacy identity infrastructure can align from Day 1 with your DevOps efforts and Software Development Lifecycle (SDLC). No need to complete your IAM modernization in order to start seeing the benefits. The room seems much more spacious without that elephant in the middle, doesn't it?

How can you get started?

Our next blog post will walk you through the process of enabling Infrastructure as Code and containerization with CA SiteMinder.
