Scalability in Cloud Computing: Old Problems With New Solutions


Scalability is just one aspect of DevOps that will be covered in future posts.

Software is eating the world, and especially the world of systems management. As we evolve toward a world of Infrastructure-as-Code, it becomes imperative that the technology powering complex, web-facing applications be automated for high availability, compliance, and auto scaling. Over the past several years at Logicworks, we have built a dedicated team of DevOps experts who use the most advanced features of Amazon Web Services, along with tools like Puppet, GitHub, and Jenkins, to automate system management for our clients. The result is highly customized code that automates the provisioning of specific infrastructure resources for applications, when and where they are required.

Over the next few months, we will publish a series on the practice of DevOps.  This series will address the DevOps approach, and specific tools and processes our team uses to deliver responsive infrastructure to our clients.  We hope you find the series informative.

Learn more about Logicworks' DevOps practice.

Jason McKay
Vice President of Engineering, Logicworks

The ability to scale on demand is one of the biggest advantages of cloud computing. Among the many benefits of cloud, the power of on-demand scaling can be difficult to conceptualize, but organizations of all kinds see tremendous gains when they implement auto scaling correctly. Many challenges that predate the cloud simply disappear: engineers now working on cloud implementations remember companies that feared the Slashdot effect, a massive influx of traffic that would cause servers to fail.

With AWS auto scaling, we can greatly reduce the risk of traffic overflow causing server failure. Furthermore, and somewhat counterintuitively, auto scaling can reduce costs as well. Instead of running instances sized to projected usage and leaving excess resources in place as a buffer, we run only the resources matched to actual usage, on a moment-to-moment basis.
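The cost argument is easy to see with a back-of-the-envelope calculation. The sketch below uses made-up numbers (the hourly rate and the demand curve are assumptions, not AWS prices) to compare provisioning for peak all day against scaling with demand:

```python
# Hypothetical illustration: the instance rate and demand profile below
# are invented for the sake of the comparison, not real AWS figures.

HOURLY_RATE = 0.10  # assumed on-demand price per instance-hour

# Assumed hourly demand, in instances needed: quiet overnight, midday spike.
demand = [2] * 8 + [6] * 4 + [10] * 4 + [6] * 4 + [2] * 4  # 24 samples

peak = max(demand)

fixed_cost = peak * 24 * HOURLY_RATE      # provision for peak, all day long
scaled_cost = sum(demand) * HOURLY_RATE   # run only what each hour requires

print(f"fixed:   ${fixed_cost:.2f}/day")
print(f"scaled:  ${scaled_cost:.2f}/day")
print(f"savings: {100 * (1 - scaled_cost / fixed_cost):.0f}%")
```

With this particular (assumed) demand curve, scaling to demand costs roughly half of what peak provisioning does; the exact savings obviously depend on how spiky your traffic really is.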

These price and scalability advantages are not without their own complexities. While we can scale on demand, applications need to be able to scale with the environment. This might seem straightforward for a website behind an elastic load balancer that distributes traffic across a fleet of instances growing and shrinking with demand. Yet session information, uploads, and data all require further consideration when scaling.
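The load-balancing half of that picture can be sketched in a few lines. The toy class below (a simple round-robin rotation; a real Elastic Load Balancer is a managed service with its own routing algorithms and health checks) just shows how a newly launched instance starts receiving traffic as soon as it joins the pool:

```python
# Toy round-robin balancer over a mutable instance pool. Illustrative
# only; the class and instance names are invented for this sketch.

class RoundRobinBalancer:
    def __init__(self, instances):
        self.instances = list(instances)
        self._next = 0

    def route(self, request):
        """Send the request to the next instance in rotation."""
        instance = self.instances[self._next % len(self.instances)]
        self._next += 1
        return f"{instance} handled {request}"

    def scale_out(self, instance):
        """Auto scaling adds capacity; the balancer starts using it."""
        self.instances.append(instance)

lb = RoundRobinBalancer(["i-aaa", "i-bbb"])
print(lb.route("req-1"))  # i-aaa handled req-1
print(lb.route("req-2"))  # i-bbb handled req-2
lb.scale_out("i-ccc")     # auto scaling launches a third instance
print(lb.route("req-3"))  # i-ccc handled req-3
```

The catch the article goes on to describe: this only works if "req-3" can be served correctly by an instance that has never seen the user before, which is why session state cannot live on any one box.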

Compared with legacy IT management, the most important paradigm shift in cloud computing is that systems should become transitory, and anything on them needs to be completely and immediately replaceable. AWS provides tools to facilitate this. For instance, rather than storing data locally, use S3, AWS' storage service. If your business cannot move systems and data onto S3, a distributed file system may need to be considered. Session information should no longer live in a local file store; consider using ElastiCache or RDS to save your sessions instead.
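A minimal sketch of that session pattern: write sessions to a shared key-value store that every instance can reach, rather than to local disk. The dict below is a stand-in for an external store such as ElastiCache (Redis); in production you would use a Redis client pointed at the cluster endpoint, and the function names here are invented for illustration.

```python
import json
import uuid

# Stand-in for a shared, network-accessible store (e.g. ElastiCache).
session_store = {}

def save_session(data):
    """Persist session data externally and return its ID."""
    session_id = str(uuid.uuid4())
    session_store[session_id] = json.dumps(data)  # analogous to a Redis SET
    return session_id

def load_session(session_id):
    """Any instance in the fleet can rehydrate the session by ID."""
    raw = session_store.get(session_id)  # analogous to a Redis GET
    return json.loads(raw) if raw is not None else None

sid = save_session({"user": "alice", "cart": ["sku-123"]})
print(load_session(sid))  # works no matter which instance serves the request
```

Because no instance holds the only copy of a session, any instance can serve any request, and any instance can be terminated by a scale-in event without logging users out.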

The issues around scalability are not new problems. With AWS and cloud-based engineering techniques, however, the solution can be found in adopting a new way of looking at an old problem. Historically, forking became the way to scale up an application, with other approaches, like threading, following. Because data was written to disk when the application terminated, programmers focused only on how to make an application that scales, without much regard for what to do with the data in memory. Now, with auto scaling, the systems being scaled simply become CPU and memory, and developers write data to a long-term store.

While configuration management may be the most widely adopted approach, there are many ways to get a system from zero to fully scalable. But no matter which approach is used, the ability to scale is limited only by the ability of the application to scale with it.

In many ways this is a complete paradigm shift in the governing principles of most IT departments: there was a time when a system administrator would point with pride to the uptime of individual systems. These days, we take equal pride in high availability. This evolution should be considered a new state of the art, as IT shifts from system management to application management.

Read Part 2 of The Practice of DevOps series.

By Ben Maynard
