I remember the good old days when provisioning a new server for production was a big deal. All sorts of teams were involved in the purchasing process, and by the time a purchase order was approved, all participants had celebrated their birthdays twice. The server was usually scaled up, with the focus on CPU and memory, to support future resource demands. The business leaders who sponsored this approach typically made a compelling case for the best motherboard, maximum memory, the fastest CPU with the most cores, and the best-looking rack. It’s not surprising to conclude that, from a cost-savings perspective, an overprovisioned system that sits underutilized is a waste of money and resources. It’s quite understandable why this practice is still in place today; IT professionals are slow to respond to demand signals. Cloud computing, however, provides excellent flexibility to meet those demand signals by automatically scaling resources up and down when a metric breaches a defined threshold.
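The threshold logic behind that kind of auto-scaling can be sketched in a few lines. This is a hypothetical illustration of the idea, not the AWS API; the function name and the 75%/25% thresholds are my own assumptions:

```python
def scaling_decision(cpu_utilization: float,
                     scale_up_threshold: float = 75.0,
                     scale_down_threshold: float = 25.0) -> str:
    """Return the scaling action for a single metric observation.

    Illustrative only: real auto-scalers (e.g., AWS Auto Scaling) evaluate
    metrics over time windows and apply cooldowns before acting.
    """
    if cpu_utilization > scale_up_threshold:
        return "scale_up"      # breach above the upper threshold: add capacity
    if cpu_utilization < scale_down_threshold:
        return "scale_down"    # sustained low load: release capacity
    return "no_change"         # within the band: leave the fleet alone
```

In practice, a managed scaling policy does this evaluation for you; the point is that capacity now follows demand instead of a purchase order.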
It’s time to move away from hardware hoarding and embrace the cloud and its elasticity
Given the advances in cloud computing, it’s surprising to me that a lot of companies are still trying to grasp what it is. The idea of having data in the cloud seems obscure, especially for a culture used to hoarding hardware. Additionally, control is usually not given up without a fight. People who are resistant to change believe that they will lose control of their systems by moving the company’s data and applications to the cloud. At the center of this debate are concerns about security and accessibility. Oh wait, those concerns apply to on-prem data centers as well.
Security is a hot topic these days. Following the recent data breaches at some of the largest US corporations, companies are reviewing their security practices and implementing policies to mitigate data breaches. Protecting the company from external threats is usually the highest priority; however, most companies fail to execute robust internal security measures to protect the company from its own employees. Human capital will always be a company’s most valuable asset and, ironically, its riskiest. Policies that exist to protect against internal misuse of company resources and data are usually not enforced as they should be. Security should be at the heart of every company, and policies should be applied holistically to safeguard the company from both external and internal threats. Why should a company continue to use outdated security practices to secure today’s challenges? Security is everyone’s business; it should be cohesive with the development process and enforced both bottom-up and top-down, with business leaders as the primary sponsors.
Everyone wants immediate access to their data, and storing data in the cloud can draw scrutiny from business leaders. When requested, will data be immediately available, and will latency be an issue? The answer depends on the architecture of the system and its processes. A design that doesn’t scale well will eventually lead to poor performance and disrupted services. That’s why it is imperative that significant time be spent architecting systems and processes, both on-prem and in the cloud. Cloud technologies such as AWS CloudFront, ElastiCache, Lambda, and edge locations can help alleviate latency. However, the cloud doesn’t implement these services automatically; they must be deliberately incorporated into the design of the system and explicitly configured.
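To illustrate what a cache layer such as ElastiCache buys you, here is a minimal local sketch using only Python’s standard library. The `fetch_report` function and its artificial delay are invented stand-ins for a slow origin call (a database query, a cross-region request), not a real AWS service:

```python
import functools
import time

@functools.lru_cache(maxsize=1024)
def fetch_report(report_id: str) -> str:
    """Stand-in for a slow origin call; results are cached by argument.

    The first call for a given id pays the full latency; repeat calls
    are served from memory, which is the essence of what a managed
    cache in front of your data store provides.
    """
    time.sleep(0.05)  # simulated origin latency
    return f"report-{report_id}"
```

A real distributed cache adds eviction policies, TTLs, and shared state across application servers, but the latency win comes from the same principle: answer repeat requests without going back to the origin.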
System design plays an imperative role when moving to the cloud. I’ve witnessed business leaders slander cloud technology while ignoring design flaws in their own applications. On one occasion, a client was frequently accessing images in an AWS Glacier vault, a service whose purpose is long-term archival storage. The process of writing and retrieving images was slow for obvious reasons, and the client blamed Glacier as the culprit. The fix was quite simple: the application was modified to serve frequently accessed images from AWS S3 storage. This anecdote is a perfect example of a process designed around a cloud service that was never intended for frequent access. That’s why the design process is so important. Accessibility will always depend on system design and the security policies governing it.
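The fix in that anecdote amounts to matching the storage tier to the access pattern. A hedged sketch of that decision, plus the shape of an S3 lifecycle rule that archives stale objects to Glacier (the read-frequency threshold, object prefix, and rule ID are illustrative assumptions on my part, not AWS recommendations):

```python
def choose_storage_class(reads_per_month: int) -> str:
    """Pick a storage class from expected access frequency (illustrative)."""
    if reads_per_month >= 1:
        return "STANDARD"  # hot data: frequent retrieval, low latency
    return "GLACIER"       # cold data: archival, retrieval takes hours

def glacier_lifecycle_rule(days_until_archive: int = 90) -> dict:
    """Build one lifecycle rule in the shape S3's lifecycle API expects.

    Such a rule lets new images live in STANDARD storage and only
    transitions them to Glacier once they go cold, instead of writing
    hot data straight into an archive.
    """
    return {
        "ID": "archive-stale-images",           # hypothetical rule name
        "Status": "Enabled",
        "Filter": {"Prefix": "images/"},        # hypothetical key prefix
        "Transitions": [
            {"Days": days_until_archive, "StorageClass": "GLACIER"},
        ],
    }
```

Had the client’s images been stored this way from the start, frequent reads would have hit S3 while genuinely stale objects aged out to Glacier, and no one would have blamed the cloud.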