What is Cloud Cost Optimization?

Cloud Cost Optimization is a process, ideally automated, that involves the ongoing review of cloud costs. Successful optimization identifies and remediates waste, and keeps constant downward pressure on cloud bills.

To keep downward pressure on cloud costs and avoid runaway bills, successful teams need to be continuously asking key questions:

  • How can I reduce my Azure & AWS bill?
  • How much of our cloud bill is waste?
  • How can I identify the waste?
  • How can I automate prevention & control?
  • Who is responsible for cost control?

How Do I Reduce Cloud Costs?

These five cost reduction best practices all have a common foundation: getting a tool to do the heavy lifting and then showing the Cloud Architect the cost data in the context of your cloud architecture.

If you don’t have a tool that makes this easy, the Cloud Architect will be much slower, more frustrated, and will find fewer opportunities for cost optimization.

It’s like the difference between a car mechanic with a specialist diagnostic tool and one without: who do you want fixing your car?

One will be fast and accurate; the other will not.

The cloud-native tools might be free but they are “raw”.

Seeing a chart on a dashboard with the label “EC2” is useless, and drilling down to an unintelligible instance name isn’t much better: where is this zombie EC2 instance? In which application? In which zone? Who owns it?

You need to see cost information IN your cloud architecture.

Having costs in one tool, the cloud architecture diagram in another tool (or worse, statically produced instead of a live view), and the cloud console detail in another screen is a frustrating and error-prone experience for Cloud Architects.

1. Eliminate Zombies to Reduce Your Cloud's Footprint

This practice answers the question: How do I find Unused or Unattached Resources?

Remember, you pay for what you order, not what you use -- unless you are using serverless compute like Lambda or a serverless database like Aurora Serverless, where you pay for what you actually consume.

Did you know that even when you stop an EC2 instance, you’re still paying for that instance’s EBS storage? And how do you know which of your hundreds or thousands of EBS volumes meet this criterion?

So, in a nutshell, you need to buy resources that are the correct size and turn things off when you’re not using them. This is a tedious process for humans, so you need a decent tool to do the scanning for you and send you notifications of cost reduction opportunities.

Examples of “unused or unattached waste” are resources spun up for a test, then forgotten and left running but unused. Think unattached IP addresses, for example: individually cheap, but the cost adds up quickly.
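
If you want a feel for what this kind of scan involves, here is a minimal sketch using boto3 (the region is a placeholder, and a real scan would cover every region and paginate results):

```python
# A minimal sketch, not a production scanner: list EBS volumes that
# aren't attached to anything and Elastic IPs that aren't associated
# with an instance. The region is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# EBS volumes with status "available" have no attachment -- you pay
# for them whether or not anything is using them.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)
for vol in volumes["Volumes"]:
    print(f"Unattached volume: {vol['VolumeId']} ({vol['Size']} GiB)")

# Elastic IPs with no association also bill by the hour.
for addr in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in addr:
        print(f"Unassociated Elastic IP: {addr['PublicIp']}")
```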

Hyperglance not only looks for these, it links discovered zombies directly to your cloud map, much like Google Maps overlays restaurants and petrol stations and lets you interact with them.

A zombie load balancer discovered by Hyperglance
Hyperglance showing the zombie load balancer on a map - the instance is not active

2. Rightsize Idle Resources

This practice answers the questions: 'How do I identify and consolidate idle resources?' and 'How do I right-size computing services like EC2 instances?'

It was common in the old on-premises world to have virtual machines running at 3% CPU capacity; they “felt free” and the cost wasn’t obvious.

In the cloud, costs are obvious and painful.

If you repeat the on-premises behavior of deploying virtual machines that each do only one thing (e.g. a web server) and leaving them lightly utilized, you need to either reduce the size of the VM or consolidate multiple services onto it.

The Cloud Architect is key here to make sure the design accommodates this.

An old-world, on-premises-style architecture will cause this kind of cost wastage.

When you look at your cloud map in Hyperglance, these wasteful resources are highlighted in context: performance data is overlaid on top of each instance in the cloud architecture, making the Cloud Architect's job much simpler.
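
As a rough illustration of how idle instances can be detected outside of a dedicated tool, here's a boto3 sketch that flags running instances with low average CPU (the 5% threshold and 14-day window are assumptions, not Hyperglance's defaults):

```python
# A hedged sketch: flag running EC2 instances whose average CPU over
# a 14-day window is below 5% -- candidates for rightsizing or
# consolidation. Threshold, window, and region are assumptions.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cw = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for res in reservations:
    for inst in res["Instances"]:
        stats = cw.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
            StartTime=start,
            EndTime=end,
            Period=86400,          # one datapoint per day
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        if points:
            avg = sum(p["Average"] for p in points) / len(points)
            if avg < 5.0:
                print(f"{inst['InstanceId']} ({inst['InstanceType']}): "
                      f"avg CPU {avg:.1f}% over 14 days")
```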

Hyperglance automated advanced search rules for idle EC2 instances

🤓 What's the FinOps Framework? Find out in our guide to FinOps.

3. Set Up Automated Monitoring

A Cloud Architect shouldn't spend their waking hours watching a dashboard or manually searching for savings.

It’s essential that they can program an automated “extra member of the team” to always look for this waste.

That is, every time a new kind of waste is found, a rule should be created to always look for that waste from now on.

This gives the Cloud Architect the confidence to say in a meeting, “We not only discovered this waste and eliminated it once, but we will now avoid it in future because we’ve taught the system to look for it.”
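
To make the idea concrete, here is a toy Python sketch of the pattern (this is an illustration, not Hyperglance's rules engine): each newly discovered kind of waste becomes a registered check that every future scan runs automatically.

```python
# A toy illustration of the "teach the system" pattern -- not
# Hyperglance's rules engine. Each kind of waste becomes a registered
# check, so every future scan looks for it without human effort.
import boto3

RULES = []

def rule(description):
    """Register a waste check so every scan runs it."""
    def decorator(fn):
        RULES.append((description, fn))
        return fn
    return decorator

@rule("Unattached EBS volumes")
def unattached_volumes(ec2):
    vols = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )["Volumes"]
    return [v["VolumeId"] for v in vols]

@rule("Unassociated Elastic IPs")
def unassociated_eips(ec2):
    addrs = ec2.describe_addresses()["Addresses"]
    return [a["PublicIp"] for a in addrs if "AssociationId" not in a]

def scan():
    # Region is a placeholder; a real scan would iterate all regions.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    for description, check in RULES:
        findings = check(ec2)
        if findings:
            print(f"{description}: {findings}")

if __name__ == "__main__":
    scan()
```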

All of the Hyperglance dashboard items are automated rules engine checks.

Hyperglance Automated Search for Idle Gateways

4. Take a Global View To Find Anomalies

If you only run services in the US and you spot a large high-performance EC2 instance mining bitcoin in Asia then you might have a problem.

What about if you run services across more than one cloud?

The Hyperglance Cost Explorer makes it easy to see what’s happening in places other than your commonly used zones.

Without this “global map”, cloud-native tools force you to purposefully flip from region to region to check for usage - a painfully slow process that, for that reason, people tend to put off.

First, the Explorer's interactive map shows that two clouds are in use across many regions and zones:

Hyperglance AWS cost explorer

By clicking on these items you can explore the map interactively.

For example, by clicking on the Amazon EC2 cloud resource on the right, I can see the regions and accounts where that resource is being used:

Hyperglance AWS cost explorer

If you only ever use US regions and non-US regions appear, you've spotted rogue cloud use - and a trigger for the Cloud Architect to disable unused cloud regions in future.
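
A hedged sketch of what a DIY "global view" check might look like with boto3 (the allow-list of regions is a hypothetical policy; substitute your own):

```python
# A sketch, assuming a hypothetical region allow-list policy: iterate
# every region the account can see and flag running EC2 instances
# outside the allow-list.
import boto3

ALLOWED_REGIONS = {"us-east-1", "us-west-2"}   # example policy

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

for region in regions:
    if region in ALLOWED_REGIONS:
        continue
    regional = boto3.client("ec2", region_name=region)
    reservations = regional.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for res in reservations:
        for inst in res["Instances"]:
            print(f"Rogue instance {inst['InstanceId']} "
                  f"({inst['InstanceType']}) running in {region}")
```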

5. Codify Your Cloud Cost Policies

Cloud cost guardrails are essential.

The best way to create them is to code your cloud policies into an automated monitoring engine.

For example, you might have a rule to eliminate orphaned EBS snapshots that are over 30 days old.

This can be codified like this:

Hyperglance automating discovery of orphaned EBS volumes
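
If you wanted to script the same check yourself, a rough boto3 equivalent might look like this (treating a snapshot whose source volume no longer exists as "orphaned" is our working definition here, and pagination is omitted for brevity):

```python
# A rough DIY equivalent of the rule above. The 30-day threshold comes
# from the example; "orphaned" here means the snapshot's source volume
# no longer exists. Pagination is omitted for brevity.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cutoff = datetime.now(timezone.utc) - timedelta(days=30)

# Volumes that still exist; snapshots pointing anywhere else are orphans.
live_volumes = {v["VolumeId"] for v in ec2.describe_volumes()["Volumes"]}

snapshots = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
for snap in snapshots:
    if snap["StartTime"] < cutoff and snap.get("VolumeId") not in live_volumes:
        print(f"Orphaned snapshot {snap['SnapshotId']} "
              f"from {snap['StartTime']:%Y-%m-%d}")
```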

Findings like this usually uncover an inefficient cloud practice: people not cleaning up after themselves.

This kind of rule lets the Cloud Architect implement the “Trust but Verify” practice: trust people to clean up, but have a safety net for when they don’t.

Cloud Optimization Best Practices

Cloud cost optimization is essential for organizations looking to maximize the value of their cloud investments and ensure efficient resource utilization.

Now that you know what to do to reduce cloud costs, here are a few key best practices that will keep things moving quickly and efficiently. Implementing them will help you optimize cloud costs, avoid unnecessary spending, and achieve better financial efficiency in your cloud operations.

  1. Continuously Educate and Engage Teams: Foster a culture of cost optimization by educating and involving teams in cloud cost management. Encourage developers, engineers, and business stakeholders to be mindful of cost implications when making decisions related to cloud resources.
  2. Utilize Cloud Management Tools: Leverage cloud management and cost optimization tools to automate cost analysis, resource management, and rightsizing. These tools provide actionable recommendations and help track and control cloud spending.
  3. Monitor and Analyze Cloud Costs: Regularly review and analyze cloud cost reports and dashboards to identify trends, anomalies, and areas for optimization. Gain insights into cost patterns, usage patterns, and potential cost-saving opportunities.

Remember - regular monitoring, analysis, and continuous improvement are key to effective cloud cost optimization.

🎧 Treat your ears to our list of the best FinOps podcasts

Why is Cloud Cost Optimization Important?

The superpower of the cloud is its dynamic nature and seemingly infinite capacity - but as all good superheroes know, with great power comes great responsibility.

The responsibility for cloud cost usually falls first onto the shoulders of the Cloud Architect. In addition to their technical role, the cloud forces architects to stretch their skillset to understand cloud finance. A Cloud Architect who fails to control cloud costs risks a large, unexpected cloud bill that can jeopardize a cloud project. At its worst, a runaway cost may even threaten the company’s survival.

In the short history of the cloud, there are already numerous infamous stories detailing spiraling costs and angry finance teams (and boards!). Take this example, where Adobe was losing nearly $100k a day.

As that example demonstrates, cloud bills can quickly become eye-watering. Worse still, the growth in costs is frequently waste-led, not value-led.

A recent “State of the Cloud” report by Flexera states that “The median cloud spend for small-to-medium businesses is around $120K, and 10% of SMBs are spending $1.2 million or more”.

cloud spend growth chart
The same report finds that cloud users are wasting 35% of their spend, which means that if you’re an SMB spending $120K on cloud services, your cloud waste is $42,000! That’s the cost of a full-time employee.

The causes of cloud waste are many, but common and simple ones include:

  1. Turning on expensive-per-hour resources like large EC2 instances, not using them, not turning them off, and eventually forgetting about them entirely. This can cost thousands of dollars in a short space of time.
  2. Not understanding the “not so obvious” cloud costs, like network egress fees, the cost of unused IP addresses, or forgotten snapshots.
  3. People using the cloud who aren’t suitably trained.

Who is Responsible for Cloud Cost Analysis & Optimization?

It normally starts with the Cloud Architect.

You can’t really tackle your cloud costs without them: they have to build cost reduction into the architecture (choosing the right size of resource), and they know the most about the cloud.

The AWS Well-Architected Framework was written for Cloud Architects; one of the framework's six pillars is dedicated to Cost Optimization.

This is the person who has learned the most about the cloud, probably holds a professional-level cloud certificate, and is responsible for designing how cloud resources are deployed to support an application.

The secret to cloud cost reduction is building cost-saving practices into the architecture. Nobody other than the Cloud Architect can do that. The AWS Well-Architected Framework's Cost Optimization Pillar tells them what they need to do.

Is Cloud Cost Optimization Difficult?

Not at all.

You don’t need to be certified in FinOps to be good at it.

You don’t need to be an accountant or a cloud cost specialist.

All you need is a Cloud Architect with the right actions and tools, plus an appreciation of the AWS Well-Architected Framework as a minimum, to save your company a lot of money in the cloud.

Obviously, as your cloud bill and cloud wastage grow, there comes a point where it makes sense to dedicate an expert to this.

This raises the question... do you need a FinOps team to help?

Not always.

In fact, if you build a FinOps team too early you may spend more on headcount than you save in cloud costs.

The Cloud Architect implementing some sane cost practices is a good foundation for future FinOps if it becomes applicable.

How Often Do I Need To Optimize?

If you're using a tool that involves relatively heavy lifting from a human, we'd suggest monthly checks as a minimum. Ideally, you'd be checking in real-time, so every second without automated monitoring adds risk.

When it comes to repetitive tasks, humans are complacent at best, and negligent at worst. The best (and lowest-risk) way to maintain constant downward pressure on cloud costs is to use a third-party tool. These tools can use pre-defined customizable rules to monitor your cloud 24/7, fixing issues as they occur.

What Cloud Cost Analysis Tools Can I Use to Reduce My Cloud Bill?

Firstly, each cloud provider has valuable (mostly) free options, e.g. AWS Cost Explorer.

These tend to be dashboard-style charts and tables that visualize your costs and allow you to dig down into specific areas.

To some degree, these rely on your architecture being well managed (think tagging strategy). To a larger degree, these tools require human effort. Maximizing the value of these tools is challenging.
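
For instance, the native AWS Cost Explorer API can be queried programmatically. Here's a hedged boto3 sketch that groups a month's spend by a hypothetical 'team' cost-allocation tag (the tag key and dates are placeholders, and the results are only as good as your tagging):

```python
# A hedged sketch of pulling cost data from the AWS Cost Explorer API.
# The "team" tag key is a hypothetical cost-allocation tag and the
# dates are example values -- substitute your own.
import boto3

ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        tag_value = group["Keys"][0]          # e.g. "team$payments"
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(f"{tag_value}: ${float(amount):,.2f}")
```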

Best-in-class cost optimization for AWS & Azure is only possible using third-party tools.

Not only are these tools incentivized to lower your cloud bill, they dig far deeper into your costs and save you time.

When looking for a third-party tool, e.g. Hyperglance, make sure it includes these features:

  • Multi-cloud coverage
  • Real-time monitoring & alerting
  • Pre-defined, customizable rule-based automations
  • Visualization of your costs with the ability to overlay them onto a diagram of your architecture
  • Reserved Instance (RI) recommendations (AWS)

Hyperglance & Cloud Cost Optimization

If you're looking to improve your cloud cost optimization, Hyperglance is the perfect place to start.

Hyperglance gives you complete cloud management, enabling confidence in your security posture and cost management whilst providing enlightening, real-time architecture diagrams.

Monitor security & compliance, manage costs & reduce your bill, explore interactive diagrams & inventory, and leverage built-in automation. Save time & money, and get complete peace of mind.

Experience it all, for free, with a 14-day trial.

About The Author: Stephen Lucas

As Hyperglance's Chief Product Officer, Stephen is responsible for the Hyperglance product roadmap. Stephen has over 20 years of experience in product management, project management, and cloud strategy across various industries.

Follow Stephen on LinkedIn >