One of the primary advantages of moving from the data center to the cloud is the speed and flexibility to make changes to your infrastructure. As anyone who has managed infrastructure at scale can tell you, however, this flexibility means you end up keeping one eye on your bill at all times.
As public cloud vendors evolve, the increasing complexity of the ecosystem makes managing your spending incredibly difficult. We regularly see customers' services deployed in dozens of regions and hundreds of accounts, across multiple vendors with multiple layers of custom pricing agreements.
With our Anomaly Detection service, we'll let you know if we see something unusual. Paired with our Budgets & Forecasting service, you can be confident that if something has changed in your cloud infrastructure, you'll be the first to know about it.
Minimal Setup, Immediate Action
We've designed our system to require almost no setup. We look for anomalies in every service across AWS and Azure, so there's no need to opt in to particular services. This is important because we often see anomalies where somebody kicks the tires on a new service and forgets to turn it off, or doesn't fully understand how its billing works.
We look at how you've historically tagged your infrastructure so that anomalies reflect tags you're already using. If you generally do a good job of using the Team tag, we'll see that and report anomalies with the particular Team value identified. This has the added bonus of reporting spikes in untagged spending.
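As a rough illustration of what that grouping looks like, here's a minimal sketch in Python. The row shape (date, cost, tags) and the "(untagged)" bucket are assumptions for the example, not our internal schema.

```python
from collections import defaultdict

def daily_spend_by_tag(billing_rows, tag_key="Team"):
    """Sum daily spend per tag value, keeping untagged spend in its own bucket.

    billing_rows is a hypothetical list of dicts like
    {"date": "2023-05-01", "cost": 12.34, "tags": {"Team": "Barb"}}.
    """
    totals = defaultdict(float)
    for row in billing_rows:
        tag_value = row.get("tags", {}).get(tag_key, "(untagged)")
        totals[(row["date"], tag_value)] += row["cost"]
    return totals
```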
Typically we see billing files updated 4-6 times per day. Because it's important to catch unusual spending patterns as quickly as possible, we check for anomalies as soon as we receive an updated billing file.
We've calibrated a variety of algorithms to check for spending patterns that depart from the norm. Because most humans don't want to talk about Holt-Winters smoothing parameters, autocorrelation functions, seasonal-trend decomposition or even median absolute deviation thresholds, our service requires no tuning of any model parameters.
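We never ask you to tune those models, but if you're curious about the flavor of check involved, here's a toy median-absolute-deviation detector. It's a simplified illustration, not our production algorithm, and the 3.5-deviation threshold is just a common rule of thumb.

```python
import statistics

def mad_anomalies(daily_costs, threshold=3.5):
    """Flag days whose cost deviates from the median by more than `threshold`
    median absolute deviations. daily_costs is a list of (date, cost) tuples."""
    costs = [cost for _, cost in daily_costs]
    median = statistics.median(costs)
    mad = statistics.median(abs(cost - median) for cost in costs) or 1.0  # avoid divide-by-zero
    return [
        (day, cost)
        for day, cost in daily_costs
        if abs(cost - median) / mad > threshold
    ]
```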
Instead, if we see an anomaly of at least $500, we'll send you an email. If you're getting too many or too few anomalies, you can adjust this dollar value threshold in your email subscription. In the UI, you'll always be able to see all your anomalies.
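In code terms, the email threshold is simply a filter over anomalies that have already been detected. A sketch, with a hypothetical `impact` field holding the estimated dollar effect:

```python
EMAIL_THRESHOLD_USD = 500  # adjustable per email subscription

def anomalies_to_email(anomalies, threshold=EMAIL_THRESHOLD_USD):
    """Only anomalies at or above the dollar threshold trigger an email;
    every anomaly remains visible in the UI regardless."""
    return [anomaly for anomaly in anomalies if anomaly["impact"] >= threshold]
```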
I see the anomaly but I want to dig deeper. How do I do that?
You can always click through to our reporting interface to add dimensions or change the date range for further investigation.
What if I have an anomaly that spans multiple tag values or accounts?
By our definition, an anomaly is scoped to a single combination of tag values, account, and service. A single underlying event could easily trigger multiple anomalies. For example, an out-of-control ETL process could produce anomalies in S3 API Requests, EMR Instance Usage, and EC2 Data Transfer. The same is true for anomalies that span multiple tags.
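To make that scoping concrete, here's what the runaway ETL example might look like as separate anomaly records. The field names and dollar figures are made up for illustration:

```python
# One runaway ETL job, three separate anomalies: each record is scoped to a
# single combination of account, service, and tag values.
anomalies = [
    {"account": "123456789012", "service": "S3 API Requests",
     "tags": {"Team": "Data Eng"}, "impact": 1200.0},
    {"account": "123456789012", "service": "EMR Instance Usage",
     "tags": {"Team": "Data Eng"}, "impact": 4800.0},
    {"account": "123456789012", "service": "EC2 Data Transfer",
     "tags": {"Team": "Data Eng"}, "impact": 900.0},
]
```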
We don't use tags. Will anomaly detection still work?
It will, but you should really use tags! We'll still surface anomalies, but without tags there will be less specificity. For example, imagine you have multiple teams using EBS Storage (which you almost certainly do). If Team Barb's spending disappears while Team Power & Light's increases dramatically, these two changes could cancel each other out at the aggregate level.
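Here's that cancellation with made-up numbers:

```python
# Illustrative daily EBS Storage spend, not real data.
barb_yesterday, barb_today = 400.0, 0.0        # Team Barb's spend disappears
power_yesterday, power_today = 450.0, 830.0    # Team Power & Light spikes

aggregate_change = (barb_today + power_today) - (barb_yesterday + power_yesterday)
# aggregate_change == -20.0: two large per-team swings nearly cancel out,
# so an untagged, account-level view sees almost nothing unusual.
```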
What about Views?
Of course! We often see people set lower alert thresholds for particular teams or services and use Views to draw those boundaries.