AWS FinOps · Tooling Limitations
Why AWS Cost Tools Don't Actually Reduce Your Bill
You've looked at Cost Explorer. Maybe you tried Trusted Advisor or a third-party platform. The bill is still high. This is the implementation gap - the distance between a cost recommendation and actual savings - and every AWS cost tool stops short of closing it.
The Implementation Gap
Every AWS cost tool - native or third-party - is built to do one thing: identify problems. None of them implement solutions. The steps between "this recommendation exists" and "the bill went down" require human action that tools cannot take.
1. Tool identifies a recommendation ← Tools stop here
2. Human interprets what the recommendation means in context
3. Human prioritises it against other recommendations by ROI
4. Human translates it into a Terraform/CDK change
5. Change goes through code review and deployment
6. Savings Plan or RI purchase is timed and sized correctly
7. Bill actually goes down
An Honest Assessment of Every Major Tool
Each tool has genuine value in the right context. The problem is when teams use them as a substitute for action, rather than as an input to action.
AWS Cost Explorer
Native · Free (+ $0.01/query for advanced)

What it does
Visualises historical and forecast spend by service, account, tag, and usage type. Provides Savings Plan purchase recommendations.
What it doesn't do
Does not implement any changes. Recommendations require manual interpretation, prioritisation, and implementation. No cross-account aggregation without additional setup.
Verdict
Essential for visibility. Zero savings without human action on the data.
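The same data behind Cost Explorer's charts is also available programmatically. A minimal sketch of the request shape for boto3's `get_cost_and_usage` call (dates are illustrative; note the API bills $0.01 per request):

```python
# Sketch: the request shape for pulling monthly spend grouped by service,
# i.e. the raw data behind Cost Explorer's default chart.

def monthly_by_service_params(start: str, end: str) -> dict:
    """Build params for ce.get_cost_and_usage (dates are YYYY-MM-DD)."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "DIMENSION", "Key": "SERVICE"}],
    }
```

With valid credentials, passing this to `boto3.client("ce").get_cost_and_usage(**params)` returns `ResultsByTime` groups that can be sorted to surface the biggest line items - but, as above, that still only produces visibility, not savings.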
AWS Trusted Advisor
Native · Free (limited); Business/Enterprise Support required for cost checks

What it does
Flags specific idle resources, over-provisioned instances, and unused Reserved Instances. Simple checklist format.
What it doesn't do
Requires Business Support ($100/month minimum) for all cost checks. No cross-service analysis, no dollar impact estimates, no prioritisation. Recommendations sit in a dashboard until manually acted upon.
Verdict
Good signal for obvious issues. Misses data transfer, architecture, and commitment strategy entirely.
AWS Compute Optimizer
Native · Free (enhanced: $0.0003360/resource/hour)

What it does
Analyses EC2, Lambda, EBS, and ECS usage and recommends rightsizing. Includes Graviton migration suggestions.
What it doesn't do
Only covers compute and storage. Ignores data transfer, NAT Gateway, API costs, RDS, and the vast majority of AWS services. Recommendations expire and reset; no historical tracking.
Verdict
Excellent for compute rightsizing specifically. Not a substitute for full-account analysis.
CloudHealth / Finout / similar
Third-party · $12,000–15,000/year for $500K cloud spend

What it does
Dashboard aggregating multi-cloud spend. Tagging governance. Showback and chargeback reports. RI/SP inventory tracking.
What it doesn't do
Mostly repackages native AWS recommendations. Does not implement fixes. Engineers still need to act on every finding. High licensing cost relative to value for startups under $500K/month.
Verdict
Appropriate for enterprises managing multi-cloud at scale. Poor ROI for Series A–C startups.
AWS Cost Anomaly Detection
Native · Free

What it does
Machine learning-based alerting when spend is unusually high. Sends alerts within 24 hours of anomalies.
What it doesn't do
Detection only - no investigation, no root cause analysis, no remediation. Alerts tell you something went wrong; they don't tell you what to do.
Verdict
Set it up immediately. It's free and catches spikes before they compound. But it won't reduce your baseline.
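Since setup is the whole battle here, a sketch of the two payloads involved, shaped for boto3's Cost Explorer client (`create_anomaly_monitor`, then `create_anomaly_subscription`). The monitor name, email address, and $100 impact threshold are placeholder assumptions:

```python
# Sketch: payloads for standing up Cost Anomaly Detection via the
# Cost Explorer API. All names and thresholds below are examples.

def anomaly_monitor_params(name: str) -> dict:
    # A DIMENSIONAL monitor on SERVICE watches every AWS service separately.
    return {
        "AnomalyMonitor": {
            "MonitorName": name,
            "MonitorType": "DIMENSIONAL",
            "MonitorDimension": "SERVICE",
        }
    }

def anomaly_subscription_params(name: str, monitor_arn: str, email: str,
                                min_impact_usd: str = "100") -> dict:
    # Alert daily when an anomaly's total dollar impact crosses the threshold.
    return {
        "AnomalySubscription": {
            "SubscriptionName": name,
            "MonitorArnList": [monitor_arn],
            "Subscribers": [{"Type": "EMAIL", "Address": email}],
            "Frequency": "DAILY",
            "ThresholdExpression": {
                "Dimensions": {
                    "Key": "ANOMALY_TOTAL_IMPACT_ABSOLUTE",
                    "MatchOptions": ["GREATER_THAN_OR_EQUAL"],
                    "Values": [min_impact_usd],
                }
            },
        }
    }
```

Create the monitor first with `boto3.client("ce").create_anomaly_monitor(...)`, then pass the returned `MonitorArn` into the subscription call.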
Why DIY Optimisation Captures Only Part of Available Savings
Even when engineering teams take the tool recommendations seriously and act on them, they typically capture only 20–40% of available savings. Here's why.
Wrong optimisation order
Tools surface visible problems: large instances, obviously idle resources. Most DIY efforts start with these, achieving partial savings. The higher-ROI changes - Savings Plan strategy, data transfer architecture, cross-service interactions - require specialised analysis that tools don't provide and engineers don't prioritise.
No cross-service view
Cost Explorer shows service-level spend. It doesn't show that your Lambda function is routing through NAT Gateway to reach S3 (fixable with a free VPC endpoint), or that your EKS cluster's cross-AZ traffic is generating $800/month in hidden transfer charges. These patterns require a full-account analysis, not per-service tool checks.
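The arithmetic behind that NAT Gateway example is easy to check. A back-of-envelope sketch, assuming us-east-1 list prices at the time of writing (verify current rates for your region):

```python
# Back-of-envelope: Lambda -> NAT Gateway -> S3 vs. a free gateway VPC endpoint.
# Rates are us-east-1 list prices and an assumption of this sketch.
NAT_PER_GB = 0.045    # NAT Gateway data processing, $/GB
NAT_HOURLY = 0.045    # NAT Gateway hourly charge, $/hour
HOURS_PER_MONTH = 730

def nat_monthly_cost(gb_per_month: float) -> float:
    return gb_per_month * NAT_PER_GB + HOURS_PER_MONTH * NAT_HOURLY

def s3_gateway_endpoint_cost(gb_per_month: float) -> float:
    # Gateway endpoints for S3 and DynamoDB carry no data or hourly charge.
    return 0.0

print(f"1 TB/month via NAT:      ${nat_monthly_cost(1024):.2f}")
print(f"1 TB/month via endpoint: ${s3_gateway_endpoint_cost(1024):.2f}")
```

A single gateway VPC endpoint for S3 removes the per-GB processing charge entirely; the NAT hourly charge only remains if other traffic still needs the gateway.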
Recommendation overload without prioritisation
Trusted Advisor might surface 40 findings. Compute Optimizer might flag 15 instances. Engineers look at the list, don't know where to start, and either tackle the most familiar findings or deprioritise the whole thing. Without ROI-ordered prioritisation, the highest-value changes often get skipped.
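Closing that prioritisation gap is mostly a sorting problem once each finding carries a savings estimate and an effort estimate. A sketch with illustrative numbers (none of these figures come from a real account):

```python
# Sketch: turn a pile of tool findings into an ROI-ordered worklist.
# Findings, savings, and effort figures below are illustrative only.
findings = [
    {"fix": "Rightsize m5.4xlarge fleet", "monthly_savings": 1200, "eng_hours": 16},
    {"fix": "Delete unattached EBS volumes", "monthly_savings": 90, "eng_hours": 1},
    {"fix": "Add S3 gateway endpoint", "monthly_savings": 400, "eng_hours": 2},
    {"fix": "Purchase Compute Savings Plan", "monthly_savings": 3500, "eng_hours": 8},
]

def roi(f: dict) -> float:
    # Dollars saved per month, per engineering hour invested.
    return f["monthly_savings"] / f["eng_hours"]

for f in sorted(findings, key=roi, reverse=True):
    print(f"{roi(f):7.1f} $/mo per eng-hour  {f['fix']}")
```

Note the most familiar-looking finding (rightsizing) lands last once ordered this way - which is exactly the ordering mistake most DIY efforts make.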
Missing the commitment strategy layer
The largest single lever in AWS cost optimisation is often Savings Plans - up to 72% off on-demand compute pricing, though the deepest discounts require 3-year commitments. Most DIY efforts under-purchase because the risk of over-committing feels real. Correctly sizing a Savings Plan against actual usage patterns requires analysis that goes beyond what any native tool provides.
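One common sizing approach - commit at a low percentile of historical hourly on-demand compute spend, so the commitment stays fully utilised even in quiet hours - can be sketched as follows. The percentile choice and discount rate here are assumptions for illustration, not AWS guidance:

```python
# Sketch: size an hourly Savings Plan commitment from usage history.
# hourly_spend would come from hourly-granularity Cost Explorer data;
# the 10th-percentile floor and 30% discount are assumptions.

def usage_floor(hourly_spend: list[float], percentile: float = 10.0) -> float:
    """On-demand spend level exceeded in roughly (100 - percentile)% of hours."""
    ordered = sorted(hourly_spend)
    idx = int(len(ordered) * percentile / 100)
    return ordered[min(idx, len(ordered) - 1)]

def sp_commitment(on_demand_floor: float, discount: float) -> float:
    # Savings Plan commitments are quoted in discounted dollars: covering a
    # $10/hr on-demand floor at a 30% discount needs a $7/hr commitment.
    return on_demand_floor * (1 - discount)
```

Usage above the commitment simply falls back to on-demand rates, which is why committing at the floor rather than the average keeps the downside small.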
What a Consultant Does That Tools Can't
The value of a FinOps consultant isn't access to better tools. It's the analysis layer between the data and the implementation.
Tools provide raw data: historical spend, per-resource recommendations, and anomaly alerts. A consultant adds the interpretation, cross-service analysis, ROI-ordered prioritisation, commitment sizing, and implementation plan that turn that data into a lower bill.
Common questions
Should I use these tools at all?
Yes - especially the free native ones. Cost Anomaly Detection is free and should be set up immediately. Cost Explorer is essential for visibility. The point isn't that the tools are useless; it's that tools alone won't produce savings without the human analysis and implementation layer on top.
We're already paying for Business Support - does Trusted Advisor cover everything?
Business Support unlocks all Trusted Advisor checks, which is genuinely useful. But Trusted Advisor's cost checks are limited to individual resource recommendations. It won't surface data transfer architecture issues, cross-service interactions, Savings Plan strategy, or the prioritised ROI ordering that turns a list of findings into an implementation plan.
How is a cost audit different from running these tools myself?
An audit uses these tools as data inputs, then applies cross-service analysis, usage pattern modelling, and multi-account aggregation that the tools don't do. The output is a prioritised report with task cards - specific changes, expected savings, implementation complexity, and risk - rather than a list of raw recommendations. The audit closes the implementation gap.