AWS S3 Cost Optimization: Reduce Storage Costs 40–95%

S3 appears on every AWS bill - and for most teams, it's 2–3× more expensive than it needs to be. Lifecycle policies, Intelligent-Tiering, and eliminating invisible waste can cut S3 costs dramatically with zero performance impact.

Glacier Deep Archive: 96% cheaper than Standard
S3 Gateway Endpoint eliminates NAT costs
Incomplete multipart uploads billed invisibly
Versioned objects accumulate silently

S3 Storage Classes: Cost Comparison

Most teams use S3 Standard for everything. Moving data to the right class based on access frequency is the biggest lever.

Storage Class | Storage Cost | Retrieval Fee | Best for
S3 Standard | $0.023/GB/mo | Free | Frequently accessed data, < 30 days old
S3 Standard-IA | $0.0125/GB/mo | $0.01/GB | Accessed less than monthly (logs, backups > 30 days)
S3 One Zone-IA | $0.01/GB/mo | $0.01/GB | Reproducible data, single AZ acceptable
S3 Glacier Instant | $0.004/GB/mo | $0.03/GB | Quarterly access (audit logs, compliance data)
S3 Glacier Flexible | $0.0036/GB/mo | $0.01/GB + delay | Annual access (long-term archival)
S3 Glacier Deep Archive | $0.00099/GB/mo | $0.02/GB + 12h delay | 7–10 year archival, rarely accessed
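To make the table concrete, here is the monthly bill for the same 10 TB in each class, using the per-GB rates above (storage only - retrieval and request fees excluded; the volume is a hypothetical example):

```python
# Monthly storage cost for 10 TB (10,240 GB) in each S3 storage class,
# using the per-GB/month rates from the table above. Retrieval and
# request charges are excluded.
RATES = {
    "S3 Standard": 0.023,
    "S3 Standard-IA": 0.0125,
    "S3 One Zone-IA": 0.01,
    "S3 Glacier Instant": 0.004,
    "S3 Glacier Flexible": 0.0036,
    "S3 Glacier Deep Archive": 0.00099,
}

GB = 10 * 1024  # 10 TB
for name, rate in RATES.items():
    print(f"{name:24s} ${GB * rate:8.2f}/month")
```

The spread is stark: the same 10 TB costs about $235/month in Standard and about $10/month in Deep Archive.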

5 S3 Cost Optimizations Ranked by ROI

Apply these in order - highest savings first.

1

Add lifecycle policies to transition infrequent data

30 minutes · S3 console or Terraform · Saves $300–5,000/month

S3 Standard costs $0.023/GB/month. S3 Standard-IA costs $0.0125/GB/month (45% cheaper) for data accessed less than once a month. S3 Glacier Instant Retrieval costs $0.004/GB/month (83% cheaper) for data accessed quarterly. Lifecycle policies automate these transitions - set once, save forever.

How to implement

  1. Identify buckets with large volumes of objects older than 30 days (S3 Storage Lens → Bucket activity report)
  2. Add a lifecycle rule: transition to Standard-IA after 30 days, Glacier Instant after 90 days
  3. AWS CLI: aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json
  4. For log buckets: transition after 30 days, expire (delete) after 365 days
  5. Enable S3 Storage Class Analysis for 30 days first - it tells you which objects are actually accessed infrequently

Note: Log buckets are the most common source of S3 waste. CloudTrail, ALB access logs, VPC flow logs, and application logs accumulate indefinitely by default with no lifecycle policy.
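The steps above translate into a single lifecycle configuration. A minimal sketch (the bucket name and rule ID are placeholders; the 365-day expiration applies the log-bucket rule from step 4 - drop it for buckets you want to keep indefinitely):

```python
# Sketch of a lifecycle configuration implementing steps 2 and 4 above:
# Standard-IA at 30 days, Glacier Instant Retrieval at 90 days, and
# (for log buckets) expiration at 365 days.
import json

lifecycle = {
    "Rules": [
        {
            "ID": "tier-down-and-expire-logs",  # placeholder rule ID
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # applies to the whole bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER_IR"},
            ],
            "Expiration": {"Days": 365},  # log buckets only (step 4)
        }
    ]
}

# Writes the file consumed by the CLI command in step 3:
#   aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
#     --lifecycle-configuration file://lifecycle.json
with open("lifecycle.json", "w") as f:
    json.dump(lifecycle, f, indent=2)
print(json.dumps(lifecycle, indent=2))
```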

2

Enable S3 Intelligent-Tiering for unpredictable access patterns

15 minutes · Per-bucket setting · Saves 20–95% depending on access frequency

S3 Intelligent-Tiering monitors object access and automatically moves objects between tiers (Frequent Access, Infrequent Access, Archive Instant Access, and optional Archive tiers). There are no retrieval fees in any tier. Objects not accessed for 30 days move to Infrequent Access automatically - roughly 45% cheaper with no manual work.

How to implement

  1. Enable on buckets where access patterns are mixed or unpredictable (application assets, ML training data, media)
  2. S3 console → bucket → Properties → Intelligent-Tiering Archive configurations
  3. Archive Instant Access kicks in automatically at 90 days; for deeper savings on rarely accessed objects, opt in to the Archive Access and Deep Archive Access tiers (90-day and 180-day minimums)
  4. Intelligent-Tiering charges a $0.0025/1,000 objects monitoring fee - only worth it for objects > 128 KB
  5. Exclude small objects (< 128 KB) from Intelligent-Tiering using a size filter in the lifecycle rule

Note: Intelligent-Tiering is the best choice when you don't know access patterns. If you know objects are rarely accessed, Standard-IA or Glacier is cheaper (no monitoring fee).
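The size filter from step 5 can be expressed as a lifecycle rule that moves only objects over 128 KB into Intelligent-Tiering. A sketch (rule ID is a placeholder; `Days: 0` transitions objects as soon as the rule runs):

```python
# Sketch: lifecycle rule transitioning objects into Intelligent-Tiering
# immediately, filtered to objects larger than 128 KB so small objects
# stay out of Intelligent-Tiering (step 5 above).
import json

rule = {
    "ID": "intelligent-tiering-large-objects",  # placeholder rule ID
    "Status": "Enabled",
    "Filter": {"ObjectSizeGreaterThan": 131072},  # 128 KB in bytes
    "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
}
print(json.dumps({"Rules": [rule]}, indent=2))
```

Apply it the same way as any lifecycle configuration (`aws s3api put-bucket-lifecycle-configuration`), or set Intelligent-Tiering as the storage class at upload time for new objects.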

3

Delete incomplete multipart uploads

5 minutes · One CLI command · Saves $50–500/month

When a multipart upload is initiated but not completed (due to application errors, crashes, or timeouts), the uploaded parts sit in S3 and are billed at standard storage rates - invisibly. These can accumulate to hundreds of GB over time.

How to implement

  1. List incomplete uploads: aws s3api list-multipart-uploads --bucket my-bucket
  2. Add a lifecycle rule to abort incomplete multipart uploads after 7 days: set AbortIncompleteMultipartUpload → DaysAfterInitiation: 7
  3. This cleans up existing uploads AND prevents future accumulation
  4. Check all buckets - this issue is almost universal and easy to miss in Cost Explorer

Note: This is a 5-minute fix that appears in nearly every S3 audit. Easy to overlook because incomplete multipart uploads don't appear in normal object listings.
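The abort rule from step 2 looks like this as a lifecycle configuration (bucket name and rule ID are placeholders):

```python
# Sketch: lifecycle rule aborting incomplete multipart uploads 7 days
# after initiation (step 2 above). Also prevents future accumulation.
import json

cleanup = {
    "Rules": [
        {
            "ID": "abort-stale-multipart-uploads",  # placeholder rule ID
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # applies to the whole bucket
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }
    ]
}
# apply with:
#   aws s3api put-bucket-lifecycle-configuration --bucket my-bucket \
#     --lifecycle-configuration file://cleanup.json
print(json.dumps(cleanup, indent=2))
```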

4

Eliminate unnecessary data transfer costs

2–4 hours · Architecture review · Saves $200–3,000/month

Data leaving S3 to the internet costs $0.09/GB. Data between S3 and EC2 in the same region is free - but if traffic from private subnets reaches S3 through a NAT Gateway, you pay the NAT Gateway's $0.045/GB data processing charge on every byte. A Gateway VPC Endpoint makes that path free. Many teams unknowingly pay for same-region traffic that should cost nothing.

How to implement

  1. Create an S3 Gateway VPC Endpoint in your VPC (free - no hourly charge)
  2. Add the endpoint to route tables for all private subnets
  3. Verify with VPC Flow Logs that traffic to S3 now routes through the endpoint
  4. For public S3 reads, consider CloudFront as a CDN - it caches data and reduces origin fetches
  5. Check S3 Transfer Acceleration - disable it if you don't need faster global uploads

Note: S3 Gateway Endpoints are free and completely eliminate NAT Gateway processing charges for S3 traffic. This is always worth doing.
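The savings are easy to estimate: every GB of S3 traffic currently flowing through NAT Gateway pays the $0.045/GB processing charge; through the Gateway Endpoint it is free. A back-of-envelope calculation (the traffic volume is a hypothetical example):

```python
# Back-of-envelope: NAT Gateway data processing paid on S3 traffic
# vs. the same traffic through a free S3 Gateway Endpoint.
NAT_PROCESSING_PER_GB = 0.045           # NAT Gateway processing charge
monthly_s3_traffic_gb = 5 * 1024        # hypothetical: 5 TB/month to/from S3

nat_cost = monthly_s3_traffic_gb * NAT_PROCESSING_PER_GB
print(f"NAT processing on S3 traffic:     ${nat_cost:.2f}/month")
print(f"Same traffic via Gateway Endpoint: $0.00/month")
```

At 5 TB/month that is roughly $230/month recovered by a one-time routing change.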

5

Remove duplicate and versioned objects

2–4 hours · S3 Batch Operations · Saves $100–2,000/month

S3 Versioning protects against accidental deletes but accumulates old versions indefinitely. Each version is billed at full storage rates. A bucket with heavy write activity can accumulate 10× the storage cost in old versions without anyone noticing.

How to implement

  1. Enable versioning lifecycle: noncurrent versions → expire after 30 days
  2. S3 console → Lifecycle rules → Add rule → Noncurrent version expiration
  3. Use S3 Batch Operations to delete existing noncurrent versions at scale
  4. For buckets where versioning isn't required, disable it entirely
  5. Use S3 Storage Lens to find buckets with high noncurrent object counts

Note: On actively written buckets with versioning enabled for years, noncurrent objects often represent 60–80% of total storage. Expiring them is the highest-ROI S3 cleanup.
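The noncurrent-version expiration from steps 1–2 is one more lifecycle rule (rule ID is a placeholder; objects become "noncurrent" when a newer version is written over them):

```python
# Sketch: lifecycle rule expiring noncurrent object versions 30 days
# after they are superseded (steps 1-2 above). Existing noncurrent
# versions are cleaned up by S3 Batch Operations (step 3).
import json

versions_rule = {
    "Rules": [
        {
            "ID": "expire-old-versions",  # placeholder rule ID
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # applies to the whole bucket
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
        }
    ]
}
print(json.dumps(versions_rule, indent=2))
```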

Frequently Asked Questions

What is the cheapest S3 storage class?

S3 Glacier Deep Archive at $0.00099/GB/month - 96% cheaper than S3 Standard. The trade-off is 12-hour retrieval time. For compliance archives you access once a year or never, this is the right choice.

Does S3 Intelligent-Tiering have a minimum object size?

Objects smaller than 128 KB are never transitioned and are always billed at S3 Standard rates. The $0.0025/1,000 objects monitoring fee makes Intelligent-Tiering cost-effective only for objects larger than 128 KB.

Is data transfer between EC2 and S3 in the same region free?

Only if you use an S3 Gateway VPC Endpoint. Without the endpoint, traffic from private subnets routes through NAT Gateway at $0.045/GB data processing charge. The Gateway Endpoint is free to create and eliminates this cost entirely.

How do I see what's driving my S3 costs?

Enable S3 Storage Lens (free tier covers basic metrics). In Cost Explorer, group by Usage Type and filter by S3 - you'll see separate line items for storage, requests, data transfer, and replication. Storage Class Analysis runs for 30 days and shows actual access frequency per storage class.

What is S3 Requester Pays?

A bucket configuration where the requester pays data transfer and request costs instead of the bucket owner. Useful if you host public datasets and don't want to pay for others downloading your data. Most startups don't need this.

Fixed-price · Risk-free · 3× ROI guarantee

Want someone to audit your S3 spend?

The audit covers S3 storage classes, lifecycle policies, data transfer costs, and versioning waste across all your buckets. Prioritized findings in 1 week.

Start the Audit →

No call needed · Accept agreements · Run one script · Done

Prefer to talk first? Free 30-min call available →