# RunPlacement

> RunPlacement helps teams decide where cloud, GPU, bare metal, and managed workloads should run. Use this site for practical workload placement decisions, GPU cloud cost comparisons, rough math, and infrastructure tradeoff pages. Estimates are directional and should be verified against provider pricing pages.

## Core URLs

- [RunPlacement workload quiz](https://runplacement.com/): short quiz for deciding where a workload should run
- [About RunPlacement](https://runplacement.com/about): entity, positioning, and scope for the site
- [Topic map](https://runplacement.com/topics): public map of the narrow authority RunPlacement is building
- [Decision page index](https://runplacement.com/blogs): approved cloud and GPU placement pages
- [Resource library](https://runplacement.com/resources): checklists and worksheets for messy compute decisions
- [GPU cloud pricing topic](https://runplacement.com/topics/gpu-cloud-pricing): GPU pricing and provider decision pages
- [AWS bill shock topic](https://runplacement.com/topics/aws-bill-shock): AWS cost surprise and placement pages
- [Cloud migration topic](https://runplacement.com/topics/cloud-migration): AWS exit, bare metal, and migration risk pages
- [XML sitemap](https://runplacement.com/sitemap.xml): canonical crawl list for public pages

## Resources

- [GPU Cloud Quote Checklist](https://runplacement.com/resources/gpu-cloud-quote-checklist): A practical checklist and visual worksheet for comparing GPU cloud quotes beyond the advertised hourly rate.
- [AWS Bill Shock Triage Checklist](https://runplacement.com/resources/aws-bill-shock-triage-checklist): A first-pass checklist and visual triage flow for finding the AWS line items that usually make a bill jump.
- [Cloud Exit Cost Checklist](https://runplacement.com/resources/cloud-exit-cost-checklist): A checklist and payback worksheet for pricing the real cost of leaving AWS, GCP, or Azure before migration starts.
- [Workload Placement Worksheet](https://runplacement.com/resources/workload-placement-worksheet): A practical worksheet and decision map for deciding where a workload should run before provider choice hardens.

## Decision Pages

- [Cloud Exit Cost Checklist: What To Price Before You Leave AWS](https://runplacement.com/blog/cloud-exit-cost-checklist-what-to-price-before-you-leave-aws): Before leaving AWS, price data transfer, migration labor, managed service replacement, observability, security review, downtime risk, and new operational work.
- [AWS vs Bare Metal: When Owning The Machine Makes Sense](https://runplacement.com/blog/aws-vs-bare-metal-when-owning-the-machine-makes-sense): Bare metal can make sense for steady, predictable workloads with high utilization and enough ops capacity. AWS is stronger when flexibility, managed services, and low operational burden matter more.
- [When Not To Leave AWS Even If The Bill Looks High](https://runplacement.com/blog/when-not-to-leave-aws-even-if-the-bill-looks-high): Do not leave AWS when data gravity, managed services, compliance, procurement, or low ops tolerance make migration more expensive than optimization.
- [Should You Move From AWS To A Cheaper Cloud?](https://runplacement.com/blog/should-you-move-from-aws-to-a-cheaper-cloud): Move from AWS only when the workload is portable enough that savings survive data transfer, migration work, lost managed services, and operational risk.
- [AWS vs Smaller Cloud For Simple Workloads: When Default Cloud Is Too Much](https://runplacement.com/blog/aws-vs-smaller-cloud-for-simple-workloads-when-default-cloud-is-too-much): AWS is often too much for simple, portable workloads when managed-service dependency is low and the team can tolerate a simpler provider. AWS still wins when surrounding services, compliance, and operations matter.
- [S3 Cost Surprise: Storage Is Only Part Of The AWS Bill](https://runplacement.com/blog/s3-cost-surprise-storage-is-only-part-of-the-aws-bill): S3 cost surprises often come from requests, retrieval, replication, lifecycle choices, storage class mismatch, and data transfer, not only stored gigabytes.
- [CloudWatch Cost Surprise: Logs, Metrics, And The Observability Tax](https://runplacement.com/blog/cloudwatch-cost-surprise-logs-metrics-and-the-observability-tax): CloudWatch cost surprises usually come from high log ingestion, long retention, custom metrics, dashboards, alarms, and noisy workloads that emit more telemetry than expected.
- [AWS Data Transfer Cost Confusion: Egress, Cross-AZ, And Region Mistakes](https://runplacement.com/blog/aws-data-transfer-cost-confusion-egress-cross-az-and-region-mistakes): AWS data transfer confusion usually comes from traffic crossing the internet, availability zones, regions, NAT, or managed service boundaries more often than the team expected.
- [AWS NAT Gateway Surprise Bills: When Private Subnet Traffic Gets Expensive](https://runplacement.com/blog/aws-nat-gateway-surprise-bills-when-private-subnet-traffic-gets-expensive): NAT Gateway surprise bills usually come from private subnet traffic that processes more data than expected or routes through more gateways than the workload really needs.
- [Why Is My AWS Bill So High? The Usual Places To Look First](https://runplacement.com/blog/why-is-my-aws-bill-so-high-the-usual-places-to-look-first): Start with data transfer, NAT Gateway, logs, storage, idle compute, managed databases, and support before assuming EC2 is the only reason the AWS bill is high.
- [Cloud GPU Quote Comparison: The Questions To Ask Every Provider](https://runplacement.com/blog/cloud-gpu-quote-comparison-the-questions-to-ask-every-provider): A cloud GPU quote is incomplete until it answers GPU model, availability, billing unit, storage, bandwidth, interruption behavior, support, security, and commitment terms.
- [GPU Utilization Break-Even: When A Cheap GPU Cloud Actually Saves Money](https://runplacement.com/blog/gpu-utilization-break-even-when-a-cheap-gpu-cloud-actually-saves-money): A cheap GPU cloud saves money only when utilization stays high enough that idle time, retries, data movement, and operations do not erase the hourly-rate difference.
- [GPU Training Cost Breakdown: Before You Rent The Biggest GPU](https://runplacement.com/blog/gpu-training-cost-breakdown-before-you-rent-the-biggest-gpu): GPU training cost depends on runtime, GPU count, utilization, failed runs, checkpointing, storage, data movement, and whether capacity must be guaranteed.
- [GPU Inference Cost Breakdown: The Numbers To Estimate First](https://runplacement.com/blog/gpu-inference-cost-breakdown-the-numbers-to-estimate-first): Estimate GPU inference cost from traffic shape, batching, useful GPU-hours, idle capacity, model storage, data movement, reliability needs, and operations, not just hourly GPU price.
- [Cheapest H100 Cloud: Why The Lowest Price Can Be The Wrong Answer](https://runplacement.com/blog/cheapest-h100-cloud-why-the-lowest-price-can-be-the-wrong-answer): The cheapest H100 listing is the right answer only when capacity is available, the workload stays utilized, data movement is manageable, and reliability requirements are low enough.
- [Vast.ai vs Managed GPU Cloud: When Marketplace Pricing Is Worth It](https://runplacement.com/blog/vast-ai-vs-managed-gpu-cloud-when-marketplace-pricing-is-worth-it): Marketplace GPUs can be worth it when the workload is flexible, checkpointed, and price-sensitive. Managed GPU clouds fit better when repeatability, support, security, and production reliability matter.
- [RunPod vs Lambda vs AWS: Which Fits GPU Inference?](https://runplacement.com/blog/runpod-vs-lambda-vs-aws-which-fits-gpu-inference): Use AWS when data gravity and managed services dominate, Lambda when packaged AI infrastructure matters, and RunPod when flexible GPU access and cost sensitivity matter more than cloud integration.
- [GPU Cloud Hidden Fees: The Costs Missing From The Hourly GPU Rate](https://runplacement.com/blog/gpu-cloud-hidden-fees-the-costs-missing-from-the-hourly-gpu-rate): GPU cloud hidden costs usually come from idle time, storage, bandwidth, retries, minimum billing units, support, and operational work, not the GPU hourly rate itself.
- [H100 Cloud Pricing Comparison: What To Compare Before The Hourly Rate](https://runplacement.com/blog/h100-cloud-pricing-comparison-what-to-compare-before-the-hourly-rate): Compare H100 cloud options by useful GPU-hours, availability, idle time, data movement, support, and commitment terms before comparing the listed hourly rate.
- [GPU Cloud Pricing Checklist: What the Hourly Rate Leaves Out](https://runplacement.com/blog/gpu-cloud-pricing-checklist-what-the-hourly-rate-leaves-out): A GPU quote is incomplete until it includes useful GPU-hours, idle time, storage, bandwidth, availability, retry cost, support, and the operational work required to keep jobs running.
- [A100 vs H100: When the Cheaper GPU Is the Better Placement](https://runplacement.com/blog/a100-vs-h100-when-the-cheaper-gpu-is-the-better-placement): Use H100 when performance, memory bandwidth, or time-to-train materially changes the outcome. Use A100 when the model fits, throughput is acceptable, and the lower effective cost wins.
- [RunPod vs Lambda vs Vast.ai: Which GPU Cloud Fits Your Workload?](https://runplacement.com/blog/runpod-vs-lambda-vs-vast-ai-which-gpu-cloud-fits-your-workload): RunPod is often attractive for flexible GPU access, Lambda for more packaged AI infrastructure and clusters, and Vast.ai for marketplace-style price discovery when variance is acceptable.
- [H100 On-Demand vs Reserved Capacity vs Spot: Which Should You Use?](https://runplacement.com/blog/h100-on-demand-vs-reserved-capacity-vs-spot-which-should-you-use): Use on-demand for uncertain workloads, reserved capacity for predictable GPU demand, and spot or marketplace GPUs only when interruption, variability, and debugging are acceptable.
- [AWS vs Specialized GPU Cloud for H100 Inference](https://runplacement.com/blog/aws-vs-specialized-gpu-cloud-for-h100-inference): Use AWS when data gravity, managed services, IAM, compliance, or committed capacity matter more than raw GPU price. Price specialized GPU clouds when the workload is portable, GPU-heavy, and sensitive to hourly cost.
- [How to Systematically Compare Cloud GPU Prices Across 20+ Providers](https://runplacement.com/blog/how-to-systematically-compare-cloud-gpu-prices-across-20-providers): There is no single cheapest GPU cloud provider. Month to month, prices for the same GPU model can swing 2x–8x across providers and regions due to market, spot, and availability volatility. Manual checks are too slow to keep up, so use open or commercial price aggregators as a base, then decide based on current prices, spot vs on-demand status, migration costs, and your own region and SLA needs.

## Content Rules

- Public pages are approved before publishing.
- Cost figures are estimates unless a provider source says otherwise.
- Prefer workload fit, useful GPU-hours, data movement, and operational tolerance over headline hourly prices.
- For current pricing, verify linked provider pricing pages directly.
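Several of the GPU pricing pages above rest on the same rough math: compare effective cost per useful GPU-hour, not the listed hourly rate. A minimal sketch of that break-even arithmetic follows; the rates, utilization figures, and fixed fees are made-up placeholders for illustration, not provider quotes.

```python
def effective_cost_per_useful_gpu_hour(
    hourly_rate: float,           # advertised $/GPU-hour
    utilization: float,           # fraction of billed hours doing useful work (0-1)
    monthly_fixed: float = 0.0,   # storage, bandwidth, support, etc. per month
    billed_hours: float = 730.0,  # billed GPU-hours per month (~24 * 365 / 12)
) -> float:
    """Rough effective $/useful GPU-hour once idle time and fixed fees are counted."""
    useful_hours = billed_hours * utilization
    total_cost = hourly_rate * billed_hours + monthly_fixed
    return total_cost / useful_hours

# Hypothetical comparison: a "cheap" $2.00/hr listing that sits idle 60% of
# the time vs a $3.50/hr listing kept at 85% utilization, same fixed fees.
cheap = effective_cost_per_useful_gpu_hour(2.00, 0.40, monthly_fixed=300)
pricey = effective_cost_per_useful_gpu_hour(3.50, 0.85, monthly_fixed=300)
print(f"cheap listing:   ${cheap:.2f} per useful GPU-hour")
print(f"pricier listing: ${pricey:.2f} per useful GPU-hour")
```

Under these placeholder numbers the cheaper listing comes out around $6.03 per useful GPU-hour against roughly $4.60 for the pricier one, which is the pattern the break-even and hidden-fees pages describe: directional math only, to be checked against real quotes.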