AWS Bedrock Cost Calculator

Estimate costs for chat, document processing, and image analysis, and compare Bedrock models side by side.

AWS Bedrock gives you access to Claude, Nova, Llama, Mistral, and now OpenAI models through a single API. But estimating what you'll actually pay is harder than it should be. AWS's pricing calculator works with requests per minute and hours per day. It takes an engineer to convert a project to those units. This calculator starts where you do: with users, conversations, documents, and images. Enter your volumes, pick your model from a live comparison table, and download a shareable cost report. No account required.
Note: Actual usage can vary based on input/output requirements, prompt optimization, model routing, batch processing, and various price-optimization techniques within AWS.

New: OpenAI models on Bedrock

OpenAI's gpt-oss-20b and gpt-oss-120b are now available on AWS Bedrock (both included in this calculator). These are the open-weight models that arrived prior to the headline OpenAI/AWS partnership announcement. For teams already building on Bedrock, they add two more options at competitive pricing, without changing your infrastructure or API setup.

Frequently asked questions

How much does AWS Bedrock cost?

Bedrock charges per token with no upfront commitment. Input and output tokens are billed separately at rates that vary by model. Amazon Nova Micro starts at $0.035 per million input tokens. Claude Opus 4.7 runs $5.00 per million input tokens and $25.00 per million output tokens. Many production workloads fall between $50 and $2,000 per month depending on volume and model choice. Use the calculator above to estimate costs for your specific workload.
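Per-token billing reduces to a simple formula: tokens times rate, summed for input and output. A minimal sketch of that math, using Nova Micro's rates quoted above; the workload numbers (requests per month, tokens per request) are illustrative assumptions, not calculator defaults:

```python
# Rough monthly cost from per-million-token rates.
# Rates are USD per million tokens; workload numbers are illustrative.

def monthly_cost(requests_per_month, input_tokens, output_tokens,
                 input_rate, output_rate):
    """Estimate monthly USD cost for a uniform workload."""
    total_input = requests_per_month * input_tokens
    total_output = requests_per_month * output_tokens
    return (total_input * input_rate + total_output * output_rate) / 1_000_000

# Example: 10,000 chat turns/month, ~500 input and ~300 output tokens each,
# at Nova Micro's rates ($0.035 in / $0.14 out per million tokens).
print(monthly_cost(10_000, 500, 300, 0.035, 0.14))  # ≈ $0.60/month
```

The same function applied to a heavier workload or a pricier model shows how quickly volume and model choice dominate the estimate.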

How is this different from the AWS pricing calculator?

The AWS pricing calculator models Bedrock costs as requests per minute and compute hours. Those units are useful for billing engineers reconciling a bill; they're awkward for teams deciding what to build. This calculator starts with workload types: chat users per month, documents per month, images per month. Enter your volumes, review available models in a comparison table (you can even add your own), and download a PDF cost report. It takes about two minutes.

Which Bedrock model is cheapest?

Amazon Nova Micro is the most cost-effective option for text-only workloads at $0.035 input / $0.14 output per million tokens. For vision tasks, Amazon Nova Lite is the lowest-cost model with native image support at $0.06 input / $0.24 output per million tokens. The comparison table in the calculator above shows cost estimates for every model based on your specific inputs.
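The "cheapest for your workload" question is just a ranking over the per-model rates. A sketch of that comparison using the two rate pairs quoted above; the model list and the 50,000-requests-per-month text workload are illustrative assumptions:

```python
# Rank candidate models by estimated monthly cost for one text workload.
# Rates are USD per million tokens, taken from the text above.
RATES = {
    "Amazon Nova Micro": (0.035, 0.14),
    "Amazon Nova Lite": (0.06, 0.24),
}

def monthly_cost(requests, in_tokens, out_tokens, in_rate, out_rate):
    return (requests * in_tokens * in_rate
            + requests * out_tokens * out_rate) / 1_000_000

# 50,000 chat turns/month, ~1,500 input and ~250 output tokens each.
ranked = sorted(RATES, key=lambda m: monthly_cost(50_000, 1_500, 250, *RATES[m]))
for model in ranked:
    print(model, round(monthly_cost(50_000, 1_500, 250, *RATES[model]), 2))
```

For this text-only workload Nova Micro comes out cheapest, which matches the guidance above; for image workloads, Nova Micro drops out and Nova Lite's rates apply.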

Are OpenAI models available on AWS Bedrock?

Yes. OpenAI's gpt-oss-20b and gpt-oss-120b have been available on Bedrock since September 2025. Both are open-weight models designed for text generation, coding, and reasoning tasks. They support a 128K context window and adjustable reasoning levels. OpenAI's frontier models, GPT-5.5 and GPT-5.4, began a limited preview on Bedrock in April 2026, following the end of Microsoft Azure's exclusive hosting arrangement. The gpt-oss models are included in this calculator.

Does Bedrock pricing match the direct Anthropic API?

Yes, for Claude models. Anthropic and AWS price identically in standard regions. Cross-region inference on Bedrock adds approximately 10%. The primary reasons to choose Bedrock over calling Anthropic directly are unified AWS billing, IAM access control, VPC support, and enterprise compliance features. The cost difference is rarely the deciding factor.

What is a token in AWS Bedrock?

A token is the unit Bedrock uses to measure text. One token is roughly four characters, or about three-quarters of an English word; a 1,000-word document contains approximately 1,300 tokens. Input tokens (your prompt) and output tokens (the model's response) are billed at different rates. The calculator's field hints explain typical token counts for common use cases.
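Those rules of thumb translate directly into a back-of-envelope estimator. A sketch using the ratios from the text; these are heuristics, and actual counts depend on the model's tokenizer:

```python
# Back-of-envelope token estimates from the rules of thumb above:
# ~4 characters per token, or ~0.75 English words per token.
# Heuristics only; real counts depend on the model's tokenizer.

def estimate_tokens_from_words(word_count):
    return round(word_count / 0.75)

def estimate_tokens_from_chars(char_count):
    return round(char_count / 4)

# A 1,000-word document lands near the ~1,300-token figure quoted above.
print(estimate_tokens_from_words(1000))  # 1333
```

Either estimator is close enough for cost planning; billing always uses the model's actual tokenizer counts.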

Can I compare multiple models before committing?

That's what the comparison table is for. Enable a workload, enter your volumes, and the table updates in real time to show monthly and annual cost estimates for every available model. You can also compare benchmark scores for coding (SWE-bench), reasoning (GPQA Diamond), and multimodal tasks (MMMU-Pro) to evaluate capability alongside cost. Click column headings to sort results. Click any row to select a model. The cost summary at the bottom updates immediately.

I'm building AI agents, not just running inference. Is there a separate calculator?

Yes. The calculator above covers Bedrock inference costs. If you're building on Amazon Bedrock AgentCore, there are additional runtime, memory, gateway, and observability costs that don't appear here. Use our AgentCore Cost Calculator for a full agent infrastructure estimate.

I'm working on a document processing workflow, not just running inference. How should I calculate costs?

The calculator above covers Bedrock inference costs for document analysis, but that's not the only approach to document processing in AWS. Use our Intelligent Document Processing Cost Calculator for a comparison between Textract, Bedrock Data Automation, and Bedrock models.

Ready to build?

Our AgentCore Accelerate Program and Intelligent Document Processing Program take teams from zero to a production-ready AI deployment in two weeks. AWS funding programs often cover part or all of the cost for eligible projects. Tech 42 is an AWS Advanced Tier Services Partner with 300+ AI and cloud projects delivered.