August 22, 2024

The Evolution of OpenAI GPT-4 API Pricing: The Drops from 2023 to 2024

One of the key stories in the 2024 Large Language Model (LLM) market has been the increased competition, leading OpenAI and other providers to make their top models much more affordable. Starting with the launch of GPT-4 in March 2023, we’ve seen consistent price drops from OpenAI that make these technologies more accessible to a wider range of businesses and use cases.

Key Price Reductions

  • March 2023: GPT-4 Launch
    • When GPT-4 was released, it came with a high price—$60 per million tokens for output and $30 per million tokens for input. This was cutting-edge technology, but the cost was a barrier for many.
  • November 2023: GPT-4 Turbo
    • OpenAI introduced GPT-4 Turbo, a more affordable option at $30 per million tokens for output and $10 per million tokens for input. This model offered better capabilities at half the cost, making it easier for businesses to adopt AI.
  • May 2024: GPT-4o
    • OpenAI continued to lower costs with the launch of GPT-4o, priced at $15 per million tokens for output and $5 per million tokens for input, making high-quality AI more affordable once again.
  • July 2024: GPT-4o Mini
    • In response to demand for a more cost-effective option, OpenAI introduced GPT-4o Mini in July 2024. Built specifically as a budget-friendly model, GPT-4o Mini is priced at $0.60 per million output tokens and $0.15 per million input tokens. It competes directly with other small language models such as Google’s Gemini Flash and Anthropic’s Claude Haiku, and it is designed for applications where cost-efficiency is key, making it a better comparison against these smaller models than against larger ones like GPT-4o.
  • August 2024: GPT-4o Price Cut
    • The price of GPT-4o dropped even further to $10 per million tokens for output and $3 per million tokens for input, marking a significant reduction and making it one of the most cost-effective options on the market.
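To see what these price points mean in practice, here is a minimal sketch of estimating a monthly bill at each stage. The workload size (100M input and 20M output tokens per month) is hypothetical; the per-million-token prices are the ones quoted above.

```python
# Dollars per million tokens at each price point described above.
PRICES = {
    "GPT-4 launch":      {"input": 30.00, "output": 60.00},
    "GPT-4 Turbo":       {"input": 10.00, "output": 30.00},
    "GPT-4o launch":     {"input": 5.00,  "output": 15.00},
    "GPT-4o Mini":       {"input": 0.15,  "output": 0.60},
    "GPT-4o (Aug 2024)": {"input": 3.00,  "output": 10.00},
}

def monthly_cost(input_tokens: int, output_tokens: int, price: dict) -> float:
    """Dollar cost for a given monthly token volume at a given price point."""
    return input_tokens / 1e6 * price["input"] + output_tokens / 1e6 * price["output"]

# Hypothetical workload: 100M input and 20M output tokens per month.
for name, price in PRICES.items():
    print(f"{name}: ${monthly_cost(100_000_000, 20_000_000, price):,.2f}")
```

The same workload that cost thousands of dollars a month at launch pricing costs a fraction of that today, which is the practical effect of the reductions listed above.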

In just 17 months, we've seen an 83% drop in output-token pricing, from $60 per million tokens to $10, and a 90% drop in input-token pricing, from $30 per million tokens to $3.
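The arithmetic behind those percentages is straightforward; as a quick sanity check against the prices quoted above:

```python
def percent_drop(old: float, new: float) -> float:
    """Percentage decrease from an old price to a new one."""
    return (old - new) / old * 100

# Output: $60 -> $10 per million tokens; input: $30 -> $3 per million tokens.
output_drop = percent_drop(60, 10)
input_drop = percent_drop(30, 3)
print(f"Output tokens: {output_drop:.0f}% cheaper; input tokens: {input_drop:.0f}% cheaper")
```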

OpenAI Batch API: 50% cheaper

OpenAI also offers its language models through the Batch API, which provides a more cost-effective solution for certain tasks. The Batch API allows you to send asynchronous groups of requests at a 50% lower cost. Here’s how the pricing breaks down:

  • $1.875 per million input tokens
  • $7.50 per million output tokens

This API is ideal for processing jobs that don’t require immediate responses, such as running evaluations, since results are returned within a 24-hour window. Along with the lower costs, the Batch API provides a separate pool of significantly higher rate limits, making it a great option for businesses that need to process large volumes of data.
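The Batch API expects a JSON Lines file in which each line is one request object with a custom_id, method, url, and body. As a minimal sketch (the prompts and the model name here are illustrative, not prescriptive), preparing such a file in Python might look like:

```python
import json

# Illustrative prompts; each becomes one chat-completion request in the batch file.
prompts = ["Summarize this support ticket.", "Classify this review as positive or negative."]

batch_lines = []
for i, prompt in enumerate(prompts):
    batch_lines.append({
        "custom_id": f"request-{i}",  # used to match responses back to requests
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4o-mini",  # assumption: any batch-eligible model works here
            "messages": [{"role": "user", "content": prompt}],
        },
    })

# Write one JSON object per line (the JSONL format the Batch API expects).
with open("batch_input.jsonl", "w") as f:
    for line in batch_lines:
        f.write(json.dumps(line) + "\n")

# With the openai Python SDK, you would then upload the file and create the batch:
#   file = client.files.create(file=open("batch_input.jsonl", "rb"), purpose="batch")
#   batch = client.batches.create(input_file_id=file.id,
#                                 endpoint="/v1/chat/completions",
#                                 completion_window="24h")
```

Once the batch completes (within the 24-hour window), the output file contains one response per line, keyed by the same custom_id values.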

Competitive Landscape

OpenAI isn’t the only one cutting prices. The reductions have been industry-wide, spanning models and hosting/cloud platforms alike. For example, Google’s Gemini models on Google Cloud and Meta’s Llama series on Azure have seen similar price cuts, making advanced AI more accessible across the board.

Smaller language models like GPT-4o Mini, Google’s Gemini Flash, and Anthropic’s Claude Haiku have also introduced another layer to the pricing competition.

Overall, the increased competition, the variety of models, and options like the Batch API are great for AI adoption, and they have made LLMs far more accessible across use cases. Whether a flagship model, a smaller model, or batch processing suits your use case, make sure to take advantage of the evolving market conditions.

Conclusion

Over the past year and a half, the cost of using OpenAI’s GPT models has dropped significantly. By staying informed about these changes, and by exploring options like the Batch API and smaller models, you can take advantage of the best deals and optimize your AI investment.


About Nebuly

Nebuly is an LLM user-experience platform. We help companies continuously improve and personalize LLM experiences by capturing valuable insights from AI user interactions. If you're interested in enhancing your LLM user experience, we'd love to chat; please book a demo meeting with us.
