Calculate OpenAI API Costs With The gpt-tokens Library

A JavaScript library that calculates token usage and cost for the OpenAI API, supporting GPT-3.5, GPT-4, and more, to help you keep API integration costs under control.

gpt-tokens is an open-source JavaScript/TypeScript library that makes it easy to estimate token usage and pricing for applications using OpenAI’s GPT models.

It supports a wide range of OpenAI models, including the various iterations of ‘gpt-3.5-turbo’ and ‘gpt-4’, making it adaptable to many kinds of applications.

It is helpful for developers and organizations that rely heavily on OpenAI’s models for tasks like generating human-like text, translation, or content creation. By providing insight into token consumption, the library lets users optimize their usage, predict costs more accurately, and manage their budgets effectively.

GitHub Repo

How to use it:

1. Install and import the gpt-tokens module.

# Yarn
$ yarn add gpt-tokens
# NPM
$ npm i gpt-tokens
import { GPTTokens } from 'gpt-tokens'

2. Calculate the token consumption and cost (in USD) of your OpenAI messages.

const usageInfo = new GPTTokens({
  // Plus enjoy a 25% cost reduction for input tokens on GPT-3.5 Turbo (0.0015 per 1K input tokens)
  plus    : false,
  model   : 'gpt-3.5-turbo-0613',
  messages: [
      {
          'role'   : 'system',
          'content': 'You are a helpful, pattern-following assistant that translates corporate jargon into plain English.',
      },
      {
          'role'   : 'system',
          'name'   : 'example_user',
          'content': 'New synergies will help drive top-line growth.',
      },
      {
          'role'   : 'system',
          'name'   : 'example_assistant',
          'content': 'Things working well together will increase revenue.',
      },
      {
          'role'   : 'system',
          'name'   : 'example_user',
          'content': 'Let\'s circle back when we have more bandwidth to touch base on opportunities for increased leverage.',
      },
      {
          'role'   : 'system',
          'name'   : 'example_assistant',
          'content': 'Let\'s talk later when we\'re less busy about how to do better.',
      },
      {
          'role'   : 'user',
          'content': 'This late pivot means we don\'t have time to boil the ocean for the client deliverable.',
      },
      {
          'role'   : 'assistant',
          'content': 'This last-minute change means we don\'t have enough time to complete the entire project for the client.',
      },
  ]
})
console.table({
  'Tokens prompt'    : usageInfo.promptUsedTokens,
  'Tokens completion': usageInfo.completionUsedTokens,
  'Tokens total'     : usageInfo.usedTokens,
})
console.log('Price USD: ', usageInfo.usedUSD)
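Under the hood, a usedUSD-style figure is just the prompt and completion token counts multiplied by the model's per-1K-token rates. The sketch below is purely illustrative and not part of the gpt-tokens API; the rates are the gpt-3.5-turbo-0613 prices mentioned above ($0.0015 per 1K input tokens) plus an assumed $0.002 per 1K output tokens, both of which may be outdated.

```javascript
// Illustrative only: how a cost estimate can be derived from token counts.
// These rates are assumptions based on gpt-3.5-turbo-0613 pricing and may
// not match what gpt-tokens currently reports.
function estimateUSD (promptTokens, completionTokens) {
  const INPUT_USD_PER_1K  = 0.0015 // input rate, per the comment above
  const OUTPUT_USD_PER_1K = 0.002  // assumed output rate
  return (promptTokens / 1000) * INPUT_USD_PER_1K +
         (completionTokens / 1000) * OUTPUT_USD_PER_1K
}

console.log(estimateUSD(1000, 500)) // ≈ 0.0025
```

In practice you should rely on the library's usedUSD property rather than hard-coding rates, since OpenAI's pricing changes over time.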

3. The library also exports a testGPTTokens function that verifies gpt-tokens' token counts against the live API for each supported model. It requires an OpenAI API key.

import { testGPTTokens } from 'gpt-tokens'
testGPTTokens('Your API Key Here').then()
// [1/11]: Testing gpt-3.5-turbo-0301...
// Pass!
// [2/11]: Testing gpt-3.5-turbo...
// Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613
// Pass!
// [3/11]: Testing gpt-3.5-turbo-0613...
// Pass!
// [4/11]: Testing gpt-3.5-turbo-16k...
// Warning: gpt-3.5-turbo-16k may update over time. Returning num tokens assuming gpt-3.5-turbo-16k-0613
// Pass!
// [5/11]: Testing gpt-3.5-turbo-16k-0613...
// Pass!
// [6/11]: Testing gpt-4...
// Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613
// Pass!
// [7/11]: Testing gpt-4-0314...
// Pass!
// [8/11]: Testing gpt-4-0613...
// Pass!
// [9/11]: Testing gpt-4-32k...
// Ignore model gpt-4-32k: Request failed with status code 404
// [10/11]: Testing gpt-4-32k-0314...
// Ignore model gpt-4-32k-0314: Request failed with status code 404
// [11/11]: Testing gpt-4-32k-0613...
// Ignore model gpt-4-32k-0613: Request failed with status code 404
// Test success!
// Done in 27.13s.
