
How to count Amazon Bedrock Claude tokens: step-by-step guide

Thomas Taylor

Counting Amazon Bedrock Claude tokens

Monitoring token consumption in Anthropic-based models can be straightforward and hassle-free. In fact, Anthropic offers a simple and effective method for accurately counting tokens using Python!

In this guide, I’ll show you how to count tokens for Amazon Bedrock Anthropic models.

Installing the Anthropic Bedrock Python Client

To begin, install the Anthropic Bedrock Python client using pip:

pip install anthropic-bedrock

For more information about the library, refer to its technical documentation.
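
To confirm the installation, import the package and print its version. This is a quick sanity check that assumes the package exposes a __version__ attribute, which is the convention the Anthropic SDKs follow:

import anthropic_bedrock

# The import should succeed after installation.
# __version__ is assumed to follow the Anthropic SDK convention.
print(anthropic_bedrock.__version__)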

Create the Amazon Bedrock connection

Instantiating the client is straightforward: use your AWS credentials to authenticate.

Since the AnthropicBedrock client uses botocore for authentication, you may use any AWS credential provider of your choosing.

Here is the code snippet from the documentation:

import anthropic_bedrock
from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock(
    # Authenticate by either providing the keys below or use the default AWS credential providers, such as
    # using ~/.aws/credentials or the "AWS_SECRET_ACCESS_KEY" and "AWS_ACCESS_KEY_ID" environment variables.
    aws_access_key="<access key>",
    aws_secret_key="<secret key>",
    # Temporary credentials can be used with aws_session_token.
    # Read more at https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html.
    aws_session_token="<session_token>",
    # aws_region changes the aws region to which the request is made. By default, we read AWS_REGION,
    # and if that's not present, we default to us-east-1. Note that we do not read ~/.aws/config for the region.
    aws_region="us-east-1",
)

For the sake of this guide, I set my AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. My client is using the default region of us-east-1:

import anthropic_bedrock
from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock()
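
Because the client does not read ~/.aws/config for the region, you can pass aws_region explicitly if your Bedrock models live somewhere other than us-east-1. A minimal sketch (us-west-2 is just an illustrative region):

from anthropic_bedrock import AnthropicBedrock

# Credentials still come from the default provider chain (environment
# variables here); only the request region is overridden explicitly.
client = AnthropicBedrock(aws_region="us-west-2")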

Listing model access

The Anthropic completions API requires a model ID. To determine which models are accessible on your AWS account, run the following command using the AWS CLI:

aws bedrock list-foundation-models --by-provider anthropic --query "modelSummaries[*].modelId"

Output:

[
    "anthropic.claude-instant-v1",
    "anthropic.claude-v1",
    "anthropic.claude-v2"
]
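
If you prefer to stay in Python, the same list can be retrieved with boto3. This sketch assumes boto3 is installed and your credentials and region are configured:

import boto3

# The "bedrock" client exposes the control-plane API, including
# list_foundation_models, which mirrors the CLI call above.
bedrock = boto3.client("bedrock", region_name="us-east-1")
response = bedrock.list_foundation_models(byProvider="anthropic")

for summary in response["modelSummaries"]:
    print(summary["modelId"])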

Calling Claude on Bedrock using the client

Using the anthropic.claude-instant-v1 model ID from above, here’s how to call it using the completions API:

import anthropic_bedrock
from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock()

completion = client.completions.create(
    model="anthropic.claude-instant-v1",
    prompt=f"{anthropic_bedrock.HUMAN_PROMPT} Tell me a funny cowboy joke {anthropic_bedrock.AI_PROMPT}",
    max_tokens_to_sample=2000,
)

print(completion.completion)

Output:

Here's one: Why don't cowboys like to eat beef jerky in the desert? Because it's too chewy!

In the code above, I’m calling the client.completions.create function and supplying it:

  1. The model ID as a string
  2. The prompt using the Anthropic provided constants: anthropic_bedrock.HUMAN_PROMPT and anthropic_bedrock.AI_PROMPT
  3. max_tokens_to_sample: the maximum number of tokens to generate before stopping (see the short sketch below)
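
To see max_tokens_to_sample in action, the same call can be repeated with a deliberately small limit; the completion is simply cut off once the limit is reached. A minimal sketch reusing the prompt from above:

import anthropic_bedrock
from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock()

# A deliberately small limit: the joke will be truncated mid-sentence.
short_completion = client.completions.create(
    model="anthropic.claude-instant-v1",
    prompt=f"{anthropic_bedrock.HUMAN_PROMPT} Tell me a funny cowboy joke {anthropic_bedrock.AI_PROMPT}",
    max_tokens_to_sample=10,
)

print(short_completion.completion)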

The prompt above resolved to this:

import anthropic_bedrock

print(
    f"{anthropic_bedrock.HUMAN_PROMPT} Tell me a funny cowboy joke {anthropic_bedrock.AI_PROMPT}"
)

Output:



Human: Tell me a funny cowboy joke

Assistant:

For more information about asynchronous or streamed responses, please refer to the technical documentation.
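
As a starting point, here is a minimal streaming sketch that follows the standard Anthropic SDK pattern of passing stream=True and iterating over the returned chunks; treat it as an assumption to verify against the documentation rather than the definitive interface:

import anthropic_bedrock
from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock()

# stream=True is assumed to yield incremental Completion chunks, matching
# the Anthropic SDK streaming interface.
stream = client.completions.create(
    model="anthropic.claude-instant-v1",
    prompt=f"{anthropic_bedrock.HUMAN_PROMPT} Tell me a funny cowboy joke {anthropic_bedrock.AI_PROMPT}",
    max_tokens_to_sample=2000,
    stream=True,
)

for chunk in stream:
    print(chunk.completion, end="", flush=True)
print()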

Counting tokens using the client

Using the example above, we can count token usage with the count_tokens helper method that the client provides.

from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock()
print(client.count_tokens("Hello world!"))

As of November 2023, Anthropic charges based on prompt tokens and completion tokens.

For tracking usage, you can leverage this method to estimate both prompt and completion tokens:

import anthropic_bedrock
from anthropic_bedrock import AnthropicBedrock

client = AnthropicBedrock()

prompt = f"{anthropic_bedrock.HUMAN_PROMPT} Tell me a funny cowboy joke {anthropic_bedrock.AI_PROMPT}"
prompt_tokens = client.count_tokens(prompt)
print(prompt_tokens)

result = client.completions.create(
    model="anthropic.claude-instant-v1",
    prompt=prompt,
    max_tokens_to_sample=2000,
)

completion = result.completion
completion_tokens = client.count_tokens(completion)
print(completion_tokens)

Output:

14
28
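
Since billing is split between prompt and completion tokens, these counts can be turned into a rough cost estimate. The per-1,000-token rates below are hypothetical placeholders, not actual Bedrock pricing; substitute the current prices for your model and region:

# Hypothetical per-1,000-token rates (USD) -- replace with the current
# Amazon Bedrock pricing for your model and region.
PROMPT_PRICE_PER_1K = 0.0008
COMPLETION_PRICE_PER_1K = 0.0024

prompt_tokens = 14       # from the output above
completion_tokens = 28   # from the output above

estimated_cost = (
    prompt_tokens / 1000 * PROMPT_PRICE_PER_1K
    + completion_tokens / 1000 * COMPLETION_PRICE_PER_1K
)
print(f"Estimated request cost: ${estimated_cost:.6f}")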

#Generative-Ai   #Python  
