
Using Claude 3 Opus with Anthropic API in Python

Thomas Taylor

The recent release of the Claude 3 family was the perfect opportunity to put the Claude API access I was recently granted to use.

The Claude 3 family boasts significant improvements over the prior generation, Claude 2. Anthropic's introductory post goes into greater detail.

In this post, we’ll explore how to invoke Claude 3 Opus using the Anthropic SDK.

Getting started

For the purposes of this post, we’ll leverage the Python Anthropic SDK.

pip3 install anthropic

To authorize requests, please export the ANTHROPIC_API_KEY:

export ANTHROPIC_API_KEY="sk..."

How to invoke Claude 3 Opus

The API for invoking Anthropic models is simple.

The model id we’ll use is the Opus release from Feb. 29th, 2024: claude-3-opus-20240229.

If you wish to target other models, please refer to Anthropic’s model page for more information.
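
For illustration, the other Claude 3 models follow the same naming convention. The ids below were current at the time of writing, but verify them against the model page before using them:

sonnet = "claude-3-sonnet-20240229"  # mid-tier Claude 3 model
haiku = "claude-3-haiku-20240307"    # fastest, most compact Claude 3 model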

from anthropic import Anthropic

model = "claude-3-opus-20240229"

# The client automatically reads ANTHROPIC_API_KEY from the environment
client = Anthropic()

message = client.messages.create(
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello! How are you?"}],
    model=model,
)

# message.content is a list of ContentBlock objects
print(message.content)
print(f"Input tokens: {message.usage.input_tokens}")
print(f"Output tokens: {message.usage.output_tokens}")

Output:

[ContentBlock(text="Hello! I'm doing well, thank you for asking. As an AI language model, I don't have feelings, but I'm functioning properly and ready to assist you with any questions or tasks you may have. How can I help you today?", type='text')]
Input tokens: 13
Output tokens: 53

In the example above, the Anthropic() client was instantiated and automatically picked up the ANTHROPIC_API_KEY environment variable exported earlier.
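
If you'd rather not rely on the environment variable, the key may also be passed explicitly when constructing the client. A minimal sketch (the placeholder key is not real):

from anthropic import Anthropic

# Pass the key directly instead of relying on ANTHROPIC_API_KEY
client = Anthropic(api_key="sk-ant-...")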

Then, using the Messages API, we created a new message.
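
The Messages API also accepts a top-level system prompt and alternating user/assistant turns. A minimal sketch reusing the client and model from above:

message = client.messages.create(
    max_tokens=1024,
    # system prompts are a top-level parameter, not a message role
    system="You are a concise assistant.",
    messages=[
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Paris."},
        {"role": "user", "content": "What is its population?"},
    ],
    model=model,
)
print(message.content)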

How to invoke Claude 3 Opus with streaming

As before, we have the option to stream output by passing stream=True to Anthropic's API.

from anthropic import Anthropic

model = "claude-3-opus-20240229"

client = Anthropic()

stream = client.messages.create(
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Hello! How are you?",
        }
    ],
    model=model,
    stream=True,
)

message = ""
input_tokens = None
output_tokens = None
for event in stream:
    match event.type:
        case "message_start":
            # the opening event carries the input token count
            input_tokens = event.message.usage.input_tokens
        case "content_block_start":
            message += event.content_block.text
        case "content_block_delta":
            # incremental chunks of generated text
            message += event.delta.text
        case "message_delta":
            # the final usage tally arrives on the closing delta
            output_tokens = event.usage.output_tokens
        case "content_block_stop" | "message_stop":
            ...

print(message)
print(f"Input tokens: {input_tokens}")
print(f"Output tokens: {output_tokens}")

Output:

Hello! As an AI language model, I don't have feelings, but I'm functioning well and ready to assist you. How can I help you today?
Input tokens: 13
Output tokens: 35

For the sake of the example, I concatenated the text-bearing events together to showcase the full output.
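
If manual event handling feels verbose, the SDK also provides a messages.stream() helper that extracts and accumulates the text deltas for you. A minimal sketch reusing the client and model from above:

with client.messages.stream(
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello! How are you?"}],
    model=model,
) as stream:
    # text_stream yields only the text deltas
    for text in stream.text_stream:
        print(text, end="", flush=True)
    # the fully assembled message, including usage, is available at the end
    final = stream.get_final_message()

print(f"\nOutput tokens: {final.usage.output_tokens}")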

For asynchronous eventing, please refer to the Anthropic SDK documentation.
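
As a minimal sketch, the AsyncAnthropic client mirrors the synchronous one and also reads ANTHROPIC_API_KEY from the environment:

import asyncio

from anthropic import AsyncAnthropic

client = AsyncAnthropic()


async def main() -> None:
    message = await client.messages.create(
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello! How are you?"}],
        model="claude-3-opus-20240229",
    )
    print(message.content)


asyncio.run(main())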

#python #generative-ai
