how.wtf

A step-by-step guide on how to use the Amazon Bedrock Converse API

· Thomas Taylor

On May 30th, 2024, Amazon announced the release of the Bedrock Converse API. This API is designed to provide a consistent experience for “conversing” with Amazon Bedrock models.

The API supports:

  1. Conversations with multiple turns
  2. System messages
  3. Tool use
  4. Image and text input

In this post, we’ll walk through how to use the Amazon Bedrock Converse API with the Claude 3 Haiku foundation model.

Please keep in mind that not all foundation models support every feature of the Converse API. For details on per-model support, see the Amazon Bedrock documentation.

Getting started

To get started, let’s install the boto3 package.

pip install boto3

Next, we’ll create a new Python script and import the necessary dependencies.

import boto3

client = boto3.client("bedrock-runtime")
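Note that boto3 resolves the AWS Region and credentials from your environment; the Region must be one where Amazon Bedrock and your chosen model are available (us-east-1 below is just an example):

```shell
# Pick a Region where Amazon Bedrock is available; us-east-1 is an example.
export AWS_DEFAULT_REGION=us-east-1
# Credentials come from `aws configure`, environment variables, or an IAM role.
```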

Step 1 - Starting a conversation

To start off simple, let’s send a single message to Claude.

import boto3

client = boto3.client("bedrock-runtime")

messages = [{"role": "user", "content": [{"text": "What is your name?"}]}]

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=messages,
)

print(response)

The response is a dictionary containing the model’s message along with metadata about the API call. For more information about the output format, please refer to the Amazon Bedrock documentation.

For the purposes of this post, I’ll print the response as JSON.

{
  "ResponseMetadata": {
    "RequestId": "6984dcf2-c6aa-4000-a3d6-22e34a43df12",
    "HTTPStatusCode": 200,
    "HTTPHeaders": {
      "date": "Sun, 02 Jun 2024 14:54:08 GMT",
      "content-type": "application/json",
      "content-length": "222",
      "connection": "keep-alive",
      "x-amzn-requestid": "6984dcf2-c6aa-4000-a3d6-22e34a43df12"
    },
    "RetryAttempts": 0
  },
  "output": {
    "message": {
      "role": "assistant",
      "content": [
        {
          "text": "My name is Claude. It's nice to meet you!"
        }
      ]
    }
  },
  "stopReason": "end_turn",
  "usage": {
    "inputTokens": 12,
    "outputTokens": 15,
    "totalTokens": 27
  },
  "metrics": {
    "latencyMs": 560
  }
}

To retrieve the contents of the message, we can access the output key in the response.

import boto3

client = boto3.client("bedrock-runtime")

messages = [{"role": "user", "content": [{"text": "What is your name?"}]}]

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=messages,
)

ai_message = response["output"]["message"]
output_text = ai_message["content"][0]["text"]
print(output_text)

Output:

My name is Claude. It's nice to meet you!
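The converse call also accepts an optional inferenceConfig for generation settings such as maxTokens, temperature, topP, and stopSequences. A sketch of the keyword arguments that would be passed as client.converse(**request) — the values here are illustrative, not recommendations:

```python
# Illustrative keyword arguments for client.converse(**request).
request = {
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
    "messages": [{"role": "user", "content": [{"text": "What is your name?"}]}],
    "inferenceConfig": {
        "maxTokens": 512,     # cap on generated tokens
        "temperature": 0.5,   # lower values are more deterministic
        "topP": 0.9,
    },
}
```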

Step 2 - Continuing a conversation

Let’s continue the conversation by appending the AI’s message to the original list of messages. This will allow us to have a multi-turn conversation.

import boto3

client = boto3.client("bedrock-runtime")

messages = [{"role": "user", "content": [{"text": "What is your name?"}]}]

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=messages,
)

ai_message = response["output"]["message"]
messages.append(ai_message)

# Let's ask another question

messages.append({"role": "user", "content": [{"text": "Can you help me?"}]})
response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=messages,
)

print(response["output"]["message"]["content"][0]["text"])

Output:

Yes, I'd be happy to try and help you with whatever you need assistance with. What can I help you with?
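This append-then-converse pattern can be wrapped in a small helper class. The sketch below is my own convenience wrapper, not part of the API; FakeClient stands in for boto3.client("bedrock-runtime") so the example runs offline:

```python
class Conversation:
    """Keeps the running message list and records each assistant reply."""

    def __init__(self, client, model_id):
        self.client = client
        self.model_id = model_id
        self.messages = []

    def say(self, text):
        self.messages.append({"role": "user", "content": [{"text": text}]})
        response = self.client.converse(modelId=self.model_id, messages=self.messages)
        ai_message = response["output"]["message"]
        self.messages.append(ai_message)
        return ai_message["content"][0]["text"]


class FakeClient:
    """Stand-in for boto3.client("bedrock-runtime") so the sketch runs offline."""

    def converse(self, modelId, messages):
        return {
            "output": {
                "message": {"role": "assistant", "content": [{"text": "Hello!"}]}
            }
        }


chat = Conversation(FakeClient(), "anthropic.claude-3-haiku-20240307-v1:0")
print(chat.say("What is your name?"))  # Hello!
print(len(chat.messages))              # 2: one user turn, one assistant turn
```

With a real bedrock-runtime client in place of FakeClient, each say call continues the same multi-turn conversation.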

Step 3 - Using images

The Amazon Bedrock Converse API supports images as input. Let’s send an image to Claude and see how it responds. I’ll download an image of a cat from Wikipedia and send it to Claude.

For this example, I used the requests library to download the image. If you don’t have it installed, install it with pip:

pip install requests

import boto3
import requests

client = boto3.client("bedrock-runtime")

messages = [{"role": "user", "content": [{"text": "What is your name?"}]}]

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=messages,
)

ai_message = response["output"]["message"]
messages.append(ai_message)

messages.append({"role": "user", "content": [{"text": "Can you help me?"}]})
response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=messages,
)
ai_message = response["output"]["message"]
messages.append(ai_message)

image_bytes = requests.get(
    "https://upload.wikimedia.org/wikipedia/commons/4/4d/Cat_November_2010-1a.jpg"
).content
messages.append(
    {
        "role": "user",
        "content": [
            {"text": "What is in this image?"},
            {"image": {"format": "jpeg", "source": {"bytes": image_bytes}}},
        ],
    }
)
response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=messages,
)

ai_message = response["output"]["message"]
print(ai_message)

Output:

{
  "role": "assistant",
  "content": [
    {
      "text": "The image shows a domestic cat. The cat appears to be a tabby cat with a striped coat pattern. The cat is sitting upright and its green eyes are clearly visible, with a focused and alert expression. The background suggests an outdoor, snowy environment, with some blurred branches or vegetation visible behind the cat."
    }
  ]
}

As you can see, the AI was able to identify the image as a cat and provide a detailed description of the image within a conversational context.
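If the image already lives on disk, you can read the bytes directly instead of downloading them; the Converse API expects raw bytes under source.bytes (boto3 handles the encoding), not a base64 string. cat.jpg is a hypothetical path — the sketch writes a few placeholder bytes so it runs end to end:

```python
from pathlib import Path

# Placeholder bytes so the sketch runs; in practice this is a real JPEG file.
Path("cat.jpg").write_bytes(b"\xff\xd8\xff\xe0placeholder")

with open("cat.jpg", "rb") as f:
    image_bytes = f.read()

# The same content block shape as the requests-based example above.
image_block = {"image": {"format": "jpeg", "source": {"bytes": image_bytes}}}
```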

Step 4 - Using a single tool

For this section, let’s start a new conversation with Claude and provide tools it can use.

import boto3

client = boto3.client("bedrock-runtime")

tools = [
    {
        "toolSpec": {
            "name": "get_weather",
            "description": "Get the current weather in a given location",
            "inputSchema": {
                "json": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The unit of temperature, either 'celsius' or 'fahrenheit'",
                        },
                    },
                    "required": ["location"],
                }
            },
        }
    },
]
messages = [
    {
        "role": "user",
        "content": [{"text": "What is the weather like right now in New York?"}],
    }
]
response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=messages,
    toolConfig={"tools": tools},
)

print(response["output"])

Output:

{
  "message": {
    "role": "assistant",
    "content": [
      {
        "text": "Okay, let me check the current weather for New York:"
      },
      {
        "toolUse": {
          "toolUseId": "tooluse_rRwaOoldTeiRiDZhTadP0A",
          "name": "get_weather",
          "input": {
            "location": "New York, NY",
            "unit": "fahrenheit"
          }
        }
      }
    ]
  }
}

The output includes a toolUse object that indicates to us, the developers, that the AI is using the get_weather tool to fetch the current weather in New York. We must now fulfill the tool request by responding with the weather information.

But first, let’s build a simple router that handles the tool request and returns the weather information.

def get_weather(location: str, unit: str = "fahrenheit") -> dict:
    return {"temperature": "78"}

def tool_router(tool_name, input):
    match tool_name:
        case "get_weather":
            return get_weather(input["location"], input.get("unit", "fahrenheit"))
        case _:
            raise ValueError(f"Unknown tool: {tool_name}")

Now, let’s update the code to handle the tool request and respond with the weather information.

import boto3

def get_weather(location: str, unit: str = "fahrenheit") -> dict:
    return {"temperature": "78"}

def tool_router(tool_name, input):
    match tool_name:
        case "get_weather":
            return get_weather(input["location"], input.get("unit", "fahrenheit"))
        case _:
            raise ValueError(f"Unknown tool: {tool_name}")

client = boto3.client("bedrock-runtime")

tools = [
    {
        "toolSpec": {
            "name": "get_weather",
            "description": "Get the current weather in a given location",
            "inputSchema": {
                "json": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The unit of temperature, either 'celsius' or 'fahrenheit'",
                        },
                    },
                    "required": ["location"],
                }
            },
        }
    },
]
messages = [
    {
        "role": "user",
        "content": [{"text": "What is the weather like right now in New York?"}],
    }
]
response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=messages,
    toolConfig={"tools": tools},
)

ai_message = response["output"]["message"]
messages.append(ai_message)

if response["stopReason"] == "tool_use":
    contents = response["output"]["message"]["content"]
    for c in contents:
        if "toolUse" not in c:
            continue

        tool_use = c["toolUse"]
        tool_id = tool_use["toolUseId"]
        tool_name = tool_use["name"]
        input = tool_use["input"]

        tool_result = {"toolUseId": tool_id}
        try:
            output = tool_router(tool_name, input)
            if isinstance(output, dict):
                tool_result["content"] = [{"json": output}]
            elif isinstance(output, str):
                tool_result["content"] = [{"text": output}]
            # Add more cases, such as images, if needed
            else:
                raise ValueError(f"Unsupported output type: {type(output)}")
        except Exception as e:
            tool_result["content"] = [{"text": f"An unknown error occurred: {str(e)}"}]
            tool_result["status"] = "error"

        message = {"role": "user", "content": [{"toolResult": tool_result}]}
        messages.append(message)

    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=messages,
        toolConfig={"tools": tools},
    )

print(response["output"])

Output:

{
  "message": {
    "role": "assistant",
    "content": [
      {
        "text": "According to the weather data, the current temperature in New York, NY is 78 degrees Fahrenheit."
      }
    ]
  }
}

Great! We have successfully responded to the AI tool request and provided the weather information for New York.

Step 5 - Using multiple tools

For reference, I’m adapting the Anthropic AI Tool examples from their documentation to the Bedrock Converse API.

In the previous example, we only used one tool to fetch the weather information. However, we can use multiple tools in a single conversation.

Let’s add another tool to the conversation to fetch the current time and introduce a loop to handle multiple tool requests.

import boto3

def get_weather(location: str, unit: str = "fahrenheit") -> dict:
    return {"temperature": "78"}

def get_time(timezone: str) -> str:
    return "12:00PM"

def tool_router(tool_name, input):
    match tool_name:
        case "get_weather":
            return get_weather(input["location"], input.get("unit", "fahrenheit"))
        case "get_time":
            return get_time(input["timezone"])
        case _:
            raise ValueError(f"Unknown tool: {tool_name}")

client = boto3.client("bedrock-runtime")

tools = [
    {
        "toolSpec": {
            "name": "get_weather",
            "description": "Get the current weather in a given location",
            "inputSchema": {
                "json": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The unit of temperature, either 'celsius' or 'fahrenheit'",
                        },
                    },
                    "required": ["location"],
                }
            },
        }
    },
    {
        "toolSpec": {
            "name": "get_time",
            "description": "Get the current time in a given timezone",
            "inputSchema": {
                "json": {
                    "type": "object",
                    "properties": {
                        "timezone": {
                            "type": "string",
                            "description": "The IANA time zone name, e.g. America/Los_Angeles",
                        }
                    },
                    "required": ["timezone"],
                }
            },
        }
    },
]
messages = [
    {
        "role": "user",
        "content": [
            {
                "text": "What is the weather like right now in New York and what time is it there?"
            }
        ],
    }
]
response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=messages,
    toolConfig={"tools": tools},
)

ai_message = response["output"]["message"]
messages.append(ai_message)

tool_use_count = 0
while response["stopReason"] == "tool_use":
    contents = response["output"]["message"]["content"]
    for c in contents:
        if "toolUse" not in c:
            continue

        tool_use = c["toolUse"]
        tool_id = tool_use["toolUseId"]
        tool_name = tool_use["name"]
        input = tool_use["input"]

        tool_result = {"toolUseId": tool_id}
        try:
            output = tool_router(tool_name, input)
            if isinstance(output, dict):
                tool_result["content"] = [{"json": output}]
            elif isinstance(output, str):
                tool_result["content"] = [{"text": output}]
            # Add more cases such as images if needed
            else:
                raise ValueError(f"Unsupported output type: {type(output)}")
        except Exception as e:
            tool_result["content"] = [
                {"text": f"An unknown error occurred: {str(e)}"}
            ]
            tool_result["status"] = "error"

        message = {"role": "user", "content": [{"toolResult": tool_result}]}
        messages.append(message)

    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=messages,
        toolConfig={"tools": tools},
    )
    ai_message = response["output"]["message"]
    messages.append(ai_message)
    tool_use_count += 1

print(response["output"])
print(f"Tool use count: {tool_use_count}")

Output:

{
  "message": {
    "role": "assistant",
    "content": [
      {
        "text": "The current time in New York is 12:00 PM.\n\nSo in summary, the weather in New York right now is 78 degrees Celsius, and the time is 12:00 PM."
      }
    ]
  }
}

Tool use count: 2

We have successfully responded to the AI’s tool requests and provided the weather and time information for New York.
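The dispatch logic inside the loop can also be factored into a standalone helper that turns an assistant message’s toolUse blocks into toolResult messages. This is my own refactoring sketch, not part of the API; the example router is stubbed so the snippet runs without a Bedrock call:

```python
def build_tool_result_messages(ai_message, router):
    """Convert each toolUse block into a user message carrying a toolResult."""
    results = []
    for block in ai_message["content"]:
        if "toolUse" not in block:
            continue
        tool_use = block["toolUse"]
        tool_result = {"toolUseId": tool_use["toolUseId"]}
        try:
            output = router(tool_use["name"], tool_use["input"])
            if isinstance(output, dict):
                tool_result["content"] = [{"json": output}]
            else:
                tool_result["content"] = [{"text": str(output)}]
        except Exception as e:
            tool_result["content"] = [{"text": f"An unknown error occurred: {e}"}]
            tool_result["status"] = "error"
        results.append({"role": "user", "content": [{"toolResult": tool_result}]})
    return results


# Example with a stubbed router in place of tool_router:
ai_message = {
    "role": "assistant",
    "content": [
        {"toolUse": {"toolUseId": "t1", "name": "get_time",
                     "input": {"timezone": "America/New_York"}}}
    ],
}
messages = build_tool_result_messages(ai_message, lambda name, input: "12:00PM")
print(messages[0]["content"][0]["toolResult"])
```

Each returned message can then be appended to the conversation before calling converse again.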

Conclusion

In this post, we learned how to use the Amazon Bedrock Converse API to augment conversations with AI models. Not only did we leverage text and images, but we also used tools to simulate fetching external data.

The Amazon Bedrock Converse API is a powerful tool that can be used to build conversational AI applications!

#Generative-Ai   #Python  
