Migrating from LangChain’s Experimental ChatAnthropicTools to Official Claude API Tool-Calling: A Developer’s Guide

As LLM capabilities rapidly evolve, the tools and frameworks we use to interact with them must adapt accordingly. With Anthropic’s Claude API now officially supporting tool-calling, LangChain has deprecated its experimental ChatAnthropicTools class that previously served as a workaround for this functionality. This guide will walk you through migrating from the experimental implementation to the official API support.

Understanding the Deprecation

According to LangChain’s documentation, ChatAnthropicTools has been deprecated since version 0.1.5. The reason is simple: tool-calling is now officially supported by the Anthropic API, making this workaround unnecessary. The class will remain available until langchain-anthropic==1.0.0 to ensure a smooth transition period.

The deprecation notice states:

# Deprecated since version 0.1.5: Tool-calling is now officially supported by the Anthropic API 
# so this workaround is no longer needed. Use ChatAnthropic instead. 
# It will not be removed until langchain-anthropic==1.0.0.

Migration Steps

1. Update Your Imports

Previously, you might have imported the experimental class like this:

from langchain_anthropic.experimental import ChatAnthropicTools

Now, simply use the standard ChatAnthropic class:

from langchain_anthropic import ChatAnthropic

2. Update Your Model Initialization

Replace any instances of ChatAnthropicTools with ChatAnthropic:

# Old approach
model = ChatAnthropicTools(
    model="claude-3-opus-20240229",
    anthropic_api_key="your-api-key"
)

# New approach
model = ChatAnthropic(
    model="claude-3-opus-20240229",
    anthropic_api_key="your-api-key"
)

3. Update Tool-Calling Code

The standard ChatAnthropic class now supports tool-calling through the bind_tools method. Here’s how to use it:

from langchain_core.messages import HumanMessage

# Define your tools in Anthropic's native format
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather in a location",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g., San Francisco, CA"
                }
            },
            "required": ["location"]
        }
    }
]

# Bind tools to the model
model_with_tools = model.bind_tools(tools)

# Use the model with tools
response = model_with_tools.invoke([
    HumanMessage(content="What's the weather like in San Francisco?")
])

Key Features of the Official Implementation

The official implementation in ChatAnthropic provides several advantages over the experimental version:

Tool Definition Flexibility

The bind_tools method accepts various tool definition formats:

tools (Sequence[dict[str, Any] | type | Callable | BaseTool]):
    A list of tool definitions to bind to this chat model.
    Supports Anthropic format tool schemas and any tool definition
    handled by convert_to_openai_tool().

This means you can define tools using Anthropic’s native format, LangChain’s BaseTool classes, or even Python functions and classes that can be converted to tool schemas.
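For illustration, here is the get_weather tool from above expressed as a decorated Python function and as a Pydantic schema. Both are sketches of equivalent definitions under that flexibility, not code taken from the LangChain docs:

from pydantic import BaseModel, Field
from langchain_core.tools import tool

# Option 1: a plain Python function, wrapped by the @tool decorator
@tool
def get_weather(location: str) -> str:
    """Get the current weather in a location."""
    return f"It is sunny in {location}."  # stub implementation for the demo

# Option 2: a Pydantic model that only describes the input schema
class GetWeather(BaseModel):
    """Get the current weather in a location."""
    location: str = Field(description="The city and state, e.g., San Francisco, CA")

# Any of these (or the raw Anthropic dict shown earlier) can be bound
model_with_tools = model.bind_tools([get_weather])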

Tool Choice Control

You can specify which tool(s) the model should use:

# Automatically select a tool (or no tool)
model_with_tools = model.bind_tools(tools, tool_choice="auto")

# Force the model to use a specific tool
model_with_tools = model.bind_tools(tools, tool_choice="get_weather")

# Force the model to use any tool
model_with_tools = model.bind_tools(tools, tool_choice="any")
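Forcing a specific tool guarantees that the response contains a call to that tool, which downstream code can rely on. A quick sketch using the get_weather tool from earlier:

forced_model = model.bind_tools(tools, tool_choice="get_weather")
response = forced_model.invoke([
    HumanMessage(content="How's the weather in Paris?")
])

# With a forced tool_choice the model must emit a get_weather call
assert response.tool_calls[0]["name"] == "get_weather"
print(response.tool_calls[0]["args"])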

Parallel Tool Calls

Claude 3 supports parallel tool calling, which the official implementation can leverage:

# Enable parallel tool calls (default)
model_with_tools = model.bind_tools(tools, parallel_tool_calls=True)

# Disable parallel tool calls
model_with_tools = model.bind_tools(tools, parallel_tool_calls=False)
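When parallel calls are enabled and the prompt asks for several things at once, a single response can carry multiple entries in tool_calls, so handle them as a list rather than assuming a single entry. A small sketch using the get_weather tool from earlier:

model_with_tools = model.bind_tools(tools, parallel_tool_calls=True)
response = model_with_tools.invoke([
    HumanMessage(content="What's the weather in San Francisco and in New York?")
])

# One tool call per requested location, all in the same response
for tool_call in response.tool_calls:
    print(f"{tool_call['name']} -> {tool_call['args']}")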

Example: Migrating a Complete Application

Let’s walk through migrating a complete application from the experimental to the official implementation:

# Before migration
from langchain_anthropic.experimental import ChatAnthropicTools
from langchain.schema import HumanMessage, SystemMessage

# Initialize the model
model = ChatAnthropicTools(
    model="claude-3-opus-20240229",
    anthropic_api_key="your-api-key"
)

# Define tools
calculator_tool = {
    "name": "calculator",
    "description": "Evaluate mathematical expressions",
    "input_schema": {
        "type": "object",
        "properties": {
            "expression": {
                "type": "string",
                "description": "The mathematical expression to evaluate"
            }
        },
        "required": ["expression"]
    }
}

# Function to handle calculator tool calls
def handle_calculator(expression):
    try:
        return str(eval(expression))
    except Exception as e:
        return f"Error: {str(e)}"

# Conversation with tool use
messages = [
    SystemMessage(content="You are a helpful assistant that can solve math problems."),
    HumanMessage(content="What is 123 * 456?")
]

response = model.invoke(messages, tools=[calculator_tool])

# Process tool calls
if hasattr(response, "tool_calls") and response.tool_calls:
    for tool_call in response.tool_calls:
        if tool_call["name"] == "calculator":
            result = handle_calculator(tool_call["input"]["expression"])
            messages.append(response)
            messages.append(HumanMessage(content=f"Tool result: {result}"))
            final_response = model.invoke(messages)
            print(final_response.content)

After migration:

# After migration
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.tools import tool

# Initialize the model
model = ChatAnthropic(
    model="claude-3-opus-20240229",
    anthropic_api_key="your-api-key"
)

# Define tool using LangChain's @tool decorator
@tool
def calculator(expression: str) -> str:
    """Evaluate mathematical expressions."""
    try:
        # eval is unsafe on untrusted input; fine only for a demo
        return str(eval(expression))
    except Exception as e:
        return f"Error: {str(e)}"

# Bind the tool to the model
model_with_tools = model.bind_tools([calculator])

# Conversation with tool use
messages = [
    SystemMessage(content="You are a helpful assistant that can solve math problems."),
    HumanMessage(content="What is 123 * 456?")
]

# Invoke the model; if it decides to use the tool, the response
# carries structured tool_calls that we execute and feed back
response = model_with_tools.invoke(messages)
messages.append(response)

for tool_call in response.tool_calls:
    # Invoking a tool with a tool_call dict returns a ready-made ToolMessage
    messages.append(calculator.invoke(tool_call))

final_response = model_with_tools.invoke(messages)
print(final_response.content)

Understanding the Response Format

The response format from Claude’s official tool-calling API includes several key components:

# Example response structure
response = model_with_tools.invoke([HumanMessage(content="What's 123 * 456?")])

# Check whether the model decided to call tools
if response.tool_calls:
    print("Tool calls:")
    for tool_call in response.tool_calls:
        print(f"  Tool: {tool_call['name']}")
        print(f"  Args: {tool_call['args']}")
        print(f"  ID:   {tool_call['id']}")

# content holds the model's own text; tool results only appear after
# you execute the tools and send the results back in a follow-up call
print(response.content)

# You can also access metadata about token usage
if hasattr(response, "response_metadata") and response.response_metadata:
    print(f"Input tokens: {response.response_metadata['usage']['input_tokens']}")
    print(f"Output tokens: {response.response_metadata['usage']['output_tokens']}")

Token Counting with the Official Implementation

Token counting is an important consideration when working with LLMs. The official implementation provides methods for counting tokens:

# Count tokens in text
token_count = model.get_num_tokens("Hello, world!")
print(f"Token count: {token_count}")

# Count tokens in messages (including with tools)
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What's the weather like in San Francisco?")
]
# weather_tool: the Anthropic-format get_weather dict defined earlier
token_count = model.get_num_tokens_from_messages(messages, tools=[weather_tool])
print(f"Message token count: {token_count}")

As noted in the documentation, the token counting implementation changed in version 0.3.0 to use Anthropic’s token counting API for more accurate results.
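If you need the same count without going through LangChain, the underlying endpoint is also exposed in Anthropic's own Python SDK. A minimal sketch, assuming the anthropic package's messages.count_tokens method:

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

count = client.messages.count_tokens(
    model="claude-3-opus-20240229",
    messages=[{"role": "user", "content": "What's the weather like in San Francisco?"}],
)
print(count.input_tokens)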

Structured Output with the Official Implementation

If you were using ChatAnthropicTools for structured outputs, the official implementation supports this through the with_structured_output method:

from pydantic import BaseModel, Field

# Define your output schema as a Pydantic model
class WeatherInfo(BaseModel):
    temperature: float = Field(description="Temperature in Celsius")
    condition: str = Field(description="Weather condition (e.g., sunny, rainy)")
    humidity: float = Field(description="Humidity percentage")

# Create a model with structured output
structured_model = model.with_structured_output(WeatherInfo)

# Use the model to get structured data
result = structured_model.invoke("What's the weather like in San Francisco?")
print(f"Temperature: {result.temperature}°C")
print(f"Condition: {result.condition}")
print(f"Humidity: {result.humidity}%")

Conclusion

Migrating from LangChain’s experimental ChatAnthropicTools to the official implementation is straightforward and brings several advantages, including better integration with Claude’s native capabilities, more flexible tool definitions, and improved token counting.

By following the steps outlined in this guide, you can ensure a smooth transition to the official tool-calling support in the Claude API. This migration not only future-proofs your code but also gives you access to the latest features and improvements in both Anthropic’s Claude and LangChain’s integration with it.

Remember that the experimental class will remain available until langchain-anthropic==1.0.0, giving you ample time to migrate your applications. However, it’s recommended to update your code sooner rather than later to take advantage of the official implementation’s features and ensure compatibility with future updates.

This post was originally written in my native language and then translated using an LLM. I apologize if there are any grammatical inconsistencies.
