TengineAI

Python Quick Start

Connect to TengineAI programmatically using the Anthropic Python SDK.

Note: TengineAI is accessed through MCP-compatible model SDKs. This guide uses the Anthropic Python SDK as a concrete example. No TengineAI-specific SDK is required.


Prerequisites

  • Python 3.8+
  • An Anthropic API key
  • A TengineAI project with at least one tool enabled

Installation

pip install "anthropic>=0.25.0"

Connect to TengineAI

from anthropic import Anthropic

client = Anthropic(
    api_key="YOUR_ANTHROPIC_API_KEY",
    base_url="https://app.tengine.ai/mcp"
)

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "What are the trending posts in r/technology?"
        }
    ],
    # Tool definitions are injected server-side by TengineAI,
    # so no `tools` parameter is passed here.
)

print(response.content)

Note: Any MCP-compatible Claude model can be used.
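`response.content` is a list of content blocks rather than a plain string, so printing it directly shows the list's repr. A small helper can pull out just the text (a sketch; it assumes each block exposes `type` and, for text blocks, a `.text` attribute, as the Anthropic SDK's content blocks do):

```python
def extract_text(blocks):
    """Join the text of all text-type blocks in a response's content list."""
    return "".join(
        block.text for block in blocks
        if getattr(block, "type", None) == "text"
    )
```

You would then call `print(extract_text(response.content))` instead of printing the raw list.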


What Actually Happened

This code executed a multi-step flow:

  1. Client connected to TengineAI
    The base_url parameter routes requests to TengineAI's MCP server instead of Anthropic's default API.

  2. TengineAI returned the project's enabled tools as MCP tool definitions
    Based on your project configuration, TengineAI sent a list of tools the model can use (e.g., reddit-get_trending).

  3. Claude selected a tool
    The model analyzed the user prompt and decided reddit-get_trending was the correct tool to call.

  4. TengineAI authenticated and executed
    TengineAI used stored OAuth credentials to call Reddit's API on behalf of the model.

  5. Results were returned to Claude
    Reddit's response was formatted and sent back to the model as tool output.

  6. Claude generated the final response
    The model synthesized the tool output into a human-readable answer.

At no point did the client application execute API calls directly. The model never saw Reddit credentials. Authentication, execution, and result formatting happened server-side.
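The server-side cycle in steps 3 through 6 can be pictured as a toy, no-network simulation. Every name below is a hypothetical stand-in for illustration, not a TengineAI or Anthropic API:

```python
class ToyModel:
    """Stand-in for the model; real selection and synthesis happen inside Claude."""

    def select_tool(self, prompt, tools):
        # Step 3: pick the tool whose service prefix appears in the prompt.
        return next(t for t in tools if t.split("-")[0] in prompt.lower())

    def synthesize(self, prompt, tool_output):
        # Step 6: turn raw tool output into a readable answer.
        return f"Top results: {tool_output}"


def run_cycle(prompt, tools, execute_tool, model):
    tool_name = model.select_tool(prompt, tools)     # step 3: model picks a tool
    tool_output = execute_tool(tool_name)            # steps 4-5: platform executes it
    return model.synthesize(prompt, tool_output)     # step 6: model writes the answer
```

In the real flow, `execute_tool` is where TengineAI applies stored OAuth credentials, which is why they never reach the client or the model.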


Tool Discovery

List available tools for a project:

tools_response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "List the tools available for this project."
        }
    ],
    # Tool definitions are injected server-side by TengineAI,
    # so no `tools` parameter is passed here.
)

print(tools_response.content)

Claude will enumerate all tools TengineAI exposed for this project.


Handling Tool Execution

If a tool requires user-specific OAuth (e.g., Gmail), the first request will return an authorization URL:

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": "Send an email to hello@example.com"
        }
    ],
    # Tool definitions are injected server-side by TengineAI,
    # so no `tools` parameter is passed here.
)

# If OAuth is required, response will include authorization instructions
print(response.content)

OAuth authorization is completed in the TengineAI dashboard, not in the client: follow the flow there, then retry the request.
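To make the retry step programmatic, the client can check the reply for an authorization URL before retrying. This is an illustrative heuristic only; the exact shape of TengineAI's authorization prompt is not specified here:

```python
import re
from typing import Optional


def find_authorization_url(text: str) -> Optional[str]:
    """Return the first https:// URL in a reply, or None if there isn't one.

    Heuristic sketch: assumes an OAuth-required reply includes the
    authorization URL inline in the text.
    """
    match = re.search(r"https://\S+", text)
    return match.group(0) if match else None
```

If this returns a URL, surface it to the user, wait for them to complete the dashboard flow, and then re-send the original request.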


Error Handling

TengineAI surfaces failures as standard HTTP status codes:

import anthropic

try:
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[
            {
                "role": "user",
                "content": "Fetch my calendar events"
            }
        ]
        # Tool definitions are injected server-side by TengineAI.
    )
    print(response.content)
except anthropic.APIStatusError as e:
    # HTTP-level failures (401, 403, 429, 500, ...) raise APIStatusError subclasses
    print(f"Error {e.status_code}: {e}")
except anthropic.APIConnectionError as e:
    print(f"Connection error: {e}")

Common errors:

  • 401 Unauthorized: Invalid API key
  • 403 Forbidden: Tool not enabled for this project
  • 429 Too Many Requests: Rate limit exceeded
  • 500 Internal Server Error: Tool execution failed
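With the Anthropic SDK, these HTTP failures arrive as `anthropic.APIStatusError` (or one of its subclasses), which carries a `status_code` attribute. A small mapping can turn the codes above into actionable hints; the hint wording here is ours, not TengineAI's:

```python
def describe_error(status_code: int) -> str:
    """Translate the status codes listed above into actionable hints (a sketch)."""
    hints = {
        401: "Unauthorized: check your Anthropic API key",
        403: "Forbidden: enable the tool for this project in the TengineAI dashboard",
        429: "Too Many Requests: back off and retry later",
        500: "Internal Server Error: the tool execution failed server-side",
    }
    return hints.get(status_code, f"Unexpected status code: {status_code}")
```

In an exception handler this would be used as `print(describe_error(e.status_code))`.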

Environment Variables

For production, use environment variables:

import os
from anthropic import Anthropic

client = Anthropic(
    api_key=os.environ.get("ANTHROPIC_API_KEY"),
    base_url="https://app.tengine.ai/mcp"
)
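Because `os.environ.get` silently returns `None` when a variable is unset, a fail-fast check at startup gives a clearer error than a later authentication failure (a small convenience sketch; `require_env` is our helper name, not part of any SDK):

```python
import os


def require_env(name: str) -> str:
    """Return the value of an environment variable, failing fast if it is unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value
```

The client would then be constructed with `api_key=require_env("ANTHROPIC_API_KEY")`.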

Next Steps