
## What this sample demonstrates

An Agent Framework agent hosted using the Responses protocol.

## How It Works

### Model Integration

The agent uses `FoundryChatClient` from the Agent Framework to create a Responses client from the project endpoint and model deployment. It supports both streaming (SSE events) and non-streaming (JSON) response modes.
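To make the streaming mode concrete, here is a minimal sketch of how a client could split an SSE stream into events and reassemble the text deltas. The event-stream framing (blank-line-separated `event:`/`data:` fields) follows the standard SSE format; the sample payloads below are illustrative, not captured from this server.

```python
import json

def parse_sse(raw: str):
    """Split a Server-Sent Events stream into (event, data) pairs.

    Events are separated by a blank line; each line within an event
    is a "field: value" pair.
    """
    events = []
    for chunk in raw.strip().split("\n\n"):
        event, data_lines = "message", []
        for line in chunk.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        events.append((event, "\n".join(data_lines)))
    return events

# Illustrative stream; the exact event names a given server emits
# come from the Responses protocol.
sample = (
    "event: response.output_text.delta\n"
    'data: {"delta": "Hel"}\n'
    "\n"
    "event: response.output_text.delta\n"
    'data: {"delta": "lo"}\n'
    "\n"
)

# Concatenate the text deltas into the full reply.
text = "".join(
    json.loads(data)["delta"]
    for event, data in parse_sse(sample)
    if event == "response.output_text.delta"
)
```

In non-streaming mode the same content arrives as a single JSON body instead of incremental delta events.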

See `main.py` for the full implementation.

### Agent Hosting

The agent is hosted with the Agent Framework's `ResponsesHostServer`, which exposes a REST API endpoint compatible with the OpenAI Responses protocol.

## Running the Agent Host

Follow the instructions in the Running the Agent Host Locally section of the README in the parent directory to run the agent host.

## Interacting with the agent

Depending on how you run the agent host, you can invoke the agent with `curl` (or `Invoke-WebRequest` in PowerShell) or `azd`; refer to the parent README for details. This README provides sample queries you can send to the agent.

Send a POST request to the server with a JSON body containing an `input` field to interact with the agent. For example:

```shell
curl -X POST http://localhost:8088/responses \
  -H "Content-Type: application/json" \
  -d '{"input": "Hi"}'
```

The server responds with a JSON object containing the response text and a response ID. Use that ID to continue the conversation in subsequent requests.
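A client needs to pull both pieces out of that JSON body. The sketch below assumes the standard OpenAI Responses shape (a top-level `id` plus an `output` array of message items with `output_text` parts); verify the field layout against this server's actual output before relying on it.

```python
def extract_reply(response: dict) -> tuple[str, str]:
    """Return (assistant_text, response_id) from a Responses-style JSON body.

    The field layout assumed here follows the OpenAI Responses protocol;
    check it against the real server output.
    """
    text_parts = []
    for item in response.get("output", []):
        if item.get("type") == "message":
            for part in item.get("content", []):
                if part.get("type") == "output_text":
                    text_parts.append(part.get("text", ""))
    return "".join(text_parts), response["id"]

# Illustrative response body, not captured from a real run.
sample = {
    "id": "resp_123",
    "output": [
        {
            "type": "message",
            "content": [{"type": "output_text", "text": "Hello!"}],
        },
    ],
}

text, response_id = extract_reply(sample)
```

The returned `response_id` is what you feed into the next request's `previous_response_id` field.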

### Multi-turn conversation

To have a multi-turn conversation with the agent, include the previous response ID in the request body. For example:

```shell
curl -X POST http://localhost:8088/responses \
  -H "Content-Type: application/json" \
  -d '{"input": "How are you?", "previous_response_id": "REPLACE_WITH_PREVIOUS_RESPONSE_ID"}'
```

## Deploying the Agent to Foundry

To host the agent on Foundry, follow the instructions in the Deploying the Agent to Foundry section of the README in the parent directory.