This example demonstrates three ways to handle streaming responses from async agents: manually iterating over the stream, using `aprint_response`, and using `apprint_run_response`.
Code
```python
import asyncio

from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.utils.pprint import apprint_run_response

agent = Agent(
    model=OpenAIChat(id="gpt-5-mini"),
)


# Manually iterate over the streamed response events
async def streaming():
    async for response in agent.arun(input="Tell me a joke.", stream=True):
        print(response.content, end="", flush=True)


# Let the agent print the streamed response for you
async def streaming_print():
    await agent.aprint_response(input="Tell me a joke.", stream=True)


# Pretty-print the streamed response with the pprint utility
async def streaming_pprint():
    await apprint_run_response(agent.arun(input="Tell me a joke.", stream=True))


if __name__ == "__main__":
    asyncio.run(streaming())
    # OR
    asyncio.run(streaming_print())
    # OR
    asyncio.run(streaming_pprint())
```
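The manual-iteration variant is ordinary `async for` over an async iterator. A minimal sketch of the same control flow, using a hypothetical stand-in generator (`fake_stream`, not part of Agno) instead of a real model call, so it runs without an API key:

```python
import asyncio


async def fake_stream(text: str):
    # Hypothetical stand-in for agent.arun(..., stream=True):
    # yields the response one word at a time.
    for word in text.split():
        await asyncio.sleep(0)  # simulate waiting on the network
        yield word + " "


async def consume() -> str:
    chunks = []
    async for chunk in fake_stream("Why did the chicken cross the road?"):
        print(chunk, end="", flush=True)  # display chunks as they arrive
        chunks.append(chunk)
    return "".join(chunks)


full_text = asyncio.run(consume())
```

The real example follows the same shape: each iteration of the `async for` loop yields as soon as the next chunk is available, so output appears incrementally rather than after the whole response is complete.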
Usage
Create a virtual environment
Open the Terminal and create a Python virtual environment.

```shell
python3 -m venv .venv
source .venv/bin/activate
```
Install libraries
```shell
pip install -U agno openai
```
Export your OpenAI API key
```shell
export OPENAI_API_KEY="your_openai_api_key_here"
```
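As a quick sanity check (not part of the cookbook itself), you can confirm the exported key is visible to Python before running the agent:

```python
import os

# Look up the key exported in the previous step; an empty result
# means the variable was not exported in this shell session.
api_key = os.environ.get("OPENAI_API_KEY")
key_is_set = bool(api_key)
print("OPENAI_API_KEY set:", key_is_set)
```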
Create a Python file
Create a Python file and add the above code.

Find All Cookbooks
Explore all the available cookbooks in the Agno repository. Click the link below to view the code on GitHub: Agno Cookbooks on GitHub