OpenAI recently released its open-source Agents SDK. The documentation looked interesting, so I decided to give it a try.
The SDK supports multiple agents working together using “handoffs”. The example I am using in today’s article involves 3 agents:
1) An agent that specializes in answering questions on planetary positions
2) An agent that handles everything else
3) An agent that routes incoming queries to one of the above two
I am using the “kerykeion” Python library for calculating planetary positions. Here, I am taking advantage of the SDK’s support for “output_type”, which is essentially a structured output. See the following code fragment:
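A minimal sketch of this fragment might look as follows, assuming a simple “EphContext” model and kerykeion’s “AstrologicalSubject” class; the exact fields and kerykeion calls in the real code may differ:

```python
from pydantic import BaseModel
from kerykeion import AstrologicalSubject


class EphContext(BaseModel):
    """Structured output the LLM should produce for a planet-position query."""
    planet: str      # e.g. "Mars" (illustrative fields; the actual model may differ)
    year: int
    month: int
    day: int
    hour: int
    minute: int
    city: str
    nation: str


def get_ephemeris(ctx: EphContext) -> str:
    """Compute the requested planet's position from the passed context."""
    subject = AstrologicalSubject(
        "query", ctx.year, ctx.month, ctx.day, ctx.hour, ctx.minute,
        city=ctx.city, nation=ctx.nation,
    )
    # kerykeion exposes each planet as an attribute (sun, moon, mars, ...).
    planet = getattr(subject, ctx.planet.lower())
    return f"{ctx.planet} is at {planet.position:.2f} degrees of {planet.sign}"
```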
In the above, the class “EphContext”, derived from Pydantic’s “BaseModel”, defines the structure of the output I need in response to any user question on planet positions. I then call the function “get_ephemeris()”, which computes the planet’s position from the passed context.
The three agents I mentioned earlier are shown below:
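A sketch of how these three agents could be wired up, assuming illustrative agent names and instruction strings:

```python
from agents import Agent

# Agent that handles planet-position queries; its final output is an EphContext.
eph_agent = Agent(
    name="Ephemeris Agent",
    instructions="Extract the planet, date, time and place from questions "
                 "about planetary positions.",
    output_type=EphContext,
)

# Agent that handles everything else.
other_tasks_agent = Agent(
    name="Other Tasks Agent",
    instructions="You are a helpful assistant. Answer general questions.",
)

# Orchestrating agent: receives every query and hands off to one of the above.
main_agent = Agent(
    name="Main Agent",
    instructions="If the query is about planetary positions, hand off to the "
                 "Ephemeris Agent; otherwise hand off to the Other Tasks Agent.",
    handoffs=[eph_agent, other_tasks_agent],
)
```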
The “eph_agent” is the agent that handles queries on planet positions and bundles the extracted details into an “EphContext” object, as specified by the “output_type” parameter.
The “main_agent” acts as the orchestrating agent, receiving all queries and then “handing off” each task to either “other_tasks_agent” or “eph_agent”.
The program runs in a loop, reading user input and then responding accordingly. Here is the code:
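A sketch of that loop, assuming the definitions above and the SDK’s “Runner.run_sync” entry point:

```python
from agents import Runner


def main():
    while True:
        query = input("Your question (or 'quit'): ")
        if query.strip().lower() == "quit":
            break
        result = Runner.run_sync(main_agent, query)
        # The SDK will not call get_ephemeris() on its own, so check the type
        # of the final output and invoke the function explicitly.
        if isinstance(result.final_output, EphContext):
            print(get_ephemeris(result.final_output))
        else:
            print(result.final_output)


if __name__ == "__main__":
    main()
```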
Here is a sample output:
Overall, the SDK is quite functional. I would have liked an option to also pass my “get_ephemeris()” function as a “tool” alongside the “output_type” parameter on “eph_agent”, so that the LLM would automatically invoke it with the synthesized structured output. Unfortunately, that is not supported, which is why I call the function explicitly in “main()” after checking the type of the result.
That is it for today. Have a great week!
You can download the code here.