Open Knowledge: AI

Functions with OpenAI Assistant API

This tutorial explains Assistant API to call functions — such as integrating third party APIs.

Teemu Maatta
6 min read · Nov 23, 2023

Introduction

The Assistant API is used to build intelligent AI assistants with access to tools. These tools include:

  • Code Interpreter (OpenAI),
  • Knowledge Retrieval (OpenAI),
  • Functions (third-party tools)

In this tutorial, I will show in practice how to use Function calling with the Assistant API.

We will build an Assistant that uses function calling to parse the parameters a third-party API requires from the user input. It then performs the API call with those parameters and uses the response to finally answer back to the user.

Let’s get started.

Functions

Our use case consists of a user who wants to save every input into a long-term memory file in .json format and get a response once the .json file is updated.

Function calling is useful for retrieving user inputs in the format a third-party API requires, or for building Autonomous Agents.

I have written this tutorial so that you can execute the function locally, without needing access to a third-party API endpoint.

I will start by importing the libraries:

!pip install --upgrade openai

import os
import json
import time
from openai import OpenAI

client = OpenAI(api_key=os.getenv("openaikey"))

I can now define a few variables, dictionaries and prompts used in this tutorial. This way, you can later change them for your own use case.

user_input = 'I am excited at home thinking about the way to use Functions to generate new types of fully audio controlled app.'
filename = 'memory.json'
function_name = 'add_memory'
function_description = 'Function retrieves user input and saves file using the add_memory function.'
assistant_instruction = ('Convert user (Jack) inputs to json format with the function called {}. '
                         'Fill all variables and use the code interpreter to retrieve the current '
                         'date and time to populate the timestamp. Wait for the function execution '
                         'and inform the user about the message returned by the executed '
                         'function.').format(function_name)
model_name = 'gpt-4-1106-preview'
retrieval_tool = {"type": "retrieval"}
code_interpreter_tool = {"type": "code_interpreter"}
function_tool = {
    "type": "function",
    "function": {
        "name": function_name,
        "description": function_description,
        "parameters": {
            "type": "object",
            "properties": {
                "time_stamp": {"type": "string", "description": "The exact time of the user input, for example 14.11.2023 - 18.45"},
                "subject": {"type": "string", "description": "Describe the subject with 1-2 words, for example Dentist appointment"},
                "description": {"type": "string", "description": "Short summary describing relevant information, for example: Tony reserved dentist. There were no free appointments until end of November 2023."},
                "people": {"type": "string", "description": "Optionally add names or characters of people, separated with a comma. For example: Tony, Dentist"},
                "feelings": {"type": "string", "description": "Optionally express emotions, separated with a comma, for example: frustrated"},
            },
            "required": ["time_stamp", "subject", "description", "people", "feelings"]
        }
    }
}
assistant_tools = [retrieval_tool, code_interpreter_tool, function_tool]
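Function-calling models can occasionally omit a required parameter or invent one that is not in the schema, so it is worth checking the arguments before executing anything. A minimal sketch of my own (the `validate_arguments` helper is not part of the OpenAI SDK):

```python
def validate_arguments(arguments: dict, parameters_schema: dict):
    """Compare model-produced arguments against a function-tool JSON schema."""
    required = parameters_schema.get("required", [])
    properties = parameters_schema.get("properties", {})
    missing = [key for key in required if key not in arguments]
    unknown = [key for key in arguments if key not in properties]
    return missing, unknown

# Example with a trimmed-down version of the schema above:
schema = {
    "type": "object",
    "properties": {"time_stamp": {"type": "string"}, "subject": {"type": "string"}},
    "required": ["time_stamp", "subject"],
}
print(validate_arguments({"time_stamp": "14.11.2023 - 18.45"}, schema))
# → (['subject'], [])
```

If `missing` is non-empty, you can submit an error string as the tool output and let the Assistant ask the user again.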

Next, I define the function whose parameters I want GPT-4 to pull for me. This particular function saves memories of the user. These memories include a timestamp, a subject and so on.

When I execute this function locally, it adds the parameters to a local .json file. The function then returns a different string depending on whether it managed to save the file.

def add_memory(timestamp, subject, description, people=None, feelings=None):
    memory_entry = {
        'timestamp': timestamp,
        'subject': subject,
        'description': description,
        'people': people if people else "Not available",
        'feelings': feelings if feelings else "Not available"
    }

    try:
        with open(filename, 'r') as file:
            data = json.load(file)
    except (FileNotFoundError, json.JSONDecodeError):
        data = {}

    order_number = len(data) + 1
    data[str(order_number)] = memory_entry

    try:
        with open(filename, 'w') as file:
            json.dump(data, file, indent=10)
        function_output = "The new memory records were inserted successfully."
    except Exception:
        function_output = "The new memory record generation failed."

    return function_output

I can now create the assistant using the OpenAI API.

assistant = client.beta.assistants.create(
    instructions=assistant_instruction,
    model=model_name,
    tools=assistant_tools
)

Once the Assistant is created, I can create a thread. I will add the user message to the thread.

thread = client.beta.threads.create()

message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content=user_input)

I then create a run, which makes the Assistant process the thread.

run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id)
time.sleep(9.5)
print(run.status)

I can now check the status of the response. The possible statuses include:

  • queued (starting status),
  • in_progress (it quickly converts into this status),
  • requires_action (used with function calling) and
  • completed (the response is available).

I will then retrieve the response. If the response is not yet available, the run will be either queued or in_progress, but it soon converts into the status requires_action.

run = client.beta.threads.runs.retrieve(
    thread_id=thread.id,
    run_id=run.id)
time.sleep(2.5)
print(run.status)
Assistant API. Status message: requires_action. Image by Author
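Instead of fixed sleeps, the run status can be polled in a loop until it leaves the queued/in_progress states. A small helper of my own (a sketch, not a feature of the OpenAI SDK):

```python
def wait_for_run(client, thread_id, run_id, poll_interval=1.0, timeout=60.0):
    """Poll a run until it reaches a status that requires handling."""
    import time
    deadline = time.time() + timeout
    while time.time() < deadline:
        run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)
        if run.status in ("requires_action", "completed", "failed", "cancelled", "expired"):
            return run
        time.sleep(poll_interval)
    raise TimeoutError("Run did not reach a final status within the timeout.")
```

With this helper, the fixed `time.sleep` calls in the tutorial can be replaced by `run = wait_for_run(client, thread.id, run.id)`.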

The Assistant API has now generated from the user input the variables my function needs.

So, I can obtain the function name and its arguments:

tool_call_id = run.required_action.submit_tool_outputs.tool_calls[0]
function_name = run.required_action.submit_tool_outputs.tool_calls[0].function.name
function_arguments = json.loads(run.required_action.submit_tool_outputs.tool_calls[0].function.arguments)
print(function_name + "\n"
      + function_arguments["time_stamp"] + "\n"
      + function_arguments["subject"] + "\n"
      + function_arguments["description"] + "\n"
      + function_arguments["people"] + "\n"
      + function_arguments["feelings"])

This will output the following message:

OpenAI Assistant API tools: Functions. Image by the Author.

I will next execute this function locally using the variables provided by the Assistant API:

function_response = globals()[function_name](
    function_arguments["time_stamp"],
    function_arguments["subject"],
    function_arguments["description"],
    function_arguments["people"],
    function_arguments["feelings"])
print(function_response)
Executing function locally. Image by Author.
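Two caveats with this step: a run can request several tool calls at once, and looking up functions via `globals()` executes whatever name the model happens to return. A safer pattern is an explicit registry of allowed functions. This sketch (my own helper, not an SDK feature) iterates over all tool calls and collects outputs in the shape `submit_tool_outputs` expects:

```python
import json

def dispatch_tool_calls(tool_calls, registry):
    """Execute each requested tool call via an explicit allow-list of functions."""
    tool_outputs = []
    for call in tool_calls:
        func = registry.get(call.function.name)
        if func is None:
            output = f"Unknown function: {call.function.name}"
        else:
            output = func(**json.loads(call.function.arguments))
        tool_outputs.append({"tool_call_id": call.id, "output": output})
    return tool_outputs

# Usage with the add_memory function defined earlier (requires matching
# keyword names between the schema and the function signature):
# tool_outputs = dispatch_tool_calls(
#     run.required_action.submit_tool_outputs.tool_calls,
#     {"add_memory": add_memory})
```

The registry makes it impossible for the model to trigger anything you did not explicitly expose.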

The executed function has now updated the .json memory file:

{
    "1": {
        "timestamp": "12.11.2023 - 19.13",
        "subject": "Innovation Idea",
        "description": "While watching a football match of Real Madrid, the user had an innovative idea to create an App that automatically generates insights using the Assistant API.",
        "people": "Jack",
        "feelings": "Innovative, Thoughtful"
    },
    "2": {
        "timestamp": "12.11.2023 - 19.18",
        "subject": "Audio controlled app",
        "description": "Jack is at home and curious, thinking about how to use Functions to generate new types of fully audio controlled applications.",
        "people": "Jack",
        "feelings": "excited"
    }
}
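Since the memory file keeps numbered entries, reading the memories back is a short loop. A sketch of my own, assuming the memory.json structure shown above:

```python
import json

def list_memories(filename="memory.json"):
    """Return memory entries in insertion order as (subject, description) pairs."""
    try:
        with open(filename, "r") as file:
            data = json.load(file)
    except (FileNotFoundError, json.JSONDecodeError):
        return []
    # Keys are stringified order numbers ("1", "2", ...), so sort numerically.
    return [(entry["subject"], entry["description"])
            for _, entry in sorted(data.items(), key=lambda item: int(item[0]))]
```

Such a reader could later feed the stored memories back into the Assistant as context.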

The next step is interesting!

The Assistant API is waiting for me to send back the function output. Therefore, I can send this response back to the Assistant.

run = client.beta.threads.runs.submit_tool_outputs(
    thread_id=thread.id,
    run_id=run.id,
    tool_outputs=[
        {
            "tool_call_id": tool_call_id.id,
            "output": function_response,
        }
    ],
)
print(run.status)
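Note that the output field must be a string. The add_memory function already returns one, but if your function returns structured data, serialize it first. A small helper of my own:

```python
import json

def to_tool_output(result):
    """Assistant tool outputs must be strings; serialize anything else."""
    return result if isinstance(result, str) else json.dumps(result)

print(to_tool_output({"status": "ok", "entries": 2}))
# → {"status": "ok", "entries": 2}
```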

Again, once the status becomes completed, I can retrieve the Assistant message.

run = client.beta.threads.runs.retrieve(
    thread_id=thread.id,
    run_id=run.id)
time.sleep(1)
print(run.status)
Status message with Assistant API. Image by Author

I can then print the Assistant message:

messages = client.beta.threads.messages.list(
    thread_id=thread.id)
print(messages.data[0].content[0].text.value)

As we can see, the Assistant confirms back to the user whether the file was successfully saved locally.

Function calling with OpenAI Assistant API — Assistant response back to user. Image by Author.

I can finally delete the Assistant.

client.beta.assistants.delete(assistant.id)

Conclusions

In this tutorial, we have built an Assistant that takes advantage of Function calling. Let’s repeat the steps:

  • User inserts a message
  • Assistant retrieves the parameters for the function from the user message
  • Function is executed locally / via an API
  • Function response is sent back to the Assistant
  • Assistant responds to the user given the function response.

Overall, we can build very complex workflows using these functions.

This article is in the series: Open Knowledge: AI.

The aim is to share +10% of my articles for free to everybody without the Medium paywall.

