LLM Function Calling
How can we make LLMs use the tools we have already built?
Function Calling
With OpenAI
Thanks to https://www.promptingguide.ai/applications/function_calling
Testing it together with chainlit
How to use OpenAI API?
pip install openai==1.40.0
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # read the API key from a .env file
api_key = os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=api_key)

#df = read_excel(file_name)
#df_markdown = df.to_markdown(index=False)
df_markdown = "12345"  # placeholder instead of a real DataFrame

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "system",
            "content": """
            You are an expert data analyst.
            """,
        },
        {"role": "user", "content": f"What is this variable containing?: {df_markdown}"},
    ],
    model="gpt-4o-mini",
    temperature=0.3,
)

completed_message = chat_completion.choices[0].message.content
print(completed_message)
Interesting Resources for Function Calling
- ChatGPT returns free-form natural text, which can be unreliable to parse; function calling makes the output more controlled and deterministic (see the sketch after this list).
- The feature can extract structured data from text (the prompt) and assign it as arguments to a chosen function.
- Developers can create their own functions connecting the LLMs to internal and external APIs and databases, and let the model decide which function to use and which arguments to pass.
- Non-technical users can interact with LLMs to obtain data without having to know the underlying functions and required arguments.
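To make this concrete, here is a minimal sketch of function calling with the OpenAI client set up above. The get_weather tool and its schema are hypothetical placeholders; note that the model only returns the chosen function name and arguments, it never executes anything itself:

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Madrid?"}],
    tools=tools,
)

# The model decides to call the function and extracts the arguments from the prompt
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name)       # get_weather
print(tool_call.function.arguments)  # {"city": "Madrid"}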
Claude
How to use Anthropic API?
pip install anthropic==0.34.1 #https://github.com/anthropics/anthropic-sdk-python
import os
from anthropic import Anthropic

client = Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))

system_prompt = "You are a helpful Data Analyst."

message = client.messages.create(
    max_tokens=1024,
    system=system_prompt,  # use the top-level "system" parameter
    messages=[
        {"role": "user", "content": "Hello, who are you?"}
    ],
    model="claude-3-5-sonnet-20240620",
    #model="claude-3-opus-20240229",
)

#print(message.content)  # the full list of content blocks
content = message.content[0].text
print(content)
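Claude supports the same idea via its tool use API. A minimal sketch, reusing the client above; the get_weather tool is again a hypothetical placeholder:

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    tools=[
        {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Get the current weather for a city",
            "input_schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    messages=[{"role": "user", "content": "What's the weather in Madrid?"}],
)

# Claude returns a tool_use content block instead of plain text
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)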
Groq
Groq function calling via LiteLLM - https://docs.litellm.ai/docs/providers/groq#supported-models---all-groq-models-supported
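Since LiteLLM exposes an OpenAI-compatible interface, the tools schema from the OpenAI example above should carry over as-is. A rough sketch, assuming a GROQ_API_KEY in the environment and the llama3-70b-8192 model:

from litellm import completion  # pip install litellm

response = completion(
    model="groq/llama3-70b-8192",
    messages=[{"role": "user", "content": "What's the weather in Madrid?"}],
    tools=tools,  # same OpenAI-style schema as in the OpenAI example
)
print(response.choices[0].message.tool_calls)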
Ollama
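Ollama exposes an OpenAI-compatible endpoint, so the same client and tools schema should work against a local model that supports tool calling (for example llama3.1). A sketch, assuming Ollama is running locally on its default port:

from openai import OpenAI

local_client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # required by the client but ignored by Ollama
)

response = local_client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "What's the weather in Madrid?"}],
    tools=tools,  # same OpenAI-style schema as in the OpenAI example
)
print(response.choices[0].message.tool_calls)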
Conclusions
There is another feature called Structured Outputs.
...but as we can see, it's all about the prompts!
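For reference, a minimal sketch of Structured Outputs with the OpenAI SDK, using a Pydantic model as the response schema (the Meeting model is a made-up example):

from pydantic import BaseModel

class Meeting(BaseModel):  # made-up schema for illustration
    title: str
    duration_minutes: int

completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Plan a 30 minute standup called Daily Sync"}],
    response_format=Meeting,
)
print(completion.choices[0].message.parsed)  # Meeting(title='Daily Sync', duration_minutes=30)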
Example of system Prompt that works well for me
#python3 pyopen.py > output.mdx
import os
from dotenv import load_dotenv
from openai import OpenAI # pip install openai==1.30.5
# Load environment variables from the .env file
load_dotenv()
# Get the OpenAI API key from the environment variables
api_key = os.getenv("OPENAI_API_KEY")
# Initialize the OpenAI client
client = OpenAI(
api_key=api_key,
)
mtg_must_have = """
* Purpose - Every meeting should have an objective: the reason why you're having the meeting. Before you schedule a meeting, be sure you know what you want out of it.
* Agenda - An agenda outlines the plan for the meeting and lets participants know what to expect. It allows them to be prepared so they can actively participate and bring their expertise to the table.
* Preparation - Before the meeting, all participants should take some time to review the agenda and prepare any questions they may have.
"""
system_reminder = """
Never start your answers with "As an AI language model" when responding to questions.
No disclaimer is needed in any situation.
Write using simple language so a grandma could understand.
Use a conversational tone, be informal and approachable. Write like how people in real life would talk. A bit of slang is okay but don't overdo it.
If you don’t know the answer, just say you don’t know.
Your answers should be on point, succinct and useful. Each response should be written with maximum usefulness in mind rather than being polite.
If something seems out of reach, don’t ask the user to do it; instead, try to work through all your available options first.
When solving problems, take a breath and tackle them step by step.
My career depends on you giving me a good answer.
Speak only what needs to be said. Keep the responses brief and to the point, avoid extra words and overly long explanations.
"""
chat_completion = client.chat.completions.create(
messages=[
{
"role": "system",
"content": f"""You are an expert meeting assistant. Very aware of the following:
{mtg_must_have}
Remember also, that: {system_reminder}
""",
},
{"role": "user", "content": "Who are you and what can you do?"}
],
model="gpt-4o-mini",
temperature=0.7,
)
# Extract and print the content of the completed message
completed_message = chat_completion.choices[0].message.content
print(completed_message)
FAQ
About LangChain and Frameworks
Interesting Article - https://www.octomind.dev/blog/why-we-no-longer-use-langchain-for-building-our-ai-agents
However, as Octomind's requirements became more sophisticated, LangChain's rigid high-level abstractions turned into a source of friction and hindered productivity.
The main issues with LangChain's abstractions were:
- 🚧 Increased complexity of code without perceivable benefits
- 🤔 Difficulty in understanding and maintaining code
- 🔒 Inflexibility in adapting to new requirements
- 🕸️ Nested abstractions leading to debugging internal framework code
Octomind’s development team faced challenges when trying to implement more complex architectures, such as spawning sub-agents or having multiple specialist agents interact with each other. LangChain’s limitations forced them to reduce the scope of their implementations.
Building AI Applications Without a Framework
After removing LangChain, Octomind realized that a framework might not be necessary for building AI applications. Instead, they suggest using a building blocks approach with simple low-level code and carefully selected external packages. The core components most applications need are:
- 💬 A client for LLM communication
- 🛠️ Functions/Tools for function calling
- 📊 A vector database for RAG
- 🔍 An Observability platform for tracing, evaluation, etc.
By using modular building blocks with minimal abstractions, Octomind’s team can now develop more quickly and with less friction, focusing on solving problems rather than translating ideas into framework-specific code.
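To get a feel for that building-blocks approach, here is a sketch of the full function calling loop using nothing but the OpenAI client and the json module; get_weather and the tools schema are the same hypothetical examples as earlier:

import json

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub implementation of our tool

messages = [{"role": "user", "content": "What's the weather in Madrid?"}]
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
msg = response.choices[0].message

if msg.tool_calls:
    messages.append(msg)  # keep the assistant's tool call in the history
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = get_weather(**args)  # we run the tool ourselves
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    # send the tool result back so the model can phrase a final answer
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(final.choices[0].message.content)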
Don't marry the framework? :)
Generating Images with OpenAI
You can use the DALL·E text-to-image models via the OpenAI API.
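A minimal sketch with the same client as before; the prompt is a made-up example, and the model name and size are the usual DALL·E 3 options:

response = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a robot taking meeting notes",
    size="1024x1024",
    n=1,
)
print(response.data[0].url)  # temporary URL of the generated image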
Understanding Images with Claude
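Claude accepts images as base64-encoded content blocks alongside text. A sketch, assuming a local chart.png file and the client from the Anthropic section above:

import base64

with open("chart.png", "rb") as f:  # hypothetical local image
    image_data = base64.b64encode(f.read()).decode()

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image", "source": {"type": "base64", "media_type": "image/png", "data": image_data}},
                {"type": "text", "text": "What does this chart show?"},
            ],
        }
    ],
)
print(message.content[0].text)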
ReAct vs Function Calling
🛠️ Function Calling Agents
Function calling agents rely on the vendor to select the correct tools and inputs based on a provided schema, shifting the responsibility of tool selection to the vendor.
This approach is similar to the serverless model and is supported by many vendors, with LangChain providing an abstraction for easy switching between models.
Key points:
- 🔧 Vendor-driven tool selection
- 🌐 Serverless-like model
- 🔄 Easy switching between models with the LangChain abstraction

🧠 ReAct Agents
Now let's look at ReAct agents. They use the ReAct prompting approach, which is based on the ReAct paper and incorporates prompt engineering techniques.
This approach makes the LLM a reasoning engine that selects tools and inputs itself. ReAct agents offer more control and flexibility to developers but require more work and thinking in the tool selection process.
Key points:
- 🎨 Developer-driven tool selection
- 📜 Based on the ReAct paper
- 🔧 Incorporates prompt engineering techniques
- 🔍 LLM acts as a reasoning engine
- 🛠️ More control and flexibility for developers
- 🤔 Requires more work and thinking from developers

🤔 Choosing the Right Approach
The choice between function calling agents and ReAct agents depends on the level of control and flexibility the developer wants.
Function calling agents provide ease of use but less control, while ReAct agents offer more control but require more effort from the developer.
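For intuition, this is roughly what a ReAct-style prompt looks like; the tool names are placeholders, and the surrounding loop (parsing Action lines, running the tool, appending the Observation) is left to the developer:

react_prompt = """Answer the question as best you can.
You have access to these tools: search, calculator.

Use this format:
Thought: reason about what to do next
Action: tool_name[input]
Observation: the result of the action
... (repeat Thought/Action/Observation as needed)
Final Answer: the answer to the original question

Question: {question}"""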
LangChain - AgentExecutors
AI Agents
What could ReAct enhance? Two projects come to mind 🚀
- Streamlit Multichat and the YT Summarizer with Groq from PhiData
version: '3'
services:
  streamlit_multichat:
    image: ghcr.io/jalcocert/streamlit-multichat:latest
    container_name: streamlit_multichat
    volumes:
      - ai_streamlit_multichat:/app
    working_dir: /app
    command: /bin/sh -c "\
      mkdir -p /app/.streamlit && \
      echo 'OPENAI_API_KEY = \"sk-proj-oaiapi\"' > /app/.streamlit/secrets.toml && \
      echo 'GROQ_API_KEY = \"gsk_yourgroqapi\"' >> /app/.streamlit/secrets.toml && \
      echo 'ANTHROPIC_API_KEY = \"sk-ant-anthapikey-\"' >> /app/.streamlit/secrets.toml && \
      streamlit run Z_multichat_Auth.py"
    ports:
      - "8501:8501"
    networks:
      - cloudflare_tunnel
      # - nginx_default
    restart: always

networks:
  cloudflare_tunnel:
    external: true
  # nginx_default:
  #   external: true

volumes:
  ai_streamlit_multichat:
version: '3.8'
services:
  phidata_service:
    image: ghcr.io/jalcocert/phidata:yt-groq # phidata:yt_summary_groq
    container_name: phidata_yt_groq
    ports:
      - "8502:8501"
    environment:
      - GROQ_API_KEY=gsk_yourgroq_apikey # your_api_key_here
    command: streamlit run cookbook/llms/groq/video_summary/app.py
    restart: always
    # networks:
    #   - cloudflare_tunnel

# networks:
#   cloudflare_tunnel:
#     external: true