Ollama + Qwen2: Easily Build a Chat System That Supports Function Calling
liuian · 2024-12-07 14:59
This article describes how to use Ollama with Qwen2 to build an OpenAI-format chat API, and how to combine it with external functions to extend the model's capabilities.
tools is an optional parameter in OpenAI's Chat Completions API that can be used to provide function specifications. Its purpose is to let the model generate function arguments that conform to the specifications you provide. Note that the API itself never executes any function call; it is up to the developer to execute the function using the model's output.
Ollama supports the tools parameter of the OpenAI-format API. If function specifications are provided in tools, Qwen decides when to call which function. However, Ollama does not yet support tool_choice, the parameter that forces the model to use a specific function.
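To make the shape concrete before we set anything up, here is a minimal sketch of what one tools entry looks like and where the proposed call shows up in the response. The get_time function, its timezone parameter, and the sample values are hypothetical illustrations of my own, not part of the examples used later in this article (the full weather example appears below).
# Shape sketch only (hypothetical "get_time" function, not one of the article's examples).
example_tool_spec = {
    "type": "function",
    "function": {
        "name": "get_time",
        "description": "Get the current time in a given timezone",
        "parameters": {
            "type": "object",
            "properties": {
                "timezone": {"type": "string", "description": "IANA name, e.g. Europe/London"},
            },
            "required": ["timezone"],
        },
    },
}
# The model only *proposes* a call; your code reads the proposal from the response and executes it:
#   response.choices[0].message.tool_calls[0].function.name       -> "get_time"
#   response.choices[0].message.tool_calls[0].function.arguments  -> '{"timezone": "Europe/London"}'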
Note: the test cases in this article follow the OpenAI cookbook: https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
This article covers three parts:
- Model deployment: use Ollama and Qwen, with a custom template, to deploy a chat API endpoint that supports function calling.
- Generating function arguments: specify a set of functions and use the API to generate function arguments.
- Calling functions with model-generated arguments: close the loop by actually executing functions with the arguments the model generated.
01. Model deployment
Downloading the model file
Use the ModelScope command-line tool to download a single model file. This article uses the GGUF build of Qwen2-7B-Instruct:
modelscope download --model=qwen/Qwen2-7B-Instruct-GGUF --local_dir . qwen2-7b-instruct-q5_k_m.gguf
Installing Ollama on Linux
Linux users can install Ollama from the ModelScope mirror environment (recommended):
modelscope download --model=modelscope/ollama-linux --local_dir ./ollama-linux
cd ollama-linux
sudo chmod 777 ./ollama-modelscope-install.sh
./ollama-modelscope-install.sh
Start the Ollama service
ollama serve
Create the ModelFile
Copy the model path and create a meta file named "ModelFile", setting its template so that the model supports function calling. The contents are as follows:
FROM /mnt/workspace/qwen2-7b-instruct-q5_k_m.gguf
# set the temperature to 0.7 [higher is more creative, lower is more coherent]
PARAMETER temperature 0.7
PARAMETER top_p 0.8
PARAMETER repeat_penalty 1.05
TEMPLATE """{{ if .Messages }}
{{- if or .System .Tools }}<|im_start|>system
{{ .System }}
{{- if .Tools }}
# Tools
You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
<tools>{{- range .Tools }}{{ .Function }}{{- end }}</tools>
For each function call, return a JSON object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>{{- end }}<|im_end|>{{- end }}
{{- range .Messages }}
{{- if eq .Role "user" }}
<|im_start|>{{ .Role }}
{{ .Content }}<|im_end|>
{{- else if eq .Role "assistant" }}
<|im_start|>{{ .Role }}
{{- if .Content }}
{{ .Content }}
{{- end }}
{{- if .ToolCalls }}
<tool_call>
{{ range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{ end }}</tool_call>
{{- end }}<|im_end|>
{{- else if eq .Role "tool" }}
<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{- end }}
{{- end }}
<|im_start|>assistant
{{ else }}{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ end }}
"""创建自定义模型
Create the custom model with the ollama create command:
ollama create myqwen2 --file ./ModelFile
Run the model:
ollama run myqwen2
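Before wiring up any tools, it is worth a quick smoke test of the OpenAI-compatible endpoint. The snippet below is my own sketch, not part of the original walkthrough: it assumes Ollama is serving on its default address 127.0.0.1:11434, that the custom model was created as myqwen2, and that the openai Python package is installed (it is also installed in the next section). It simply sends a plain chat request without any tools.
# Smoke test (my own sketch): send a plain chat request to the locally served model
# through Ollama's OpenAI-compatible API.
import openai

client = openai.OpenAI(base_url="http://127.0.0.1:11434/v1", api_key="None")
reply = client.chat.completions.create(
    model="myqwen2",
    messages=[{"role": "user", "content": "Reply with a one-sentence greeting."}],
)
print(reply.choices[0].message.content)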
02. Generating function arguments
Install the dependencies:
!pip install scipy --quiet
!pip install tenacity --quiet
!pip install tiktoken --quiet
!pip install termcolor --quiet
!pip install openai --quiet
Call the locally deployed Qwen2 model through the OpenAI API format:
import json
import openai
from tenacity import retry, wait_random_exponential, stop_after_attempt
from termcolor import colored
MODEL = "myqwen2"
client = openai.OpenAI(
    base_url="http://127.0.0.1:11434/v1",
    api_key="None",
)
Utilities
First, let's define a few utilities for calling the Chat Completions API and for maintaining and tracking conversation state.
@retry(wait=wait_random_exponential(multiplier=1, max=40), stop=stop_after_attempt(3))
def chat_completion_request(messages, tools=None, tool_choice=None, model=MODEL):
    try:
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            tools=tools,
            tool_choice=tool_choice,
        )
        return response
    except Exception as e:
        print("Unable to generate ChatCompletion response")
        print(f"Exception: {e}")
        return e

def pretty_print_conversation(messages):
    role_to_color = {
        "system": "red",
        "user": "green",
        "assistant": "blue",
        "function": "magenta",
    }
    for message in messages:
        if message["role"] == "system":
            print(colored(f"system: {message['content']}\n", role_to_color[message["role"]]))
        elif message["role"] == "user":
            print(colored(f"user: {message['content']}\n", role_to_color[message["role"]]))
        elif message["role"] == "assistant" and message.get("function_call"):
            print(colored(f"assistant: {message['function_call']}\n", role_to_color[message["role"]]))
        elif message["role"] == "assistant" and not message.get("function_call"):
            print(colored(f"assistant: {message['content']}\n", role_to_color[message["role"]]))
        elif message["role"] == "function":
            print(colored(f"function ({message['name']}): {message['content']}\n", role_to_color[message["role"]]))
Basic concepts (https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#basic-concepts)
Here we assume a weather API and set up some function specifications for interacting with it. These specifications are passed to the Chat API so that the model can generate function arguments that conform to them.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use. Infer this from the users location.",
                    },
                },
                "required": ["location", "format"],
            },
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_n_day_weather_forecast",
            "description": "Get an N-day weather forecast",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use. Infer this from the users location.",
                    },
                    "num_days": {
                        "type": "integer",
                        "description": "The number of days to forecast",
                    }
                },
                "required": ["location", "format", "num_days"]
            },
        }
    },
]
If we ask the model about the current weather, it will ask a follow-up question, wanting more parameter information before it proceeds.
messages = []
messages.append({"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."})
messages.append({"role": "user", "content": "hi ,can you tell me what's the weather like today"})
chat_response = chat_completion_request(
messages, tools=tools
)
assistant_message = chat_response.choices[0].message
messages.append(assistant_message)
assistant_message
ChatCompletionMessage(content='Of course, I can help with that. To provide accurate information, could you please specify the city and state you are interested in?', role='assistant', function_call=None, tool_calls=None)
Once we supply the missing information through the conversation, the model generates the appropriate function arguments for us.
messages.append({"role": "user", "content": "I'm in Glasgow, Scotland."})
chat_response = chat_completion_request(
messages, tools=tools
)
assistant_message = chat_response.choices[0].message
messages.append(assistant_message)
assistant_message
ChatCompletionMessage(content='', role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_qq8e5z9w', function=Function(arguments='{"location":"Glasgow, Scotland"}', name='get_current_weather'), type='function')])
With different prompts, we can have it ask different clarifying questions to gather the function arguments it needs.
messages = []
messages.append({"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."})
messages.append({"role": "user", "content": "can you tell me, what is the weather going to be like in Glasgow, Scotland in next x days"})
chat_response = chat_completion_request(
messages, tools=tools
)
assistant_message = chat_response.choices[0].message
messages.append(assistant_message)
assistant_message
ChatCompletionMessage(content='Sure, I can help with that. Could you please specify how many days ahead you want to know the weather forecast for Glasgow, Scotland?', role='assistant', function_call=None, tool_calls=None)
messages.append({"role": "user", "content": "5 days"})
chat_response = chat_completion_request(
messages, tools=tools
)
chat_response.choices[0]
Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='', role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_b7f3j7im', function=Function(arguments='{"location":"Glasgow, Scotland","num_days":5}', name='get_n_day_weather_forecast'), type='function')]))
Parallel function calling
(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#parallel-function-calling)
A single request can trigger multiple function calls in parallel.
messages = []
messages.append({"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."})
messages.append({"role": "user", "content": "what is the weather going to be like in San Francisco and Glasgow over the next 4 days"})
chat_response = chat_completion_request(
messages, tools=tools, model=MODEL
)
assistant_message = chat_response.choices[0].message.tool_calls
assistant_message
[ChatCompletionMessageToolCall(id='call_vei89rz3', function=Function(arguments='{"location":"San Francisco, CA","num_days":4}', name='get_n_day_weather_forecast'), type='function'),
 ChatCompletionMessageToolCall(id='call_4lgoubee', function=Function(arguments='{"location":"Glasgow, UK","num_days":4}', name='get_n_day_weather_forecast'), type='function')]
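The step-by-step example later in this article only executes the first tool call. When the model proposes several calls in parallel like this, you can loop over message.tool_calls and append one tool result per call. The sketch below is my own; get_current_weather and get_n_day_weather_forecast are assumed to be implemented by you, since the article never defines real weather functions.
# Handling parallel tool calls (my own sketch; the weather functions are hypothetical
# placeholders that you would implement yourself).
def run_tool(name, args):
    if name == "get_current_weather":
        return get_current_weather(**args)          # hypothetical implementation
    if name == "get_n_day_weather_forecast":
        return get_n_day_weather_forecast(**args)   # hypothetical implementation
    return f"Unknown tool: {name}"

assistant_turn = chat_response.choices[0].message
messages.append(assistant_turn)                     # keep the assistant turn that carries the tool_calls
for tool_call in assistant_turn.tool_calls:
    args = json.loads(tool_call.function.arguments)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "name": tool_call.function.name,
        "content": str(run_tool(tool_call.function.name, args)),
    })
# A follow-up chat_completion_request(messages, tools=tools) would then let the model
# summarize both forecasts in natural language.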
Calling functions with model-generated arguments
(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#how-to-call-functions-with-model-generated-arguments)
This example demonstrates how to execute a function whose input is generated by the model, and uses it to implement an agent that can answer questions about a database for us.
This article uses the Chinook sample database (https://www.sqlitetutorial.net/sqlite-sample-database/).
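The code below opens the database at data/Chinook.db (the cookbook's path). The quick check here is my own addition, assuming you have already downloaded the SQLite file from the page linked above.
# Sanity check (my own addition): confirm the sample database is in place before
# the connection code below tries to open it.
import os

assert os.path.exists("data/Chinook.db"), "Download Chinook.db and place it in ./data/ first"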
Specifying a function to execute SQL queries
(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#specifying-a-function-to-execute-sql-queries)
First, let's define some utility functions to extract data from the SQLite database.
import sqlite3

conn = sqlite3.connect("data/Chinook.db")
print("Opened database successfully")

def get_table_names(conn):
    """Return a list of table names."""
    table_names = []
    tables = conn.execute("SELECT name FROM sqlite_master WHERE type='table';")
    for table in tables.fetchall():
        table_names.append(table[0])
    return table_names

def get_column_names(conn, table_name):
    """Return a list of column names."""
    column_names = []
    columns = conn.execute(f"PRAGMA table_info('{table_name}');").fetchall()
    for col in columns:
        column_names.append(col[1])
    return column_names

def get_database_info(conn):
    """Return a list of dicts containing the table name and columns for each table in the database."""
    table_dicts = []
    for table_name in get_table_names(conn):
        columns_names = get_column_names(conn, table_name)
        table_dicts.append({"table_name": table_name, "column_names": columns_names})
    return table_dicts
Now we can use these utility functions to extract a representation of the database schema.
database_schema_dict = get_database_info(conn)
database_schema_string = "\n".join(
    [
        f"Table: {table['table_name']}\nColumns: {', '.join(table['column_names'])}"
        for table in database_schema_dict
    ]
)
As before, we define a function specification for the function we want the API to generate arguments for. Note that we insert the database schema into the function specification: the model needs to know about it.
tools = [
    {
        "type": "function",
        "function": {
            "name": "ask_database",
            "description": "Use this function to answer user questions about music. Input should be a fully formed SQL query.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": f"""
                            SQL query extracting info to answer the user's question.
                            SQL should be written using this database schema:
                            {database_schema_string}
                            The query should be returned in plain text, not in JSON.
                            """,
                    }
                },
                "required": ["query"],
            },
        }
    }
]
Executing SQL queries
(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#executing-sql-queries)
Now let's implement the function that actually runs the database query.
def ask_database(conn, query):
    """Function to query SQLite database with a provided SQL query."""
    try:
        results = str(conn.execute(query).fetchall())
    except Exception as e:
        results = f"query failed with error: {e}"
    return results
Steps to invoke a function call using the Chat Completions API:
(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#steps-to-invoke-a-function-call-using-chat-completions-api)
Step 1: Prompt the model with content that may lead it to select a tool. The tool descriptions (such as function names and signatures) are defined in the tools list and passed to the model in the API call. If a tool is selected, its function name and arguments are included in the response.
Step 2: Programmatically check whether the model wants to call a function. If so, proceed to step 3.
Step 3: Extract the function name and arguments from the response, call the function with those arguments, and append the result to the messages list.
Step 4: Call the Chat Completions API with the messages list to get the final response.
messages = [{
    "role": "user",
    "content": "What is the name of the album with the most tracks?"
}]

response = client.chat.completions.create(
    model='myqwen2',
    messages=messages,
    tools=tools,
    tool_choice="auto"
)

# Append the message to messages list
response_message = response.choices[0].message
messages.append(response_message)
print(response_message)
ChatCompletionMessage(content='', role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_23nnhlv6', function=Function(arguments='{"query":"SELECT Album.Title FROM Album JOIN Track ON Album.AlbumId = Track.AlbumId GROUP BY Album.Title ORDER BY COUNT(*) DESC LIMIT 1"}', name='ask_database'), type='function')])
# Step 2: determine if the response from the model includes a tool call.
tool_calls = response_message.tool_calls
if tool_calls:
    # If true the model will return the name of the tool / function to call and the argument(s)
    tool_call_id = tool_calls[0].id
    tool_function_name = tool_calls[0].function.name
    tool_query_string = json.loads(tool_calls[0].function.arguments)['query']

    # Step 3: Call the function and retrieve results. Append the results to the messages list.
    if tool_function_name == 'ask_database':
        results = ask_database(conn, tool_query_string)
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call_id,
            "name": tool_function_name,
            "content": results
        })

        # Step 4: Invoke the chat completions API with the function response appended to the messages list
        # Note that messages with role 'tool' must be a response to a preceding message with 'tool_calls'
        model_response_with_function_call = client.chat.completions.create(
            model="myqwen2",
            messages=messages,
        )  # get a new response from the model where it can see the function response
        print(model_response_with_function_call.choices[0].message.content)
    else:
        print(f"Error: function {tool_function_name} does not exist")
else:
    # Model did not identify a function to call, result can be returned to the user
    print(response_message.content)
The album "Greatest Hits" contains the most tracks