
Ollama + Qwen2: Easily Build a Chat System with Function Calling

liuian · 2024-12-07 14:59

This article shows how to use Ollama together with Qwen2 to build an OpenAI-compatible chat API and extend the model's capabilities by connecting it to external functions.


tools is an optional parameter of OpenAI's Chat Completions API that can be used to provide function specifications. Its purpose is to enable the model to generate function arguments that conform to the provided specifications. Note that the API does not actually execute any function calls; developers must use the model's output to invoke the functions themselves.
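As a minimal sketch of what this looks like (the placeholder function here is illustrative only; the full weather example used in this article is defined later), a request carrying the tools parameter has the following shape, and the model's reply carries the generated call in tool_calls instead of executing anything:

# Minimal sketch of a request payload that carries the tools parameter.
# The function below is a placeholder; the real specifications used in this
# article are defined in the "Basic concepts" section.
request_payload = {
    "model": "myqwen2",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",      # name of the callable
                "description": "Get the current weather",
                "parameters": {                      # JSON Schema for the arguments
                    "type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"],
                },
            },
        }
    ],
}
# The API only returns the chosen function name and JSON arguments in
# response.choices[0].message.tool_calls; executing the function is up to you.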


Ollama supports the tools parameter of the OpenAI-compatible API. If function specifications are provided in tools, Qwen decides when to call which function. Note, however, that Ollama does not yet support tool_choice, the parameter used to force the model to call a specific function.


Note: The test cases in this article are based on the OpenAI cookbook: https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models


This article covers three main parts:

  • Model deployment: use Ollama and Qwen with a custom template to deploy a chat API endpoint that supports function calling.
  • Generating function arguments: specify a set of functions and use the API to generate function arguments.
  • Calling functions with model-generated arguments: close the loop by actually executing functions with the arguments the model generates.


01. Model Deployment

Download the model file

Use the ModelScope command-line tool to download a single model file. This article uses the GGUF build of Qwen2-7B-Instruct:

modelscope download --model=qwen/Qwen2-7B-Instruct-GGUF --local_dir . qwen2-7b-instruct-q5_k_m.gguf

Linux environment

Linux users can install Ollama from the ModelScope mirror (recommended):

modelscope download --model=modelscope/ollama-linux --local_dir ./ollama-linux
cd ollama-linux
sudo chmod 777 ./ollama-modelscope-install.sh
./ollama-modelscope-install.sh


Start the Ollama service

ollama serve
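A quick way to confirm the service is listening (assuming the default address http://127.0.0.1:11434) is to request its root URL, which should return a short status message:

# Check that the Ollama server is reachable on the default port.
from urllib.request import urlopen

with urlopen("http://127.0.0.1:11434") as resp:
    print(resp.status, resp.read().decode())  # expect 200 and an "Ollama is running"-style reply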


Create the ModelFile

Copy the model path and create a Modelfile named "ModelFile", with the template configured to support function calling. Its contents are as follows:

FROM /mnt/workspace/qwen2-7b-instruct-q5_k_m.gguf


# set the temperature to 0.7 [higher is more creative, lower is more coherent]
PARAMETER temperature 0.7
PARAMETER top_p 0.8
PARAMETER repeat_penalty 1.05
TEMPLATE """{{ if .Messages }}
{{- if or .System .Tools }}<|im_start|>system
{{ .System }}
{{- if .Tools }}


# Tools


You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools:
<tools>{{- range .Tools }}{{ .Function }}{{- end }}</tools>


For each function call, return a JSON object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>{{- end }}<|im_end|>{{- end }}
{{- range .Messages }}
{{- if eq .Role "user" }}
<|im_start|>{{ .Role }}
{{ .Content }}<|im_end|>
{{- else if eq .Role "assistant" }}
<|im_start|>{{ .Role }}
{{- if .Content }}
{{ .Content }}
{{- end }}
{{- if .ToolCalls }}
<tool_call>
{{ range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{ end }}</tool_call>
{{- end }}<|im_end|>
{{- else if eq .Role "tool" }}
<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{- end }}
{{- end }}
<|im_start|>assistant
{{ else }}{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ end }}
"""


Create the custom model

Use the ollama create command to create the custom model:

ollama create myqwen2 --file ./ModelFile


Run the model:

ollama run myqwen2
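Before moving on, it is worth a quick check that the custom model answers through Ollama's OpenAI-compatible endpoint. A minimal sketch using only the standard library (assuming the default address http://127.0.0.1:11434):

# Sanity check: send one chat completion request to the local endpoint.
import json
from urllib.request import Request, urlopen

payload = {
    "model": "myqwen2",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}
req = Request(
    "http://127.0.0.1:11434/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])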


02. Generating Function Arguments

Install dependencies

!pip install scipy --quiet
!pip install tenacity --quiet
!pip install tiktoken --quiet
!pip install termcolor --quiet
!pip install openai --quiet


Call the locally deployed qwen2 model using the OpenAI API format

import json
import openai
from tenacity import retry, wait_random_exponential, stop_after_attempt
from termcolor import colored  


MODEL = "myqwen2"
client = openai.OpenAI(
    base_url="http://127.0.0.1:11434/v1",
    api_key = "None"
)


Utilities

First, let's define a few utilities for calling the Chat Completions API and for maintaining and tracking the state of the conversation.

@retry(wait=wait_random_exponential(multiplier=1, max=40), stop=stop_after_attempt(3))
def chat_completion_request(messages, tools=None, tool_choice=None, model=MODEL):
    try:
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            tools=tools,
            tool_choice=tool_choice,
        )
        return response
    except Exception as e:
        print("Unable to generate ChatCompletion response")
        print(f"Exception: {e}")
        return e
def pretty_print_conversation(messages):
    role_to_color = {
        "system": "red",
        "user": "green",
        "assistant": "blue",
        "function": "magenta",
    }


    for message in messages:
        if message["role"] == "system":
            print(colored(f"system: {message['content']}\n", role_to_color[message["role"]]))
        elif message["role"] == "user":
            print(colored(f"user: {message['content']}\n", role_to_color[message["role"]]))
        elif message["role"] == "assistant" and message.get("function_call"):
            print(colored(f"assistant: {message['function_call']}\n", role_to_color[message["role"]]))
        elif message["role"] == "assistant" and not message.get("function_call"):
            print(colored(f"assistant: {message['content']}\n", role_to_color[message["role"]]))
        elif message["role"] == "function":
            print(colored(f"function ({message['name']}): {message['content']}\n", role_to_color[message["role"]]))


Basic concepts
(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#basic-concepts)

Here we assume a hypothetical weather API and define some function specifications for interacting with it. These specifications are passed to the Chat API so that the model can generate function arguments that conform to them.

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use. Infer this from the users location.",
                    },
                },
                "required": ["location", "format"],
            },
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_n_day_weather_forecast",
            "description": "Get an N-day weather forecast",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use. Infer this from the users location.",
                    },
                    "num_days": {
                        "type": "integer",
                        "description": "The number of days to forecast",
                    }
                },
                "required": ["location", "format", "num_days"]
            },
        }
    },
]


If we ask the model about the current weather, it will ask a clarifying question to obtain the additional parameters it needs.

messages = []
messages.append({"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."})
messages.append({"role": "user", "content": "hi ,can you tell me what's the weather like today"})
chat_response = chat_completion_request(
    messages, tools=tools
)
assistant_message = chat_response.choices[0].message
messages.append(assistant_message)
assistant_message
ChatCompletionMessage(content='Of course, I can help with that. To provide accurate information, could you please specify the city and state you are interested in?', role='assistant', function_call=None, tool_calls=None)


Once we supply the missing parameter information through the conversation, the model generates the appropriate function arguments for us.

messages.append({"role": "user", "content": "I'm in Glasgow, Scotland."})
chat_response = chat_completion_request(
    messages, tools=tools
)
assistant_message = chat_response.choices[0].message
messages.append(assistant_message)
assistant_message
ChatCompletionMessage(content='', role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_qq8e5z9w', function=Function(arguments='{"location":"Glasgow, Scotland"}', name='get_current_weather'), type='function')])
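At this point the generated call can be read directly off the message; a minimal sketch of extracting the function name and arguments (the full loop, including executing the call, is shown in the database example later):

# The arguments are returned as a JSON string and need to be parsed.
tool_call = assistant_message.tool_calls[0]
print(tool_call.function.name)                   # e.g. "get_current_weather"
print(json.loads(tool_call.function.arguments))  # e.g. {"location": "Glasgow, Scotland"}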


With different prompts, we can get the model to ask different clarifying questions to collect the function argument information.

messages = []
messages.append({"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."})
messages.append({"role": "user", "content": "can you tell me, what is the weather going to be like in Glasgow, Scotland in next x days"})
chat_response = chat_completion_request(
    messages, tools=tools
)
assistant_message = chat_response.choices[0].message
messages.append(assistant_message)
assistant_message
ChatCompletionMessage(content='Sure, I can help with that. Could you please specify how many days ahead you want to know the weather forecast for Glasgow, Scotland?', role='assistant', function_call=None, tool_calls=None)
messages.append({"role": "user", "content": "5 days"})
chat_response = chat_completion_request(
    messages, tools=tools
)
chat_response.choices[0]
Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='', role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_b7f3j7im', function=Function(arguments='{"location":"Glasgow, Scotland","num_days":5}', name='get_n_day_weather_forecast'), type='function')]))


Parallel function calling

(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#parallel-function-calling)

Multiple functions can be called in parallel within a single query.

messages = []
messages.append({"role": "system", "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."})
messages.append({"role": "user", "content": "what is the weather going to be like in San Francisco and Glasgow over the next 4 days"})
chat_response = chat_completion_request(
    messages, tools=tools, model=MODEL
)


assistant_message = chat_response.choices[0].message.tool_calls
assistant_message
[ChatCompletionMessageToolCall(id='call_vei89rz3', function=Function(arguments='{"location":"San Francisco, CA","num_days":4}', name='get_n_day_weather_forecast'), type='function'),
ChatCompletionMessageToolCall(id='call_4lgoubee', function=Function(arguments='{"location":"Glasgow, UK","num_days":4}', name='get_n_day_weather_forecast'), type='function')]
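Each parallel tool call should then be executed and answered with its own tool-role message, matched by tool_call_id, before the model is asked to continue. The sketch below assumes a stub get_n_day_weather_forecast implementation, since this article does not define a real one:

# Sketch: answer every parallel tool call with its own "tool" message.
# get_n_day_weather_forecast is a hypothetical stand-in for a real weather API.
def get_n_day_weather_forecast(location, num_days, format="celsius"):
    return f"(stub) {num_days}-day forecast for {location} in {format}"

tool_call_message = chat_response.choices[0].message
messages.append(tool_call_message)               # keep the tool_calls in the history
for tool_call in tool_call_message.tool_calls:
    args = json.loads(tool_call.function.arguments)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,            # ties this result to its call
        "name": tool_call.function.name,
        "content": get_n_day_weather_forecast(**args),
    })
# A follow-up chat_completion_request(messages, tools=tools) then lets the model
# summarize both forecasts in natural language.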


03. Calling Functions with Model-Generated Arguments

(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#how-to-call-functions-with-model-generated-arguments)

This example demonstrates how to execute functions whose inputs are generated by the model, and uses this to build an agent that can answer questions about a database for us.

This article uses the Chinook sample database (https://www.sqlitetutorial.net/sqlite-sample-database/).


Specifying a function to execute SQL queries

(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#specifying-a-function-to-execute-sql-queries)

First, let's define some useful functions to extract data from the SQLite database.

import sqlite3


conn = sqlite3.connect("data/Chinook.db")
print("Opened database successfully")
def get_table_names(conn):
    """Return a list of table names."""
    table_names = []
    tables = conn.execute("SELECT name FROM sqlite_master WHERE type='table';")
    for table in tables.fetchall():
        table_names.append(table[0])
    return table_names




def get_column_names(conn, table_name):
    """Return a list of column names."""
    column_names = []
    columns = conn.execute(f"PRAGMA table_info('{table_name}');").fetchall()
    for col in columns:
        column_names.append(col[1])
    return column_names




def get_database_info(conn):
    """Return a list of dicts containing the table name and columns for each table in the database."""
    table_dicts = []
    for table_name in get_table_names(conn):
        columns_names = get_column_names(conn, table_name)
        table_dicts.append({"table_name": table_name, "column_names": columns_names})
    return table_dicts


Now we can use these utility functions to extract a representation of the database schema.

database_schema_dict = get_database_info(conn)
database_schema_string = "\n".join(
    [
        f"Table: {table['table_name']}\nColumns: {', '.join(table['column_names'])}"
        for table in database_schema_dict
    ]
)


As before, we define a function specification for the function we want the API to generate arguments for. Note that the database schema is inserted into the function specification; it is important for the model to be aware of it.

tools = [
    {
        "type": "function",
        "function": {
            "name": "ask_database",
            "description": "Use this function to answer user questions about music. Input should be a fully formed SQL query.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": f"""
                                SQL query extracting info to answer the user's question.
                                SQL should be written using this database schema:
                                {database_schema_string}
                                The query should be returned in plain text, not in JSON.
                                """,
                    }
                },
                "required": ["query"],
            },
        }
    }
]


Executing SQL queries

(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#executing-sql-queries)

Now let's implement the function that actually executes the database query.

def ask_database(conn, query):
    """Function to query SQLite database with a provided SQL query."""
    try:
        results = str(conn.execute(query).fetchall())
    except Exception as e:
        results = f"query failed with error: {e}"
    return results
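The helper can be verified directly before wiring it to the model; the query below is just an illustrative example against the Chinook schema:

# Direct test of the helper, independent of the model.
print(ask_database(conn, "SELECT COUNT(*) FROM Album;"))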


Steps to invoke a function call using the Chat Completions API:

(https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models#steps-to-invoke-a-function-call-using-chat-completions-api)

Step 1: Prompt the model with content that may lead it to choose a tool. The descriptions of the tools, such as function names and signatures, are defined in the tools list and passed to the model in the API call. If a tool is chosen, the function name and arguments are included in the response.

Step 2: Programmatically check whether the model wants to call a function. If so, proceed to step 3.
Step 3: Extract the function name and arguments from the response, and call the function with those arguments. Append the result to the messages list.
Step 4: Call the Chat Completions API with the updated message list to get the final response.

messages = [{
    "role":"user", 
    "content": "What is the name of the album with the most tracks?"
}]


response = client.chat.completions.create(
    model='myqwen2', 
    messages=messages, 
    tools= tools, 
    tool_choice="auto"
)


# Append the message to messages list
response_message = response.choices[0].message 
messages.append(response_message)


print(response_message)
ChatCompletionMessage(content='', role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_23nnhlv6', function=Function(arguments='{"query":"SELECT Album.Title FROM Album JOIN Track ON Album.AlbumId = Track.AlbumId GROUP BY Album.Title ORDER BY COUNT(*) DESC LIMIT 1"}', name='ask_database'), type='function')])
# Step 2: determine if the response from the model includes a tool call.   
tool_calls = response_message.tool_calls
if tool_calls:
    # If true the model will return the name of the tool / function to call and the argument(s)  
    tool_call_id = tool_calls[0].id
    tool_function_name = tool_calls[0].function.name
    tool_query_string = json.loads(tool_calls[0].function.arguments)['query']


    # Step 3: Call the function and retrieve results. Append the results to the messages list.      
    if tool_function_name == 'ask_database':
        results = ask_database(conn, tool_query_string)


        messages.append({
            "role":"tool", 
            "tool_call_id":tool_call_id, 
            "name": tool_function_name, 
            "content":results
        })


        # Step 4: Invoke the chat completions API with the function response appended to the messages list
        # Note that messages with role 'tool' must be a response to a preceding message with 'tool_calls'
        model_response_with_function_call = client.chat.completions.create(
            model="myqwen2",
            messages=messages,
        )  # get a new response from the model where it can see the function response
        print(model_response_with_function_call.choices[0].message.content)
    else: 
        print(f"Error: function {tool_function_name} does not exist")
else: 
    # Model did not identify a function to call, result can be returned to the user 
    print(response_message.content) 

The album "Greatest Hits" contains the most tracks

If you found this useful, feel free to like and follow for more AI news, and share it with your friends.


