-
FWIW I found out I can make tool calls happen by awkwardly formatting the yielded message and running the pending tools manually:

```ts
import { unstable_runPendingTools } from "assistant-stream";

const MyAdapter = {
  run: async function* ({ messages, abortSignal, context }) {
    const messageContent = {
      content: [
        {
          type: "text",
          text: "My Text!",
        },
        {
          type: "tool-call",
          toolName: "my_tool",
          toolCallId: "call_2310crn8943cr43",
          args: {
            foo: "bar",
          },
        },
      ],
    };
    const message = {
      content: messageContent,
      parts: messageContent,
    };
    const updatedMessage = await unstable_runPendingTools(message, context.tools, abortSignal);
    yield updatedMessage;
  },
};
```

Is it supposed to be done like this? I do not mean to complain, of course - but this feels a bit like a workaround for something that should work out of the box.
-
Hey, currently the execution of tool calls is the responsibility of the ChatModelAdapter. At the same time, we do not provide a nice API for you to run pending tools - sorry about that. We're also in the middle of a migration process to rename this API.
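In the meantime, here is a minimal sketch of doing this by hand inside a custom adapter. This is not an official API: the hard-coded tool call is purely for illustration, and the `execute(args, { toolCallId, abortSignal })` signature mirrors what the examples later in this thread use.

```ts
import type { ChatModelAdapter } from "@assistant-ui/react";

export const ToolExecutingAdapter: ChatModelAdapter = {
  async *run({ abortSignal, context }) {
    // Suppose the model produced this tool call (hard-coded for illustration):
    const toolCall = {
      type: "tool-call" as const,
      toolCallId: "call_1", // placeholder id
      toolName: "my_tool", // must match a tool registered in the UI
      args: { foo: "bar" },
    };
    // Show the pending tool call in the UI
    yield { content: [toolCall] };

    // Execute it ourselves; LocalRuntime does not do this automatically
    const tool = context.tools?.[toolCall.toolName];
    const result = tool?.execute
      ? await tool.execute(toolCall.args, {
          toolCallId: toolCall.toolCallId,
          abortSignal,
        })
      : `Tool not found: ${toolCall.toolName}`;

    // Yield again with the result attached so the UI (and any
    // follow-up model round) can see it
    yield { content: [{ ...toolCall, result }] };
  },
};
```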
-
@mattotodd, @Biont - did you solve this? After hours of trying to understand this, I came across this thread and finally managed to call a tool with the `unstable_runPendingTools` workaround above. For example, the returned content is:

Expectation:

@Yonom, appreciate your response as well.
-
Why is it so hard to implement this very basic feature in assistant-ui? Just calling registered tools and giving the result back to the LLM, right?? Very confused.
-
For what it's worth - here's a working example for me (written by GPT-5 with the whole assistant-ui codebase as context):

```tsx
import { ChatModelAdapter, useLocalRuntime, type ThreadMessage } from "@assistant-ui/react";
import { ReactNode } from "react";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
/**
* OpenAI/Azure compatible tool and message types
*/
type OAIFunctionTool = {
type: "function";
function: {
name: string;
description?: string;
parameters?: unknown;
};
};
type OAIMessage =
| { role: "system"; content: string }
| { role: "user"; content: string }
| {
role: "assistant";
content?: string;
tool_calls?: Array<{
id: string;
type: "function";
function: { name: string; arguments: string };
}>;
}
| { role: "tool"; content: string; tool_call_id: string; name?: string };
/**
* Tool-call state for the current round (used to keep cards visible)
*/
type VisibleToolCall = {
id: string;
name: string;
argsText: string;
args: any;
result?: any;
isError?: boolean;
artifact?: any;
};
/**
* Map assistant-ui history to OpenAI/Azure chat messages,
* including replay of assistant tool_calls and subsequent tool results.
*/
function mapHistoryToOAIMessages(history: readonly ThreadMessage[]): OAIMessage[] {
const result: OAIMessage[] = [];
for (const m of history ?? []) {
const parts = (m.content ?? []) as Array<any>;
if (m.role === "assistant") {
const contentText = parts
.filter((p) => p.type === "text" && typeof p.text === "string")
.map((p) => p.text)
.join("");
const toolCalls = parts
.filter((p) => p.type === "tool-call")
.map((p) => {
const argsText =
typeof p.argsText === "string"
? p.argsText
: p.args
? JSON.stringify(p.args)
: "{}";
return {
id: p.toolCallId ?? `tc_${p.toolName ?? "tool"}`,
type: "function" as const,
function: { name: p.toolName, arguments: argsText },
};
});
if (contentText || toolCalls.length > 0) {
result.push({
role: "assistant",
...(contentText ? { content: contentText } : {}),
...(toolCalls.length > 0 ? { tool_calls: toolCalls } : {}),
});
}
// Convert tool-call parts that already contain a result into tool messages
for (const p of parts) {
if (p.type === "tool-call" && p.result !== undefined) {
result.push({
role: "tool",
tool_call_id: p.toolCallId ?? `tc_${p.toolName ?? "tool"}`,
name: p.toolName,
content: typeof p.result === "string" ? p.result : JSON.stringify(p.result),
});
}
}
} else if (m.role === "user") {
const contentText = parts
.filter((p) => p.type === "text" && typeof p.text === "string")
.map((p) => p.text)
.join("");
if (contentText) {
result.push({ role: "user", content: contentText });
}
} else if (m.role === "system") {
const contentText = parts
.filter((p) => p.type === "text" && typeof p.text === "string")
.map((p) => p.text)
.join("");
if (contentText) {
result.push({ role: "system", content: contentText });
}
}
}
return result;
}
/**
* Convert assistant-ui tool registry to OpenAI/Azure tool list
*/
function mapToolsToOAIFunctions(
tools: Record<string, any> | undefined,
): OAIFunctionTool[] {
return Object.entries(tools ?? {}).map(([name, t]) => ({
type: "function",
function: {
name,
description: t.description,
...(t.parameters ? { parameters: zodToJsonSchema(t.parameters as z.ZodType) } : {}),
},
}));
}
/**
* Parse a single SSE line of the form "data: {...}"
*/
function parseSSELine(line: string): { done?: true; data?: any } | null {
if (!line.startsWith("data: ")) return null;
const payload = line.slice(6);
if (payload === "[DONE]") return { done: true };
try {
return { data: JSON.parse(payload) };
} catch {
return null;
}
}
/**
* Execute a single tool with JSON parse + zod validation + abort support
*/
async function executeToolSafely(opts: {
tools: Record<string, any>;
toolCallId: string;
toolName: string;
argsText: string;
abortSignal: AbortSignal;
}) {
const { tools, toolCallId, toolName, argsText, abortSignal } = opts;
const tool = tools?.[toolName];
if (!tool || typeof tool.execute !== "function") {
return { isError: true, result: `Tool not found: ${toolName}` };
}
let args: any = {};
try {
args = argsText ? JSON.parse(argsText) : {};
} catch (e) {
return { isError: true, result: `Invalid JSON arguments: ${(e as Error).message}` };
}
if (tool.parameters instanceof z.ZodType) {
const parsed = tool.parameters.safeParse(args);
if (!parsed.success) {
return {
isError: true,
result: `Parameter validation failed: ${JSON.stringify(parsed.error.issues)}`,
};
}
}
try {
const res = await tool.execute(args, { toolCallId, abortSignal });
return { isError: false, result: res };
} catch (e) {
return { isError: true, result: String(e) };
}
}
/**
* Complete adapter:
* - Multi-round tool loop (until finish_reason === "stop")
* - Snapshot-style yields (text + all tool-call cards) to keep cards visible
* - Stream deltas for text/tool_calls; execute tools only after tool_calls finish
* - Yield tool-call with result/isError (no "tool-result" part type)
* - Replay tool results as role:"tool" messages into conversation for next round
*/
export const MyCompanyAzureAdapter: ChatModelAdapter = {
async *run({ messages, abortSignal, context }) {
// 1) Map history with prior tool-calls/results for correct model context
let conversation: OAIMessage[] = mapHistoryToOAIMessages(messages);
const oaiTools = mapToolsToOAIFunctions(context.tools);
// Controls whether to execute multiple tools in parallel
const executeInParallel = true;
// Persist executed tool-calls across rounds so their cards remain visible
let persistedToolCalls = new Map<string, VisibleToolCall>();
outer: while (true) {
const payload = {
model: "gpt-4o",
messages: conversation,
tools: oaiTools,
stream: true,
temperature: 0.7,
top_p: 0.7,
max_tokens: 16384,
};
// 2) Get gateway token
const apiKey = await fetch(
"<FETCH_API_KEY>",
{ method: "GET", redirect: "follow" },
)
.then((r) => r.json())
.then((d) => d.data.accessToken);
// 3) Stream from Azure
const res = await fetch("<AZURE_API_URL>", {
method: "POST",
headers: { "Content-Type": "application/json", "api-key": apiKey },
body: JSON.stringify(payload),
signal: abortSignal,
});
if (!res.ok || !res.body) {
throw new Error(`Azure request failed: ${res.status} ${res.statusText}`);
}
const reader = res.body.getReader();
const decoder = new TextDecoder("utf-8");
let buf = "";
// Snapshot state for this round
let accumulatedText = "";
// Key by toolCallId; if the delta doesn't include id initially, use a provisional id
const visibleToolCalls = new Map<string, VisibleToolCall>();
// Map streaming index -> toolCallId (to handle late IDs)
const indexToId = new Map<number, string>();
// Track finish reason
let finishReason: string | undefined;
// Helper: yield a full snapshot (text + all tool cards)
const yieldSnapshot = () => {
// Merge previously executed tool-calls (persisted) with current streaming ones
const merged = new Map<string, VisibleToolCall>([
...persistedToolCalls,
...visibleToolCalls,
]);
const toolParts = Array.from(merged.values()).map((c) => ({
type: "tool-call" as const,
toolCallId: c.id,
toolName: c.name,
args: c.args ?? {},
argsText: c.argsText,
...(c.result !== undefined ? { result: c.result } : {}),
...(c.isError !== undefined ? { isError: c.isError } : {}),
...(c.artifact !== undefined ? { artifact: c.artifact } : {}),
}));
// Show tool cards BEFORE assistant text
const parts = [
...toolParts,
...(accumulatedText ? [{ type: "text" as const, text: accumulatedText }] : []),
];
return { content: parts };
};
// 4) Read SSE stream
sse: while (true) {
const { done, value } = await reader.read();
if (done) break;
buf += decoder.decode(value, { stream: true });
const lines = buf.split(/\r?\n/);
buf = lines.pop() || "";
for (const line of lines) {
const parsed = parseSSELine(line);
if (!parsed) continue;
if (parsed.done) break sse;
const json = parsed.data;
const choice = json?.choices?.[0];
const delta = choice?.delta;
if (choice?.finish_reason) {
finishReason = choice.finish_reason;
}
if (delta?.content) {
accumulatedText += delta.content;
yield yieldSnapshot();
}
if (delta?.tool_calls) {
for (const tc of delta.tool_calls) {
const index: number = tc.index;
const maybeId = tc.id as string | undefined;
// Stabilize ID for this index
const existingId = indexToId.get(index);
let toolCallId: string;
if (!existingId) {
toolCallId = maybeId ?? `tool_${index}`;
indexToId.set(index, toolCallId);
} else {
toolCallId = existingId;
if (maybeId && maybeId !== existingId) {
// if later delta reveals the real id, migrate the entry
const existing = visibleToolCalls.get(existingId);
indexToId.set(index, maybeId);
if (existing) {
visibleToolCalls.delete(existingId);
toolCallId = maybeId;
visibleToolCalls.set(toolCallId, { ...existing, id: toolCallId });
} else {
toolCallId = maybeId;
}
}
}
const name = tc.function?.name ?? "";
const argsTextDelta = tc.function?.arguments ?? "";
const prev = visibleToolCalls.get(toolCallId) ?? {
id: toolCallId,
name,
argsText: "",
args: {},
};
const newArgsText = (prev.argsText ?? "") + argsTextDelta;
let parsedArgs = prev.args;
try {
parsedArgs = newArgsText ? JSON.parse(newArgsText) : {};
} catch {
// keep previous args until full JSON is available
}
visibleToolCalls.set(toolCallId, {
...prev,
name: name || prev.name,
argsText: newArgsText,
args: parsedArgs,
});
}
// Snapshot yield with all current tool cards
yield yieldSnapshot();
}
}
}
// 5) If tool_calls: execute tools -> yield updated cards -> add tool messages -> next round
if (finishReason === "tool_calls" && visibleToolCalls.size > 0) {
const calls = Array.from(visibleToolCalls.values());
// Execute tools (parallel or sequential). Wrap each call in a thunk so
// that sequential mode actually defers execution instead of starting
// every promise up front.
const tasks = calls.map((c) => () =>
  executeToolSafely({
    tools: context.tools ?? {},
    toolCallId: c.id,
    toolName: c.name,
    argsText: c.argsText,
    abortSignal,
  }).then((exec) => ({ call: c, exec })),
);
const execResults = executeInParallel
  ? await Promise.all(tasks.map((t) => t()))
  : await tasks.reduce<Promise<any[]>>(
      async (p, t) => [...(await p), await t()],
      Promise.resolve([]),
    );
// Update visible cards with results (keep cards visible)
for (const { call, exec } of execResults) {
const current = visibleToolCalls.get(call.id);
if (current) {
visibleToolCalls.set(call.id, {
...current,
result: exec.result,
isError: !!exec.isError,
});
}
}
// Snapshot yield after execution
yield yieldSnapshot();
// Persist executed tool-calls so they remain visible during the next round
persistedToolCalls = new Map<string, VisibleToolCall>([
...persistedToolCalls,
...visibleToolCalls,
]);
// Build assistant tool_calls and subsequent tool messages for next round
const assistantToolCalls: Array<{
id: string;
type: "function";
function: { name: string; arguments: string };
}> = calls.map((c) => ({
id: c.id,
type: "function" as const,
function: { name: c.name, arguments: c.argsText },
}));
const toolMessages: OAIMessage[] = execResults.map(({ call, exec }) => ({
role: "tool",
tool_call_id: call.id,
name: call.name,
content: typeof exec.result === "string" ? exec.result : JSON.stringify(exec.result),
}));
// Append into conversation and continue outer loop
conversation = [...conversation, { role: "assistant", tool_calls: assistantToolCalls }, ...toolMessages];
continue outer;
}
// 6) Otherwise we are done for this round (stop or no tools). Finalize text if any.
if (accumulatedText) {
// Optionally persist final text into conversation (not strictly required for UI)
conversation = [...conversation, { role: "assistant", content: accumulatedText }];
// Final snapshot should include persisted tool-call cards BEFORE text
yield {
content: [
...Array.from(persistedToolCalls.values()).map((c) => ({
type: "tool-call" as const,
toolCallId: c.id,
toolName: c.name,
args: c.args ?? {},
argsText: c.argsText,
...(c.result !== undefined ? { result: c.result } : {}),
...(c.isError !== undefined ? { isError: c.isError } : {}),
...(c.artifact !== undefined ? { artifact: c.artifact } : {}),
})),
{ type: "text" as const, text: accumulatedText },
],
};
}
break outer;
}
},
};
export function MyCompanyRuntimeProvider({ children }: { children: ReactNode }) {
const runtime = useLocalRuntime(MyCompanyAzureAdapter);
return <AssistantRuntimeProvider runtime={runtime}>{children}</AssistantRuntimeProvider>;
}
```
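Two design choices in this adapter are worth calling out: every yield is a full snapshot of the message (all tool cards plus the accumulated text) rather than a delta, which keeps the cards visible across rounds, and tool results are replayed into the conversation as role:"tool" messages so the next round of the loop has them in the model's context.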
-
Hello everyone, and thanks for this amazing project.
I am trying to wrap my head around a pure-frontend (browser only) assistant UX with tool calling. It should only stream responses from an external LLM and handle all tool calls from within the browser.
It was really easy to set up the basic UI and see my first Generative UI in action (registering tools, passing them to the LLM, seeing the tool UI being rendered as a result).
However, even after hours of trying and reading documentation, I have never seen `myTool.execute()` being called (see my `MyTool` and `MyToolUI` definitions).
I tried to reduce the issue to the most basic runtime imaginable, `MyRunTimeProvider` (a rough sketch of it appears at the end of this post): it simply yields static text and a tool call, taking any complexity with foreign APIs out of the picture. I can see the resulting messages just fine:
However, while the UI is shown, the tool has never been executed. I spent some hours reading through code and documentation, and it is hard for me to understand when this is even supposed to take place. I found `toolResultStream` to be the only place that actually invokes individual tools, but it only seems to be wired up to `DangerousInBrowserAdapter` from `@assistant-ui/react-edge`, which does not seem to overlap with my `LocalRuntime` at all. The documentation never mentions any specific steps needed to make tool calling fully functional; at one point it specifically mentions that tools are being "executed", so I'm not sure what to make of this. Is this a bug/regression, perhaps?
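For reference, the reduced runtime described above would look roughly like this. This is a sketch reconstructed from the description, not the exact code; the text, tool name, and IDs are placeholders.

```tsx
import { type ReactNode } from "react";
import {
  AssistantRuntimeProvider,
  useLocalRuntime,
  type ChatModelAdapter,
} from "@assistant-ui/react";

// Yields static text plus a tool-call part in a single step;
// the expectation is that the registered tool's execute() then runs.
const StaticAdapter: ChatModelAdapter = {
  async *run() {
    yield {
      content: [
        { type: "text" as const, text: "My static text" },
        {
          type: "tool-call" as const,
          toolCallId: "call_static_1", // placeholder id
          toolName: "MyTool", // name of the registered tool
          args: {},
        },
      ],
    };
  },
};

export function MyRunTimeProvider({ children }: { children: ReactNode }) {
  const runtime = useLocalRuntime(StaticAdapter);
  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {children}
    </AssistantRuntimeProvider>
  );
}
```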