
Human in the Loop (HITL)

Interrupt Based

Learn how to implement Human-in-the-Loop (HITL) using an interrupt-based flow.

This example demonstrates interrupt-based human-in-the-loop (HITL) in the CopilotKit Feature Viewer.

What is this?

LangGraph's interrupt flow provides an intuitive way to implement Human-in-the-loop workflows.

This guide will show you how to both use interrupt and how to integrate it with CopilotKit.

When should I use this?

Human-in-the-loop is a powerful way to build production-ready agent workflows. Keeping a human in the loop lets you verify the agent's decisions and steer it in the right direction.

Interrupt-based flows are an intuitive way to implement HITL. Instead of having a node await user input before or after its execution, a node can be interrupted in the middle of its execution to wait for user input. The trade-off is that the agent is not aware of the interaction; however, CopilotKit's SDKs provide helpers to alleviate this.
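Under the hood, interrupt pauses a node mid-execution and later resumes it with the human's answer. As a rough analogy (plain Python, not LangGraph or CopilotKit code), generators exhibit the same pause/resume contract:

```python
def chat_node():
    # Pause mid-node: yield the prompt, resume with the human's answer.
    name = yield "Before we start, what would you like to call me?"
    return f"You are a helpful assistant named {name}..."

node = chat_node()
prompt = next(node)          # the node runs until it "interrupts"
try:
    node.send("Ada")         # the human answers; the node resumes where it paused
except StopIteration as done:
    system_prompt = done.value

print(prompt)          # Before we start, what would you like to call me?
print(system_prompt)   # You are a helpful assistant named Ada...
```

The key property, in both the analogy and LangGraph, is that the node does not poll for input: it suspends exactly where it asked and picks up with the answer in hand.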

Implementation

Run and connect your agent

You'll need to run your agent and connect it to CopilotKit before proceeding.

If you don't already have CopilotKit and your agent connected, choose one of the following options:

Install the CopilotKit SDK

Any LangGraph agent can be used with CopilotKit. However, creating deep agentic experiences with CopilotKit requires our LangGraph SDK.

uv add copilotkit
poetry add copilotkit
pip install copilotkit --extra-index-url https://copilotkit.gateway.scarf.sh/simple/
conda install copilotkit -c copilotkit-channel
npm install @copilotkit/sdk-js

Set up your agent state

We're going to have the agent ask us to name it, so we'll need a state property to store the name.

agent.py
# ...
from copilotkit import CopilotKitState # extends MessagesState
# ...

# This is the state of the agent.
# It inherits the CopilotKitState properties from CopilotKit.
class AgentState(CopilotKitState):
    agent_name: str
agent-js/src/agent.ts
// ...
import { Annotation } from "@langchain/langgraph";
import { CopilotKitStateAnnotation } from "@copilotkit/sdk-js/langgraph";
// ...

// This is the state of the agent.
// It inherits the CopilotKitState properties from CopilotKit.
export const AgentStateAnnotation = Annotation.Root({
  agentName: Annotation<string>,
  ...CopilotKitStateAnnotation.spec,
});
export type AgentState = typeof AgentStateAnnotation.State;

Call interrupt in your LangGraph agent

Now we can call interrupt in our LangGraph agent.

By default in LangGraph, your agent will not be aware of the interrupt interaction.

If you want this behavior, see the section on it below.

agent.py
from langgraph.types import interrupt
from langchain_core.runnables import RunnableConfig
from langchain_core.messages import SystemMessage
from langchain_openai import ChatOpenAI
from copilotkit import CopilotKitState

# add the agent state definition from the previous step
class AgentState(CopilotKitState):
    agent_name: str

def chat_node(state: AgentState, config: RunnableConfig):
    if not state.get("agent_name"):
        # Interrupt and wait for the user to respond with a name
        state["agent_name"] = interrupt("Before we start, what would you like to call me?") 

    # Tell the agent its name
    system_message = SystemMessage(
        content=f"You are a helpful assistant named {state.get('agent_name')}..."
    )

    response = ChatOpenAI(model="gpt-4o").invoke(
        [system_message, *state["messages"]],
        config
    )

    return {
        **state,
        "messages": response,
    }
agent-js/src/agent.ts
import { interrupt, Annotation } from "@langchain/langgraph";
import { RunnableConfig } from "@langchain/core/runnables";
import { SystemMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";
import { CopilotKitStateAnnotation } from "@copilotkit/sdk-js/langgraph";

// add the agent state definition from the previous step
export const AgentStateAnnotation = Annotation.Root({
    agentName: Annotation<string>,
    ...CopilotKitStateAnnotation.spec,
});
export type AgentState = typeof AgentStateAnnotation.State;

async function chat_node(state: AgentState, config: RunnableConfig) {
    const agentName = state.agentName
    ?? interrupt("Before we start, what would you like to call me?"); 

    // Tell the agent its name
    const systemMessage = new SystemMessage({
        content: `You are a helpful assistant named ${agentName}...`,
    });

    const response = await new ChatOpenAI({ model: "gpt-4o" }).invoke(
        [systemMessage, ...state.messages],
        config
    );

    return {
        ...state,
        agentName,
        messages: response,
    };
}

Handle the interrupt in your frontend

At this point, your LangGraph agent's interrupt will be called. However, we currently have no handling for rendering or responding to the interrupt in the frontend.

To do this, we'll use the useLangGraphInterrupt hook, give it a component to render, and then call resolve with the user's response.

app/page.tsx
import { useLangGraphInterrupt } from "@copilotkit/react-core"; 
// ...

const YourMainContent = () => {
// ...
// styles omitted for brevity
useLangGraphInterrupt({
    render: ({ event, resolve }) => (
        <div>
            <p>{event.value}</p>
            <form onSubmit={(e) => {
                e.preventDefault();
                resolve((e.target as HTMLFormElement).response.value);
            }}>
                <input type="text" name="response" placeholder="Enter your response" />
                <button type="submit">Submit</button>
            </form>
        </div>
    )
});
// ...

return <div>{/* ... */}</div>
}

Give it a try!

Try talking to your agent; you'll see that it now pauses execution and waits for you to respond!

Advanced usage

Conditional UI execution

When an agent raises multiple interrupt events, multiple useLangGraphInterrupt hook calls in the UI can conflict with one another. For this reason, the hook accepts an enabled argument, which applies it conditionally:
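The selection logic that enabled implements can be illustrated in plain Python (the names and shapes below are hypothetical, not the CopilotKit API): each registered handler only fires for the interrupt type it declares.

```python
# Hypothetical stand-in for per-hook `enabled` checks: an interrupt event
# is dispatched to the first handler whose predicate accepts it.
def make_enabled(expected_type):
    return lambda event_value: event_value.get("type") == expected_type

handlers = [
    (make_enabled("ask"), lambda e: f"ASK: {e['content']}"),
    (make_enabled("approval"), lambda e: f"APPROVE: {e['content']}"),
]

def dispatch(event_value):
    for enabled, render in handlers:
        if enabled(event_value):
            return render(event_value)
    return None  # no handler enabled for this event

print(dispatch({"type": "ask", "content": "name?"}))     # ASK: name?
print(dispatch({"type": "approval", "content": "ok?"}))  # APPROVE: ok?
```

Without the predicates, both handlers would be candidates for every event; the type field is what makes the routing unambiguous.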

Define multiple interrupts

First, let's define two different interrupts. We will include a "type" property to differentiate them.

agent.py
from langgraph.types import interrupt
from langchain_core.runnables import RunnableConfig
from langchain_core.messages import SystemMessage
from langchain_openai import ChatOpenAI

# ... your full state definition

def chat_node(state: AgentState, config: RunnableConfig):

  state["approval"] = interrupt({ "type": "approval", "content": "please approve" }) 

  if not state.get("agent_name"):
    # Interrupt and wait for the user to respond with a name
    state["agent_name"] = interrupt({ "type": "ask", "content": "Before we start, what would you like to call me?" }) 

  # Tell the agent its name
  system_message = SystemMessage(
    content=f"You are a helpful assistant..."
  )

  response = ChatOpenAI(model="gpt-4o").invoke(
    [system_message, *state["messages"]],
    config
  )

  return {
    **state,
    "messages": response,
  }
agent-js/src/agent.ts
import { interrupt } from "@langchain/langgraph";
import { RunnableConfig } from "@langchain/core/runnables";
import { SystemMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";

// ... your full state definition

async function chat_node(state: AgentState, config: RunnableConfig) {
  state.approval = await interrupt({ type: "approval", content: "please approve" }); 

  if (!state.agentName) {
    state.agentName = await interrupt({ type: "ask", content: "Before we start, what would you like to call me?" }); 
  }

  // Tell the agent its name
  const systemMessage = new SystemMessage({
    content: `You are a helpful assistant...`,
  });

  const response = await new ChatOpenAI({ model: "gpt-4o" }).invoke(
    [systemMessage, ...state.messages],
    config
  );

  return {
    ...state,
    messages: response,
  };
}

Add multiple frontend handlers

With this differentiator in mind, we will add one handler for the "ask" type and one for the "approval" type. With two useLangGraphInterrupt hooks on our page, we can use the enabled property to activate each one at the right time:

app/page.tsx
import { useLangGraphInterrupt } from "@copilotkit/react-core"; 
// ...

const ApproveComponent = ({ content, onAnswer }: { content: string; onAnswer: (approved: boolean) => void }) => (
    // styles omitted for brevity
    <div>
        <h1>Do you approve?</h1>
        <button onClick={() => onAnswer(true)}>Approve</button>
        <button onClick={() => onAnswer(false)}>Reject</button>
    </div>
)

const AskComponent = ({ question, onAnswer }: { question: string; onAnswer: (answer: string) => void }) => (
// styles omitted for brevity
    <div>
        <p>{question}</p>
        <form onSubmit={(e) => {
            e.preventDefault();
            onAnswer((e.target as HTMLFormElement).response.value);
        }}>
            <input type="text" name="response" placeholder="Enter your response" />
            <button type="submit">Submit</button>
        </form>
    </div>
)

const YourMainContent = () => {
    // ...
    useLangGraphInterrupt({
        enabled: ({ eventValue }) => eventValue.type === 'ask',
        render: ({ event, resolve }) => (
            <AskComponent question={event.value.content} onAnswer={answer => resolve(answer)} />
        )
    });

    useLangGraphInterrupt({
        enabled: ({ eventValue }) => eventValue.type === 'approval',
        render: ({ event, resolve }) => (
            <ApproveComponent content={event.value.content} onAnswer={answer => resolve(answer)} />
        )
    });

    // ...
}

Preprocessing an interrupt and resolving it programmatically

When building a custom chat UI, some cases call for pre-processing the incoming value of an interrupt event, or even resolving it entirely without showing any UI. This can be achieved with the handler property, which is not required to return a React component.

The return value of the handler will be passed to the render method as the result argument.

app/page.tsx
import { useState } from "react";
import { useLangGraphInterrupt } from "@copilotkit/react-core";

// We will assume an interrupt event in the following shape
type Department = 'finance' | 'engineering' | 'admin'
interface AuthorizationInterruptEvent {
    type: 'auth',
    accessDepartment: Department,
}

const YourMainContent = () => {
    const [userEmail, setUserEmail] = useState('example@user.com')
    async function getUserByEmail(email: string): Promise<{ id: string; department: Department }> {
        // ... an implementation of user fetching
    }

    // ...
    // styles omitted for brevity
    useLangGraphInterrupt({
        handler: async ({ event, resolve }) => {
            const { department, id: userId } = await getUserByEmail(userEmail)
            if (event.value.accessDepartment === department || department === 'admin') {
                // Once the event is resolved here, the render method is skipped
                resolve({ code: 'AUTH_BY_DEPARTMENT' })
                return;
            }

            return { department, userId }
        },
        render: ({ result, event, resolve }) => (
            <div>
                <h1>Request for {event.value.type}</h1>
                <p>Members from {result.department} department cannot access this information</p>
                <p>You can request access from an administrator to continue.</p>
                <button
                    onClick={() => resolve({ code: 'REQUEST_AUTH', data: { department: result.department, userId: result.userId } })}
                >
                    Request Access
                </button>
                <button
                    onClick={() => resolve({ code: 'CANCEL' })}
                >
                    Cancel
                </button>
            </div>
        )
    });
    // ...

    return <div>{/* ... */}</div>
}
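As a rough mental model of this handler/render contract (plain Python, not the CopilotKit API, and with the department check simplified): the handler runs first and may resolve the interrupt itself, in which case the UI is never shown; otherwise whatever it returns is handed to the render step as its result.

```python
# Illustrative sketch of a handler -> render pipeline with early resolution.
def process_interrupt(event, handler, render):
    resolution = {}
    def resolve(value):
        resolution["value"] = value

    result = handler(event, resolve)
    if "value" not in resolution:      # handler did not resolve: show the UI
        render(event, result, resolve)
    return resolution.get("value")

def handler(event, resolve):
    if event["accessDepartment"] == "admin":
        resolve({"code": "AUTH_BY_DEPARTMENT"})  # authorized: skip the UI
        return
    return {"department": event["accessDepartment"]}  # passed to render as `result`

def render(event, result, resolve):
    # A real UI would display result["department"] and wait for a click;
    # here we simulate the user pressing "Cancel".
    resolve({"code": "CANCEL"})

print(process_interrupt({"accessDepartment": "admin"}, handler, render))
print(process_interrupt({"accessDepartment": "finance"}, handler, render))
```

The essential point the sketch captures: a single resolve callback is shared by both stages, so whichever stage calls it first answers the interrupt.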