Agents#

ragbits.agents.AgentOptions #

Bases: Options, Generic[LLMClientOptionsT]

Options for the agent run.

model_config class-attribute instance-attribute #

model_config = ConfigDict(extra='allow', arbitrary_types_allowed=True)

llm_options class-attribute instance-attribute #

llm_options: LLMClientOptionsT | None | NotGiven = NOT_GIVEN

The options for the LLM.

max_turns class-attribute instance-attribute #

max_turns: int | None | NotGiven = NOT_GIVEN

The maximum number of turns the agent can take. If NOT_GIVEN, it defaults to 10; if None, the agent can run indefinitely.

max_total_tokens class-attribute instance-attribute #

max_total_tokens: int | None | NotGiven = NOT_GIVEN

The maximum total number of tokens the agent can use. If NOT_GIVEN or None, no limit is applied.

max_prompt_tokens class-attribute instance-attribute #

max_prompt_tokens: int | None | NotGiven = NOT_GIVEN

The maximum number of prompt tokens the agent can use. If NOT_GIVEN or None, no limit is applied.

max_completion_tokens class-attribute instance-attribute #

max_completion_tokens: int | None | NotGiven = NOT_GIVEN

The maximum number of completion tokens the agent can use. If NOT_GIVEN or None, no limit is applied.

log_reasoning class-attribute instance-attribute #

log_reasoning: bool = False

Whether to log/persist reasoning traces for debugging and evaluation.

parallel_tool_calling class-attribute instance-attribute #

parallel_tool_calling: bool = False

Whether to run tools concurrently when the LLM requests multiple tool calls. Synchronous tools are run in separate threads via asyncio.to_thread.
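The token and turn limits above follow a three-state convention: NOT_GIVEN falls back to the library default, None disables the limit, and an integer is an explicit cap. A minimal sketch with a plain sentinel class illustrates the idea; this is an illustrative stand-in, not ragbits' actual NotGiven implementation:

```python
class NotGiven:
    """Sentinel distinguishing 'never set' from an explicit None (illustrative)."""

    def __repr__(self) -> str:
        return "NOT_GIVEN"

NOT_GIVEN = NotGiven()

def resolve_max_turns(max_turns) -> float:
    # NOT_GIVEN -> library default of 10; None -> unlimited; int -> as given
    if isinstance(max_turns, NotGiven):
        return 10
    if max_turns is None:
        return float("inf")
    return max_turns

print(resolve_max_turns(NOT_GIVEN))  # 10
print(resolve_max_turns(None))       # inf
print(resolve_max_turns(3))          # 3
```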

dict #

dict() -> dict[str, Any]

Creates a dictionary representation of the Options instance. If a value is None, it will be replaced with a provider-specific not-given sentinel.

RETURNS DESCRIPTION
dict[str, Any]

A dictionary representation of the Options instance.

Source code in packages/ragbits-core/src/ragbits/core/options.py
def dict(self) -> dict[str, Any]:  # type: ignore # mypy complains about overriding BaseModel.dict
    """
    Creates a dictionary representation of the Options instance.
    If a value is None, it will be replaced with a provider-specific not-given sentinel.

    Returns:
        A dictionary representation of the Options instance.
    """
    options = self.model_dump()

    return {
        key: self._not_given if value is None or isinstance(value, NotGiven) else value
        for key, value in options.items()
    }
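The replacement logic in dict() can be mirrored with a stdlib-only stand-in, where PROVIDER_NOT_GIVEN is a placeholder for whatever sentinel a concrete LLM provider expects:

```python
PROVIDER_NOT_GIVEN = object()  # placeholder for a provider-specific sentinel

class NotGiven:
    """Illustrative stand-in for ragbits' NotGiven marker."""

def to_provider_dict(options: dict) -> dict:
    # Mirror Options.dict(): None and NotGiven values collapse to the sentinel
    return {
        key: PROVIDER_NOT_GIVEN if value is None or isinstance(value, NotGiven) else value
        for key, value in options.items()
    }

result = to_provider_dict({"temperature": 0.2, "max_tokens": None, "seed": NotGiven()})
print(result["temperature"])                       # 0.2
print(result["max_tokens"] is PROVIDER_NOT_GIVEN)  # True
```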

ragbits.agents.Agent #

Agent(llm: LLM[LLMClientOptionsT], name: str | None = None, description: str | None = None, prompt: str | type[Prompt[PromptInputT, PromptOutputT]] | Prompt[PromptInputT, PromptOutputT] | None = None, *, history: ChatFormat | None = None, keep_history: bool = False, tools: list[Callable | Tool | Agent] | None = None, mcp_servers: list[MCPServer] | None = None, hooks: list[Hook] | None = None, default_options: AgentOptions[LLMClientOptionsT] | None = None)

Bases: ConfigurableComponent[AgentOptions[LLMClientOptionsT]], Generic[LLMClientOptionsT, PromptInputT, PromptOutputT]

Agent class that orchestrates the LLM and the prompt, and can call tools.

Current implementation is highly experimental, and the API is subject to change.

Initialize the agent instance.

PARAMETER DESCRIPTION
llm

The LLM to run the agent.

TYPE: LLM[LLMClientOptionsT]

name

Optional name of the agent. Used to identify the agent instance.

TYPE: str | None DEFAULT: None

description

Optional description of the agent.

TYPE: str | None DEFAULT: None

prompt

The prompt for the agent. Can be:

- str: A string prompt used as the system message when combined with string input, or as the user message when no input is provided during run().
- type[Prompt]: A structured prompt class that will be instantiated with the input.
- Prompt: An already instantiated prompt instance.
- None: No predefined prompt; the input provided to run() is used as the complete prompt.

TYPE: str | type[Prompt[PromptInputT, PromptOutputT]] | Prompt[PromptInputT, PromptOutputT] | None DEFAULT: None

history

The history of the agent.

TYPE: ChatFormat | None DEFAULT: None

keep_history

Whether to keep the history of the agent.

TYPE: bool DEFAULT: False

tools

The tools available to the agent. Each entry can be one of:

- Callable: a function with typed parameters and a docstring that will be sent to the LLM. The callable's output is sent to the LLM as the tool result; to return additional values that are not passed to the LLM, use ToolReturn. If the callable returns a generator or async generator, the yielded values are also yielded from the streaming agent. The exception is a ToolReturn, which is used to send the result to the LLM and is expected to be yielded only once.
- Agent: another Agent instance, with a name and description.
- Tool: a raw Tool instance.

TYPE: list[Callable | Tool | Agent] | None DEFAULT: None

mcp_servers

The MCP servers available to the agent.

TYPE: list[MCPServer] | None DEFAULT: None

hooks

List of tool hooks to register for tool lifecycle events.

TYPE: list[Hook] | None DEFAULT: None

default_options

The default options for the agent run.

TYPE: AgentOptions[LLMClientOptionsT] | None DEFAULT: None

Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
def __init__(
    self,
    llm: LLM[LLMClientOptionsT],
    name: str | None = None,
    description: str | None = None,
    prompt: str | type[Prompt[PromptInputT, PromptOutputT]] | Prompt[PromptInputT, PromptOutputT] | None = None,
    *,
    history: ChatFormat | None = None,
    keep_history: bool = False,
    tools: list["Callable | Tool | Agent"] | None = None,
    mcp_servers: list[MCPServer] | None = None,
    hooks: list[Hook] | None = None,
    default_options: AgentOptions[LLMClientOptionsT] | None = None,
) -> None:
    """
    Initialize the agent instance.

    Args:
        llm: The LLM to run the agent.
        name: Optional name of the agent. Used to identify the agent instance.
        description: Optional description of the agent.
        prompt: The prompt for the agent. Can be:
            - str: A string prompt that will be used as system message when combined with string input,
                or as the user message when no input is provided during run().
            - type[Prompt]: A structured prompt class that will be instantiated with the input.
            - Prompt: Already instantiated prompt instance
            - None: No predefined prompt. The input provided to run() will be used as the complete prompt.
        history: The history of the agent.
        keep_history: Whether to keep the history of the agent.
        tools: The tools available to the agent. Can be one of:
            * Callable - a function with typing of parameters and a docstring that will be sent to the LLM
                The output from the callable will be sent to the LLM as a result of a tool.
                To specify additional values to return, that are not passed to the LLM, use ToolReturn.
                If this callable returns a generator or async generator, the yielded values are yielded from the
                streaming agent as well. The exception is a ToolReturn, which is used to send the result to the LLM.
                The ToolReturn is expected to be yielded only once.
            * Agent - another instance of an Agent, with name and description
            * Tool - raw instance of a Tool
        mcp_servers: The MCP servers available to the agent.
        hooks: List of tool hooks to register for tool lifecycle events.
        default_options: The default options for the agent run.
    """
    super().__init__(default_options)
    self.id = uuid.uuid4().hex[:8]
    self.llm = llm
    self.prompt = prompt
    self.name = name
    self.description = description
    self.tools = []
    for tool in tools or []:
        if isinstance(tool, Agent):
            self.tools.append(Tool.from_agent(tool))
        elif isinstance(tool, Tool):
            self.tools.append(tool)
        elif callable(tool):
            self.tools.append(Tool.from_callable(tool))
        else:
            raise ValueError(f"Unsupported type of a tool: {type(tool)}. Should be a Callable, Agent, or Tool")
    self.mcp_servers = mcp_servers or []
    self.history = history or []
    self.keep_history = keep_history
    self.hook_manager: HookManager[LLMClientOptionsT, PromptInputT, PromptOutputT] = HookManager(hooks)

    if getattr(self, "system_prompt", None) and not getattr(self, "input_type", None):
        raise ValueError(
            f"Agent {type(self).__name__} defines a system_prompt but has no input_type. "
            "Use Agent.prompt decorator to properly assign it."
        )

default_options instance-attribute #

default_options: OptionsT = default_options or options_cls()

options_cls class-attribute instance-attribute #

options_cls: type[AgentOptions] = AgentOptions

default_module class-attribute #

default_module: ModuleType | None = agents

configuration_key class-attribute #

configuration_key: str = 'agent'

user_prompt class-attribute #

user_prompt: str = '{{ input }}'

system_prompt class-attribute #

system_prompt: str | None = None

input_type class-attribute instance-attribute #

input_type: PromptInputT | None = None

prompt_cls class-attribute #

prompt_cls: type[Prompt] | None = None

id instance-attribute #

id = uuid.uuid4().hex[:8]

llm instance-attribute #

llm = llm

prompt instance-attribute #

prompt = prompt

name instance-attribute #

name = name

description instance-attribute #

description = description

tools instance-attribute #

tools = []

mcp_servers instance-attribute #

mcp_servers = mcp_servers or []

history instance-attribute #

history = history or []

keep_history instance-attribute #

keep_history = keep_history

hook_manager instance-attribute #

hook_manager: HookManager[LLMClientOptionsT, PromptInputT, PromptOutputT] = HookManager(hooks)

subclass_from_config classmethod #

subclass_from_config(config: ObjectConstructionConfig) -> Self

Initializes the class with the provided configuration. May return a subclass of the class, if requested by the configuration.

PARAMETER DESCRIPTION
config

A model containing configuration details for the class.

TYPE: ObjectConstructionConfig

RETURNS DESCRIPTION
Self

An instance of the class initialized with the provided configuration.

RAISES DESCRIPTION
InvalidConfigError

The class can't be found or is not a subclass of the current class.

Source code in packages/ragbits-core/src/ragbits/core/utils/config_handling.py
@classmethod
def subclass_from_config(cls, config: ObjectConstructionConfig) -> Self:
    """
    Initializes the class with the provided configuration. May return a subclass of the class,
    if requested by the configuration.

    Args:
        config: A model containing configuration details for the class.

    Returns:
        An instance of the class initialized with the provided configuration.

    Raises:
        InvalidConfigError: The class can't be found or is not a subclass of the current class.
    """
    subclass = import_by_path(config.type, cls.default_module)
    if not issubclass(subclass, cls):
        raise InvalidConfigError(f"{subclass} is not a subclass of {cls}")

    return subclass.from_config(config.config)
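import_by_path itself is defined elsewhere in ragbits-core; a simplified stand-in using importlib shows the resolve-then-verify pattern used above (the `"collections:OrderedDict"` path is just a convenient stdlib example):

```python
import importlib

def import_by_path(path: str):
    """Resolve a 'module.submodule:name' path to an object (simplified stand-in)."""
    module_path, _, name = path.partition(":")
    module = importlib.import_module(module_path)
    return getattr(module, name)

# OrderedDict is a subclass of dict, so a subclass check like the one above passes
cls = import_by_path("collections:OrderedDict")
assert issubclass(cls, dict)
instance = cls(a=1)
```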

subclass_from_factory classmethod #

subclass_from_factory(factory_path: str) -> Self

Creates the class using the provided factory function. May return a subclass of the class, if requested by the factory. Supports both synchronous and asynchronous factory functions.

PARAMETER DESCRIPTION
factory_path

A string representing the path to the factory function in the format of "module.submodule:factory_name".

TYPE: str

RETURNS DESCRIPTION
Self

An instance of the class initialized with the provided factory function.

RAISES DESCRIPTION
InvalidConfigError

The factory can't be found or the object returned is not a subclass of the current class.

Source code in packages/ragbits-core/src/ragbits/core/utils/config_handling.py
@classmethod
def subclass_from_factory(cls, factory_path: str) -> Self:
    """
    Creates the class using the provided factory function. May return a subclass of the class,
    if requested by the factory. Supports both synchronous and asynchronous factory functions.

    Args:
        factory_path: A string representing the path to the factory function
            in the format of "module.submodule:factory_name".

    Returns:
        An instance of the class initialized with the provided factory function.

    Raises:
        InvalidConfigError: The factory can't be found or the object returned
            is not a subclass of the current class.
    """
    factory = import_by_path(factory_path, cls.default_module)

    if asyncio.iscoroutinefunction(factory):
        try:
            loop = asyncio.get_running_loop()
            obj = asyncio.run_coroutine_threadsafe(factory(), loop).result()
        except RuntimeError:
            obj = asyncio.run(factory())
    else:
        obj = factory()

    if not isinstance(obj, cls):
        raise InvalidConfigError(f"The object returned by factory {factory_path} is not an instance of {cls}")

    return obj
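The sync/async dispatch above reduces to a small helper. This sketch simplifies by assuming no event loop is already running, so asyncio.run drives async factories directly; the factory names are hypothetical:

```python
import asyncio

def call_factory(factory):
    """Invoke a factory that may be sync or async (simplified sketch)."""
    if asyncio.iscoroutinefunction(factory):
        # No running loop here, so asyncio.run can drive the coroutine
        return asyncio.run(factory())
    return factory()

def sync_factory():
    return {"kind": "sync"}

async def async_factory():
    return {"kind": "async"}

print(call_factory(sync_factory)["kind"])   # sync
print(call_factory(async_factory)["kind"])  # async
```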

preferred_subclass classmethod #

preferred_subclass(config: CoreConfig, factory_path_override: str | None = None, yaml_path_override: Path | None = None) -> Self

Tries to create an instance by looking at the project's component preferences, either from YAML or from a factory. Optional overrides for both can be supplied and take higher precedence.

PARAMETER DESCRIPTION
config

The CoreConfig instance containing preferred factory and configuration details.

TYPE: CoreConfig

factory_path_override

A string representing the path to the factory function in the format of "module.submodule:factory_name".

TYPE: str | None DEFAULT: None

yaml_path_override

A string representing the path to the YAML file containing the Ragstack instance configuration.

TYPE: Path | None DEFAULT: None

RAISES DESCRIPTION
InvalidConfigError

If the default factory or configuration can't be found.

Source code in packages/ragbits-core/src/ragbits/core/utils/config_handling.py
@classmethod
def preferred_subclass(
    cls, config: CoreConfig, factory_path_override: str | None = None, yaml_path_override: Path | None = None
) -> Self:
    """
    Tries to create an instance by looking at the project's component preferences, either from YAML
    or from a factory. Optional overrides for both can be supplied and take higher precedence.

    Args:
        config: The CoreConfig instance containing preferred factory and configuration details.
        factory_path_override: A string representing the path to the factory function
            in the format of "module.submodule:factory_name".
        yaml_path_override: A string representing the path to the YAML file containing
            the Ragstack instance configuration.

    Raises:
        InvalidConfigError: If the default factory or configuration can't be found.
    """
    if yaml_path_override:
        preferences = get_config_from_yaml(yaml_path_override)
        if type_config := preferences.get(cls.configuration_key):
            return cls.subclass_from_config(ObjectConstructionConfig.model_validate(type_config))

    if factory_path_override:
        return cls.subclass_from_factory(factory_path_override)

    if preferred_factory := config.component_preference_factories.get(cls.configuration_key):
        return cls.subclass_from_factory(preferred_factory)

    if preferred_config := config.preferred_instances_config.get(cls.configuration_key):
        return cls.subclass_from_config(ObjectConstructionConfig.model_validate(preferred_config))

    raise NoPreferredConfigError(f"Could not find preferred factory or configuration for {cls.configuration_key}")
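The lookup order above (YAML override, then factory override, then the project's preferred factory, then its preferred config) can be sketched as a plain resolution function; the `"pkg.mod:make_agent"` path is hypothetical:

```python
def resolve(yaml_override=None, factory_override=None,
            preferred_factory=None, preferred_config=None):
    """Mirror the precedence order: explicit overrides first, then project preferences."""
    if yaml_override is not None:
        return ("yaml", yaml_override)
    if factory_override is not None:
        return ("factory", factory_override)
    if preferred_factory is not None:
        return ("preferred_factory", preferred_factory)
    if preferred_config is not None:
        return ("preferred_config", preferred_config)
    raise LookupError("no preferred factory or configuration found")

print(resolve(factory_override="pkg.mod:make_agent"))
# ('factory', 'pkg.mod:make_agent')
print(resolve(preferred_config={"type": "Agent"}))
# ('preferred_config', {'type': 'Agent'})
```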

from_config classmethod #

from_config(config: dict[str, Any]) -> Self

Initializes the class with the provided configuration.

PARAMETER DESCRIPTION
config

A dictionary containing configuration details for the class.

TYPE: dict[str, Any]

RETURNS DESCRIPTION
Self

An instance of the class initialized with the provided configuration.

Source code in packages/ragbits-core/src/ragbits/core/utils/config_handling.py
@classmethod
def from_config(cls, config: dict[str, Any]) -> Self:
    """
    Initializes the class with the provided configuration.

    Args:
        config: A dictionary containing configuration details for the class.

    Returns:
        An instance of the class initialized with the provided configuration.
    """
    default_options = config.pop("default_options", None)
    options = cls.options_cls(**default_options) if default_options else None
    return cls(**config, default_options=options)

run async #

run(input: str | PromptInputT | None = None, options: AgentOptions[LLMClientOptionsT] | None = None, context: AgentRunContext | None = None, tool_choice: ToolChoice | None = None) -> AgentResult[PromptOutputT]

Run the agent. This method is experimental; its inputs and outputs may change in the future.

PARAMETER DESCRIPTION
input

The input for the agent run. Can be:

- str: A string input that will be used as the user message.
- PromptInputT: Structured input for use with structured prompt classes.
- None: No input; only valid when a string prompt was provided during initialization.

TYPE: str | PromptInputT | None DEFAULT: None

options

The options for the agent run.

TYPE: AgentOptions[LLMClientOptionsT] | None DEFAULT: None

context

The context for the agent run.

TYPE: AgentRunContext | None DEFAULT: None

tool_choice

Controls which tool is used on the first call. Can be one of:

- "auto": let the model decide whether a tool call is needed
- "none": do not call any tool
- "required": enforce tool usage (the model decides which one)
- Callable: one of the provided tools

TYPE: ToolChoice | None DEFAULT: None

RETURNS DESCRIPTION
AgentResult[PromptOutputT]

The result of the agent run.

RAISES DESCRIPTION
AgentToolDuplicateError

If the tool names are duplicated.

AgentToolNotSupportedError

If the selected tool type is not supported.

AgentToolNotAvailableError

If the selected tool is not available.

AgentInvalidPromptInputError

If the prompt/input combination is invalid.

AgentMaxTurnsExceededError

If the maximum number of turns is exceeded.

Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
async def run(
    self,
    input: str | PromptInputT | None = None,
    options: AgentOptions[LLMClientOptionsT] | None = None,
    context: AgentRunContext | None = None,
    tool_choice: ToolChoice | None = None,
) -> AgentResult[PromptOutputT]:
    """
    Run the agent. The method is experimental, inputs and outputs may change in the future.

    Args:
        input: The input for the agent run. Can be:
            - str: A string input that will be used as user message.
            - PromptInputT: Structured input for use with structured prompt classes.
            - None: No input. Only valid when a string prompt was provided during initialization.
        options: The options for the agent run.
        context: The context for the agent run.
        tool_choice: Controls which tool is used on the first call. Can be one of:
            - "auto": let the model decide whether a tool call is needed
            - "none": do not call any tool
            - "required": enforce tool usage (the model decides which one)
            - Callable: one of the provided tools

    Returns:
        The result of the agent run.

    Raises:
        AgentToolDuplicateError: If the tool names are duplicated.
        AgentToolNotSupportedError: If the selected tool type is not supported.
        AgentToolNotAvailableError: If the selected tool is not available.
        AgentInvalidPromptInputError: If the prompt/input combination is invalid.
        AgentMaxTurnsExceededError: If the maximum number of turns is exceeded.
    """
    if context is None:
        context = AgentRunContext()

    input = cast(PromptInputT, input)
    merged_options = (self.default_options | options) if options else self.default_options

    # Execute PRE_RUN hooks
    input = cast(
        PromptInputT,
        await self.hook_manager.execute_pre_run(
            _input=input,
            options=merged_options,
            context=context,
        ),
    )

    # Run the agent
    result = await self._run_internal(input, merged_options, context, tool_choice)

    # Execute POST_RUN hooks
    return await self.hook_manager.execute_post_run(
        result=result,
        options=merged_options,
        context=context,
    )

run_streaming #

run_streaming(input: str | PromptInputT | None = None, options: AgentOptions[LLMClientOptionsT] | None = None, context: AgentRunContext | None = None, tool_choice: ToolChoice | None = None) -> AgentResultStreaming

Run the agent in streaming mode. This method returns an AgentResultStreaming object that can be iterated over asynchronously. After the loop completes, all items are available under the same attribute names as in the AgentResult class.

PARAMETER DESCRIPTION
input

The input for the agent run.

TYPE: str | PromptInputT | None DEFAULT: None

options

The options for the agent run.

TYPE: AgentOptions[LLMClientOptionsT] | None DEFAULT: None

context

The context for the agent run.

TYPE: AgentRunContext | None DEFAULT: None

tool_choice

Controls which tool is used on the first call. Can be one of:

- "auto": let the model decide whether a tool call is needed
- "none": do not call any tool
- "required": enforce tool usage (the model decides which one)
- Callable: one of the provided tools

TYPE: ToolChoice | None DEFAULT: None

RETURNS DESCRIPTION
AgentResultStreaming

A StreamingResult object for iteration and collection.

RAISES DESCRIPTION
AgentToolDuplicateError

If the tool names are duplicated.

AgentToolNotSupportedError

If the selected tool type is not supported.

AgentToolNotAvailableError

If the selected tool is not available.

AgentInvalidPromptInputError

If the prompt/input combination is invalid.

AgentMaxTurnsExceededError

If the maximum number of turns is exceeded.

Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
def run_streaming(
    self,
    input: str | PromptInputT | None = None,
    options: AgentOptions[LLMClientOptionsT] | None = None,
    context: AgentRunContext | None = None,
    tool_choice: ToolChoice | None = None,
) -> AgentResultStreaming:
    """
    This method returns an `AgentResultStreaming` object that can be asynchronously
    iterated over. After the loop completes, all items are available under the same names as in the `AgentResult` class.

    Args:
        input: The input for the agent run.
        options: The options for the agent run.
        context: The context for the agent run.
        tool_choice: Controls which tool is used on the first call. Can be one of:
            - "auto": let the model decide whether a tool call is needed
            - "none": do not call any tool
            - "required": enforce tool usage (the model decides which one)
            - Callable: one of the provided tools

    Returns:
        A `StreamingResult` object for iteration and collection.

    Raises:
        AgentToolDuplicateError: If the tool names are duplicated.
        AgentToolNotSupportedError: If the selected tool type is not supported.
        AgentToolNotAvailableError: If the selected tool is not available.
        AgentInvalidPromptInputError: If the prompt/input combination is invalid.
        AgentMaxTurnsExceededError: If the maximum number of turns is exceeded.
    """
    if context is None:
        context = AgentRunContext()

    context.register_agent(cast(Agent[Any, Any, str], self))

    input = cast(PromptInputT, input)
    merged_options = (self.default_options | options) if options else self.default_options

    generator = self._stream_internal(
        input=input,
        options=merged_options,
        context=context,
        tool_choice=tool_choice,
    )

    # Apply ON_EVENT hooks if any registered
    if self.hook_manager.get_hooks(EventType.ON_EVENT):
        generator = self.hook_manager.execute_on_event(generator)

    # Apply POST_RUN hooks wrapper if any registered
    if self.hook_manager.get_hooks(EventType.POST_RUN):
        generator = self._run_streaming_with_hooks(generator, merged_options, context)

    return AgentResultStreaming(generator)
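When ON_EVENT hooks are registered, the stream is wrapped so each event is observed before being re-yielded. The observe-and-re-yield pattern can be sketched with a plain async generator; this is illustrative, not the actual HookManager API:

```python
import asyncio

async def events():
    # Stand-in for the agent's internal event stream
    for token in ("Hel", "lo"):
        yield token

async def with_on_event_hook(gen, hook):
    # Observe each event via the hook, then pass it downstream unchanged
    async for item in gen:
        hook(item)
        yield item

async def main():
    seen = []
    collected = [item async for item in with_on_event_hook(events(), seen.append)]
    return seen, collected

seen, collected = asyncio.run(main())
print(collected)  # ['Hel', 'lo']
```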

get_agent_card async #

get_agent_card(name: str, description: str, version: str = '0.0.0', host: str = '127.0.0.1', port: int = 8000, protocol: str = 'http', default_input_modes: list[str] | None = None, default_output_modes: list[str] | None = None, capabilities: AgentCapabilities | None = None, skills: list[AgentSkill] | None = None) -> AgentCard

Create an AgentCard that encapsulates metadata about the agent, such as its name, version, description, network location, supported input/output modes, capabilities, and skills.

PARAMETER DESCRIPTION
name

Human-readable name of the agent.

TYPE: str

description

A brief description of the agent.

TYPE: str

version

Version string of the agent. Defaults to "0.0.0".

TYPE: str DEFAULT: '0.0.0'

host

Hostname or IP where the agent will be served. Defaults to "127.0.0.1".

TYPE: str DEFAULT: '127.0.0.1'

port

Port number on which the agent listens. Defaults to 8000.

TYPE: int DEFAULT: 8000

protocol

URL scheme (e.g. "http" or "https"). Defaults to "http".

TYPE: str DEFAULT: 'http'

default_input_modes

List of input content modes supported by the agent. Defaults to ["text"].

TYPE: list[str] | None DEFAULT: None

default_output_modes

List of output content modes supported. Defaults to ["text"].

TYPE: list[str] | None DEFAULT: None

capabilities

Agent capabilities; if None, defaults to empty capabilities.

TYPE: AgentCapabilities | None DEFAULT: None

skills

List of AgentSkill objects representing the agent's skills. If None, attempts to extract skills from the agent's registered tools.

TYPE: list[AgentSkill] | None DEFAULT: None

RETURNS DESCRIPTION
AgentCard

An A2A-compliant agent descriptor including URL and capabilities.

Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
@requires_dependencies(["a2a.types"], "a2a")
async def get_agent_card(
    self,
    name: str,
    description: str,
    version: str = "0.0.0",
    host: str = "127.0.0.1",
    port: int = 8000,
    protocol: str = "http",
    default_input_modes: list[str] | None = None,
    default_output_modes: list[str] | None = None,
    capabilities: "AgentCapabilities | None" = None,
    skills: list["AgentSkill"] | None = None,
) -> "AgentCard":
    """
    Create an AgentCard that encapsulates metadata about the agent,
    such as its name, version, description, network location, supported input/output modes,
    capabilities, and skills.

    Args:
        name: Human-readable name of the agent.
        description: A brief description of the agent.
        version: Version string of the agent. Defaults to "0.0.0".
        host: Hostname or IP where the agent will be served. Defaults to "127.0.0.1".
        port: Port number on which the agent listens. Defaults to 8000.
        protocol: URL scheme (e.g. "http" or "https"). Defaults to "http".
        default_input_modes: List of input content modes supported by the agent. Defaults to ["text"].
        default_output_modes: List of output content modes supported. Defaults to ["text"].
        capabilities: Agent capabilities; if None, defaults to empty capabilities.
        skills: List of AgentSkill objects representing the agent's skills.
            If None, attempts to extract skills from the agent's registered tools.

    Returns:
        An A2A-compliant agent descriptor including URL and capabilities.
    """
    return AgentCard(
        name=name,
        version=version,
        description=description,
        url=f"{protocol}://{host}:{port}",
        default_input_modes=default_input_modes or ["text"],
        default_output_modes=default_output_modes or ["text"],
        skills=skills or await self._extract_agent_skills(),
        capabilities=capabilities or AgentCapabilities(),
    )

to_pydantic_ai #

to_pydantic_ai() -> Agent

Convert ragbits agent instance into a pydantic_ai.Agent representation.

RETURNS DESCRIPTION
PydanticAIAgent

The equivalent Pydantic-based agent configuration.

TYPE: Agent

RAISES DESCRIPTION
ValueError

If the prompt is not a string or a Prompt instance.

Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
@requires_dependencies("pydantic_ai")
def to_pydantic_ai(self) -> "PydanticAIAgent":
    """
    Convert ragbits agent instance into a `pydantic_ai.Agent` representation.

    Returns:
        PydanticAIAgent: The equivalent Pydantic-based agent configuration.

    Raises:
        ValueError: If the `prompt` is not a string or a `Prompt` instance.
    """
    mcp_servers: list[mcp.MCPServerStdio | mcp.MCPServerHTTP] = []

    if not self.prompt:
        raise ValueError("Prompt is required but was None.")

    if isinstance(self.prompt, str):
        system_prompt = self.prompt
    else:
        if not self.prompt.system_prompt:
            raise ValueError("System prompt is required but was None.")
        system_prompt = self.prompt.system_prompt

    for mcp_server in self.mcp_servers:
        if isinstance(mcp_server, MCPServerStdio):
            mcp_servers.append(
                mcp.MCPServerStdio(
                    command=mcp_server.params.command, args=mcp_server.params.args, env=mcp_server.params.env
                )
            )
        elif isinstance(mcp_server, MCPServerStreamableHttp):
            timeout = mcp_server.params["timeout"]
            sse_timeout = mcp_server.params["sse_read_timeout"]

            mcp_servers.append(
                mcp.MCPServerHTTP(
                    url=mcp_server.params["url"],
                    headers=mcp_server.params["headers"],
                    timeout=timeout.total_seconds() if isinstance(timeout, timedelta) else timeout,
                    sse_read_timeout=sse_timeout.total_seconds()
                    if isinstance(sse_timeout, timedelta)
                    else sse_timeout,
                )
            )
    return PydanticAIAgent(
        model=self.llm.model_name,
        system_prompt=system_prompt,
        tools=[tool.to_pydantic_ai() for tool in self.tools],
        mcp_servers=mcp_servers,
    )
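to_pydantic_ai normalizes timeouts that may arrive either as a timedelta or as a plain number of seconds. The conversion applied above reduces to:

```python
from datetime import timedelta

def to_seconds(value):
    """Normalize a timeout that may be a timedelta or a plain number of seconds."""
    return value.total_seconds() if isinstance(value, timedelta) else value

print(to_seconds(timedelta(minutes=2)))  # 120.0
print(to_seconds(30))                    # 30
```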

from_pydantic_ai classmethod #

from_pydantic_ai(pydantic_ai_agent: Agent) -> Self

Construct an agent instance from a pydantic_ai.Agent representation.

PARAMETER DESCRIPTION
pydantic_ai_agent

A Pydantic-based agent configuration.

TYPE: Agent

RETURNS DESCRIPTION
Self

An instance of the agent class initialized from the Pydantic representation.

Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
@classmethod
@requires_dependencies("pydantic_ai")
def from_pydantic_ai(cls, pydantic_ai_agent: "PydanticAIAgent") -> Self:
    """
    Construct an agent instance from a `pydantic_ai.Agent` representation.

    Args:
        pydantic_ai_agent: A Pydantic-based agent configuration.

    Returns:
        An instance of the agent class initialized from the Pydantic representation.
    """
    mcp_servers: list[MCPServerStdio | MCPServerStreamableHttp] = []
    for mcp_server in pydantic_ai_agent._mcp_servers:
        if isinstance(mcp_server, mcp.MCPServerStdio):
            mcp_servers.append(
                MCPServerStdio(
                    params={
                        "command": mcp_server.command,
                        "args": list(mcp_server.args),
                        "env": mcp_server.env or {},
                    }
                )
            )
        elif isinstance(mcp_server, mcp.MCPServerHTTP):
            headers = mcp_server.headers or {}

            mcp_servers.append(
                MCPServerStreamableHttp(
                    params={
                        "url": mcp_server.url,
                        "headers": {str(k): str(v) for k, v in headers.items()},
                        "sse_read_timeout": mcp_server.sse_read_timeout,
                        "timeout": mcp_server.timeout,
                    }
                )
            )

    if not pydantic_ai_agent.model:
        raise ValueError("Missing LLM in `pydantic_ai.Agent` instance")
    elif isinstance(pydantic_ai_agent.model, str):
        model_name = pydantic_ai_agent.model
    else:
        model_name = pydantic_ai_agent.model.model_name

    return cls(
        llm=LiteLLM(model_name=model_name),  # type: ignore[arg-type]
        prompt="\n".join(pydantic_ai_agent._system_prompts),
        tools=[tool.function for _, tool in pydantic_ai_agent._function_tools.items()],
        mcp_servers=cast(list[MCPServer], mcp_servers),
    )
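The model-name resolution above can be sketched in isolation. `resolve_model_name` below is a hypothetical helper that mirrors how `from_pydantic_ai` handles the `model` field (reject a missing model, accept a plain string, otherwise read `model_name`):

```python
def resolve_model_name(model: object) -> str:
    # Mirrors from_pydantic_ai: reject a missing model, accept a plain
    # string, otherwise fall back to the object's `model_name` attribute.
    if not model:
        raise ValueError("Missing LLM in `pydantic_ai.Agent` instance")
    if isinstance(model, str):
        return model
    return model.model_name  # type: ignore[attr-defined]


print(resolve_model_name("openai:gpt-4o"))  # openai:gpt-4o
```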

to_tool #

to_tool(name: str | None = None, description: str | None = None) -> Tool

Convert the agent into a Tool instance.

PARAMETER DESCRIPTION
name

Optional override for the tool name.

TYPE: str | None DEFAULT: None

description

Optional override for the tool description.

TYPE: str | None DEFAULT: None

RETURNS DESCRIPTION
Tool

Tool instance representing the agent.

Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
def to_tool(self, name: str | None = None, description: str | None = None) -> Tool:
    """
    Convert the agent into a Tool instance.

    Args:
        name: Optional override for the tool name.
        description: Optional override for the tool description.

    Returns:
        Tool instance representing the agent.
    """
    return Tool.from_agent(self, name=name or self.name, description=description or self.description)

prompt_config staticmethod #

prompt_config(input_model: type[_Input], output_model: type[_Output] | type[NotGiven] = NotGiven) -> Callable[[type[Any]], type[Agent[LLMOptions, _Input, _Output]]]

Decorator to bind both input and output types of an Agent subclass, with runtime checks.

RAISES DESCRIPTION
TypeError

if the decorated class is not a subclass of Agent, or if input_model is not a Pydantic BaseModel.

Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
@staticmethod
def prompt_config(
    input_model: type[_Input],
    output_model: type[_Output] | type[NotGiven] = NotGiven,
) -> Callable[[type[Any]], type["Agent[LLMOptions, _Input, _Output]"]]:
    """
    Decorator to bind both input and output types of an Agent subclass, with runtime checks.

    Raises:
        TypeError: if the decorated class is not a subclass of Agent,
                   or if input_model is not a Pydantic BaseModel.
    """
    if not isinstance(input_model, type) or not issubclass(input_model, BaseModel):
        raise TypeError(f"input_model must be a subclass of pydantic.BaseModel, got {input_model}")

    if not isinstance(output_model, type):
        raise TypeError(f"output_model must be a type, got {output_model}")

    def decorator(cls: type[Any]) -> type["Agent[LLMOptions, _Input, _Output]"]:
        if not isinstance(cls, type) or not issubclass(cls, Agent):
            raise TypeError(f"Can only decorate subclasses of Agent, got {cls}")

        cls.input_type = input_model
        cls.prompt_cls = Agent._make_prompt_class_for_agent_subclass(cls)

        return cast(type["Agent[LLMOptions, _Input, _Output]"], cls)

    return decorator

ragbits.agents.AgentResult dataclass #

AgentResult(content: PromptOutputT, metadata: dict, history: ChatFormat, tool_calls: list[ToolCallResult] | None = None, usage: Usage = Field(default_factory=Usage), reasoning_traces: list[str] | None = None)

Bases: Generic[PromptOutputT]

Result of the agent run.

content instance-attribute #

content: PromptOutputT

The output content of the agent.

metadata instance-attribute #

metadata: dict

The additional data returned by the agent.

history instance-attribute #

history: ChatFormat

The conversation history of the agent run.

tool_calls class-attribute instance-attribute #

tool_calls: list[ToolCallResult] | None = None

Tool calls run by the agent.

usage class-attribute instance-attribute #

usage: Usage = Field(default_factory=Usage)

The token usage of the agent run.

reasoning_traces class-attribute instance-attribute #

reasoning_traces: list[str] | None = None

Reasoning traces from the agent run (only if log_reasoning is enabled).

ragbits.agents.AgentResultStreaming #

AgentResultStreaming(generator: AsyncGenerator[str | ToolCall | ToolCallResult | ToolEvent | DownstreamAgentResult | SimpleNamespace | BasePrompt | Usage | ConfirmationRequest, None])

Bases: AsyncIterator[str | ToolCall | ToolCallResult | ToolEvent | BasePrompt | Usage | SimpleNamespace | DownstreamAgentResult | ConfirmationRequest]

An async iterator that collects all items yielded by LLM.generate_streaming(). This object is returned by run_streaming. It can be used in an async for loop to process items as they arrive. After the loop completes, the collected results are available under the same attribute names as in the AgentResult class.

Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
def __init__(
    self,
    generator: AsyncGenerator[
        str
        | ToolCall
        | ToolCallResult
        | ToolEvent
        | DownstreamAgentResult
        | SimpleNamespace
        | BasePrompt
        | Usage
        | ConfirmationRequest,
        None,
    ],
):
    self._generator = generator
    self.content: str = ""
    self.tool_calls: list[ToolCallResult] | None = None
    self.tool_events: list[Any] | None = None
    self.downstream: dict[str | None, list[str | ToolCall | ToolCallResult]] = {}
    self.metadata: dict = {}
    self.history: ChatFormat
    self.usage: Usage = Usage()

content instance-attribute #

content: str = ''

tool_calls instance-attribute #

tool_calls: list[ToolCallResult] | None = None

tool_events instance-attribute #

tool_events: list[Any] | None = None

downstream instance-attribute #

downstream: dict[str | None, list[str | ToolCall | ToolCallResult]] = {}

metadata instance-attribute #

metadata: dict = {}

history instance-attribute #

history: ChatFormat

usage instance-attribute #

usage: Usage = Usage()
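The collect-while-iterating behaviour can be sketched without the library. `StreamCollector` below is a hypothetical stand-in that accumulates text chunks the way `AgentResultStreaming` accumulates `content`:

```python
import asyncio
from collections.abc import AsyncGenerator


class StreamCollector:
    """Hypothetical stand-in: wraps a generator and collects text chunks."""

    def __init__(self, generator: AsyncGenerator):
        self._generator = generator
        self.content: str = ""

    def __aiter__(self):
        return self

    async def __anext__(self):
        item = await self._generator.__anext__()
        if isinstance(item, str):
            self.content += item  # accumulate text as it streams
        return item


async def main() -> str:
    async def chunks():
        for piece in ("Hel", "lo", "!"):
            yield piece

    stream = StreamCollector(chunks())
    async for _ in stream:  # process items as they arrive
        pass
    return stream.content  # fully accumulated after the loop


print(asyncio.run(main()))  # Hello!
```

The real class also buckets tool calls, events, usage, and downstream results as they stream past; the pattern is the same.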

ragbits.agents.AgentRunContext #

Bases: BaseModel, Generic[DepsT]

Context for the agent run.

model_config class-attribute instance-attribute #

model_config = {'arbitrary_types_allowed': True}

deps class-attribute instance-attribute #

deps: AgentDependencies[DepsT] = Field(default_factory=lambda: AgentDependencies())

Container for external dependencies.

usage class-attribute instance-attribute #

usage: Usage = Field(default_factory=Usage)

The usage of the agent.

stream_downstream_events class-attribute instance-attribute #

stream_downstream_events: bool = False

Whether to stream events from downstream agents when tools execute other agents.

downstream_agents class-attribute instance-attribute #

downstream_agents: dict[str, Agent] = Field(default_factory=dict)

Registry of all agents that participated in this run.

tool_confirmations class-attribute instance-attribute #

tool_confirmations: list[dict[str, Any]] = Field(default_factory=list, description="List of confirmed/declined tool executions. Each entry has 'confirmation_id' and 'confirmed' (bool)")

register_agent #

register_agent(agent: Agent) -> None

Register a downstream agent in this context.

PARAMETER DESCRIPTION
agent

The agent instance to register.

TYPE: Agent

Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
def register_agent(self, agent: "Agent") -> None:
    """
    Register a downstream agent in this context.

    Args:
        agent: The agent instance to register.
    """
    self.downstream_agents[agent.id] = agent

get_agent #

get_agent(agent_id: str) -> Agent | None

Retrieve a registered downstream agent by its ID.

PARAMETER DESCRIPTION
agent_id

The unique identifier of the agent.

TYPE: str

RETURNS DESCRIPTION
Agent | None

The Agent instance if found, otherwise None.

Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
def get_agent(self, agent_id: str) -> "Agent | None":
    """
    Retrieve a registered downstream agent by its ID.

    Args:
        agent_id: The unique identifier of the agent.

    Returns:
        The Agent instance if found, otherwise None.
    """
    return self.downstream_agents.get(agent_id)
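The registry behaviour of register_agent/get_agent amounts to a keyed lookup; a minimal sketch with hypothetical `MiniContext` and `MiniAgent` stand-ins:

```python
class MiniContext:
    """Hypothetical stand-in for AgentRunContext's agent registry."""

    def __init__(self):
        self.downstream_agents: dict[str, object] = {}

    def register_agent(self, agent) -> None:
        # Keyed by the agent's unique identifier, as in AgentRunContext
        self.downstream_agents[agent.agent_id] = agent

    def get_agent(self, agent_id: str):
        # Returns None for unknown ids rather than raising
        return self.downstream_agents.get(agent_id)


class MiniAgent:
    def __init__(self, agent_id: str):
        self.agent_id = agent_id


ctx = MiniContext()
ctx.register_agent(MiniAgent("summarizer"))
print(ctx.get_agent("summarizer").agent_id)  # summarizer
print(ctx.get_agent("missing"))  # None
```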

ragbits.agents.a2a.server.create_agent_server #

create_agent_server(agent: Agent, agent_card: AgentCard, input_model: type[BaseModel]) -> Server

Create a Uvicorn server instance that serves the specified agent over HTTP.

The server's host and port are extracted from the URL in the given agent_card.

PARAMETER DESCRIPTION
agent

The Ragbits Agent instance to serve.

TYPE: Agent

agent_card

Metadata for the agent, including its URL.

TYPE: AgentCard

input_model

A Pydantic model class used to validate incoming request data.

TYPE: type[BaseModel]

RETURNS DESCRIPTION
Server

A configured uvicorn.Server instance ready to be started.

RAISES DESCRIPTION
ValueError

If the URL in agent_card does not contain a valid hostname or port.

Source code in packages/ragbits-agents/src/ragbits/agents/a2a/server.py
def create_agent_server(
    agent: Agent,
    agent_card: "AgentCard",
    input_model: type[BaseModel],
) -> "uvicorn.Server":
    """
    Create a Uvicorn server instance that serves the specified agent over HTTP.

    The server's host and port are extracted from the URL in the given agent_card.

    Args:
        agent: The Ragbits Agent instance to serve.
        agent_card: Metadata for the agent, including its URL.
        input_model: A Pydantic model class used to validate incoming request data.

    Returns:
        A configured uvicorn.Server instance ready to be started.

    Raises:
        ValueError: If the URL in agent_card does not contain a valid hostname or port.
    """
    app = create_agent_app(agent=agent, agent_card=agent_card, input_model=input_model)
    url = urlparse(agent_card.url)

    if not url.hostname:
        raise ValueError(f"Could not parse hostname from URL: {agent_card.url}")
    if not url.port:
        raise ValueError(f"Could not parse port from URL: {agent_card.url}")

    config = uvicorn.Config(app=app, host=url.hostname, port=url.port)
    server = uvicorn.Server(config=config)

    return server
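The host/port extraction performed by create_agent_server can be reproduced with the standard library alone; `host_port_from_url` is a hypothetical helper mirroring its validation:

```python
from urllib.parse import urlparse


def host_port_from_url(url: str) -> tuple[str, int]:
    # Mirrors create_agent_server: both a hostname and an explicit
    # port must be present in the agent card URL.
    parsed = urlparse(url)
    if not parsed.hostname:
        raise ValueError(f"Could not parse hostname from URL: {url}")
    if not parsed.port:
        raise ValueError(f"Could not parse port from URL: {url}")
    return parsed.hostname, parsed.port


print(host_port_from_url("http://localhost:8000"))  # ('localhost', 8000)
```

Note that a URL without an explicit port (e.g. `http://localhost`) is rejected rather than defaulting to 80, so agent card URLs must always spell the port out.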

ragbits.agents.hooks.Hook #

Hook(event_type: EventType, callback: CallbackT, tool_names: list[str] | None = None, priority: int = 100)

Bases: Generic[CallbackT]

A hook that intercepts execution at various lifecycle points.

Hooks allow you to:

- Validate inputs before execution (pre hooks)
- Control access (pre hooks)
- Modify inputs (pre hooks)
- Deny execution (pre hooks)
- Modify outputs (post hooks)
- Handle errors (post hooks)

ATTRIBUTE DESCRIPTION
event_type

The type of event (e.g., PRE_TOOL, POST_TOOL)

callback

The async function to call when the event is triggered

TYPE: CallbackT

tool_names

List of tool names this hook applies to. If None, applies to all tools.

priority

Execution priority (lower numbers execute first, default: 100)

Example
from ragbits.agents.hooks import Hook, EventType
from ragbits.core.llms.base import ToolCall


async def validate_input(tool_call: ToolCall) -> ToolCall:
    if tool_call.name == "dangerous_tool":
        return tool_call.model_copy(update={"decision": "deny", "reason": "Not allowed"})
    return tool_call


hook = Hook(event_type=EventType.PRE_TOOL, callback=validate_input, tool_names=["dangerous_tool"], priority=10)

Initialize a hook.

PARAMETER DESCRIPTION
event_type

The type of event (e.g., PRE_TOOL, POST_TOOL)

TYPE: EventType

callback

The async function to call when the event is triggered

TYPE: CallbackT

tool_names

List of tool names this hook applies to. If None, applies to all tools.

TYPE: list[str] | None DEFAULT: None

priority

Execution priority (lower numbers execute first, default: 100)

TYPE: int DEFAULT: 100

Source code in packages/ragbits-agents/src/ragbits/agents/hooks/base.py
def __init__(
    self,
    event_type: EventType,
    callback: CallbackT,
    tool_names: list[str] | None = None,
    priority: int = 100,
) -> None:
    """
    Initialize a hook.

    Args:
        event_type: The type of event (e.g., PRE_TOOL, POST_TOOL)
        callback: The async function to call when the event is triggered
        tool_names: List of tool names this hook applies to. If None, applies to all tools.
        priority: Execution priority (lower numbers execute first, default: 100)
    """
    self.event_type = event_type
    self.callback: CallbackT = callback
    self.tool_names = tool_names
    self.priority = priority

event_type instance-attribute #

event_type = event_type

callback instance-attribute #

callback: CallbackT = callback

tool_names instance-attribute #

tool_names = tool_names

priority instance-attribute #

priority = priority

matches_tool #

matches_tool(tool_name: str) -> bool

Check if this hook applies to the given tool name.

PARAMETER DESCRIPTION
tool_name

The name of the tool to check

TYPE: str

RETURNS DESCRIPTION
bool

True if this hook should be executed for the given tool

Source code in packages/ragbits-agents/src/ragbits/agents/hooks/base.py
def matches_tool(self, tool_name: str) -> bool:
    """
    Check if this hook applies to the given tool name.

    Args:
        tool_name: The name of the tool to check

    Returns:
        True if this hook should be executed for the given tool
    """
    if self.tool_names is None:
        return True
    return tool_name in self.tool_names

ragbits.agents.hooks.EventType #

Bases: str, Enum

Types of events that can be hooked.

ATTRIBUTE DESCRIPTION
PRE_TOOL

Triggered before a tool is invoked

POST_TOOL

Triggered after a tool completes

PRE_RUN

Triggered before the agent run starts

POST_RUN

Triggered after the agent run completes

ON_EVENT

Triggered for each streaming event

PRE_TOOL class-attribute instance-attribute #

PRE_TOOL = 'pre_tool'

POST_TOOL class-attribute instance-attribute #

POST_TOOL = 'post_tool'

PRE_RUN class-attribute instance-attribute #

PRE_RUN = 'pre_run'

POST_RUN class-attribute instance-attribute #

POST_RUN = 'post_run'

ON_EVENT class-attribute instance-attribute #

ON_EVENT = 'on_event'
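Because EventType subclasses both str and Enum, its members compare equal to their raw string values, which makes dispatching on plain event-name strings straightforward. A minimal sketch with a hypothetical cut-down enum:

```python
from enum import Enum


class MiniEventType(str, Enum):
    """Hypothetical cut-down version of EventType."""

    PRE_TOOL = "pre_tool"
    POST_TOOL = "post_tool"


# str-Enum members compare equal to plain strings...
print(MiniEventType.PRE_TOOL == "pre_tool")  # True

# ...so they can key dictionaries that are looked up by raw event names
handlers = {MiniEventType.PRE_TOOL: "validate", MiniEventType.POST_TOOL: "log"}
print(handlers["pre_tool"])  # validate
```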