Agents#
ragbits.agents.AgentOptions
#
Bases: Options, Generic[LLMClientOptionsT]
Options for the agent run.
model_config
class-attribute
instance-attribute
#
llm_options
class-attribute
instance-attribute
#
The options for the LLM.
max_turns
class-attribute
instance-attribute
#
The maximum number of turns the agent can take. If NOT_GIVEN, it defaults to 10; if None, the agent can run indefinitely.
max_total_tokens
class-attribute
instance-attribute
#
The maximum total number of tokens the agent can use. If NOT_GIVEN or None, no limit is applied.
max_prompt_tokens
class-attribute
instance-attribute
#
The maximum number of prompt tokens the agent can use. If NOT_GIVEN or None, no limit is applied.
max_completion_tokens
class-attribute
instance-attribute
#
The maximum number of completion tokens the agent can use. If NOT_GIVEN or None, no limit is applied.
log_reasoning
class-attribute
instance-attribute
#
Whether to log/persist reasoning traces for debugging and evaluation.
parallel_tool_calling
class-attribute
instance-attribute
#
Whether to run tools concurrently when the LLM requests multiple tool calls. Synchronous tools are run in a separate thread using asyncio.to_thread.
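The concurrency mechanism described above can be sketched with plain asyncio (the `slow_lookup` function below is a hypothetical placeholder for a synchronous tool, not part of ragbits):

```python
import asyncio
import time

# Hypothetical synchronous tool; in ragbits this would be one of the
# callables passed to the Agent, not this exact function.
def slow_lookup(x: int) -> int:
    time.sleep(0.05)  # simulate blocking I/O
    return x * 2

async def run_tools_concurrently() -> list[int]:
    # Each sync tool runs in its own thread, so the calls overlap
    # instead of blocking the event loop one after another.
    return list(await asyncio.gather(
        asyncio.to_thread(slow_lookup, 1),
        asyncio.to_thread(slow_lookup, 2),
        asyncio.to_thread(slow_lookup, 3),
    ))

results = asyncio.run(run_tools_concurrently())
```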
dict
#
dict() -> dict[str, Any]
Creates a dictionary representation of the Options instance. If a value is None, it will be replaced with a provider-specific not-given sentinel.
| RETURNS | DESCRIPTION |
|---|---|
| dict[str, Any] | A dictionary representation of the Options instance. |
Source code in packages/ragbits-core/src/ragbits/core/options.py
ragbits.agents.Agent
#
Agent(llm: LLM[LLMClientOptionsT], name: str | None = None, description: str | None = None, prompt: str | type[Prompt[PromptInputT, PromptOutputT]] | Prompt[PromptInputT, PromptOutputT] | None = None, *, history: ChatFormat | None = None, keep_history: bool = False, tools: list[Callable | Tool | Agent] | None = None, mcp_servers: list[MCPServer] | None = None, hooks: list[Hook] | None = None, default_options: AgentOptions[LLMClientOptionsT] | None = None)
Bases: ConfigurableComponent[AgentOptions[LLMClientOptionsT]], Generic[LLMClientOptionsT, PromptInputT, PromptOutputT]
Agent class that orchestrates the LLM and the prompt, and can call tools.
The current implementation is highly experimental, and the API is subject to change.
Initialize the agent instance.
| PARAMETER | DESCRIPTION |
|---|---|
| llm | The LLM to run the agent. |
| name | Optional name of the agent. Used to identify the agent instance. |
| description | Optional description of the agent. |
| prompt | The prompt for the agent. Can be: str (used as the system message when combined with string input, or as the user message when no input is provided during run()); type[Prompt] (a structured prompt class that will be instantiated with the input); Prompt (an already instantiated prompt instance); or None (no predefined prompt; the input provided to run() is used as the complete prompt). |
| history | The history of the agent. |
| keep_history | Whether to keep the history of the agent. |
| tools | The tools available to the agent. Each entry can be: a Callable (a function with typed parameters and a docstring that will be sent to the LLM; the callable's output is sent to the LLM as the tool result, and to specify additional values to return that are not passed to the LLM, use ToolReturn; if the callable returns a generator or async generator, the yielded values are also yielded from the streaming agent, except for a ToolReturn, which is used to send the result to the LLM and is expected to be yielded only once); an Agent (another Agent instance, with a name and description); or a Tool (a raw Tool instance). |
| mcp_servers | The MCP servers available to the agent. |
| hooks | List of tool hooks to register for tool lifecycle events. |
| default_options | The default options for the agent run. |
Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
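As described above, a callable tool is just a function with typed parameters and a docstring. A hypothetical example of such a function (`get_weather` is illustrative, not part of ragbits), along with how its schema-relevant metadata can be read using the standard inspect module:

```python
import inspect

def get_weather(city: str, unit: str = "celsius") -> str:
    """Return the current weather for a city."""
    # A real tool would call a weather API; this is a stub.
    return f"Sunny in {city} (20 degrees {unit})"

# The typed signature and the docstring are what an agent framework
# can turn into a tool schema for the LLM.
sig = inspect.signature(get_weather)
param_names = list(sig.parameters)
doc = inspect.getdoc(get_weather)
```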
hook_manager
instance-attribute
#
subclass_from_config
classmethod
#
Initializes the class with the provided configuration. May return a subclass of the class, if requested by the configuration.
| PARAMETER | DESCRIPTION |
|---|---|
| config | A model containing configuration details for the class. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | An instance of the class initialized with the provided configuration. |

| RAISES | DESCRIPTION |
|---|---|
| InvalidConfigError | The class can't be found or is not a subclass of the current class. |
Source code in packages/ragbits-core/src/ragbits/core/utils/config_handling.py
subclass_from_factory
classmethod
#
Creates the class using the provided factory function. May return a subclass of the class, if requested by the factory. Supports both synchronous and asynchronous factory functions.
| PARAMETER | DESCRIPTION |
|---|---|
| factory_path | A string representing the path to the factory function in the format of "module.submodule:factory_name". |

| RETURNS | DESCRIPTION |
|---|---|
| Self | An instance of the class initialized with the provided factory function. |

| RAISES | DESCRIPTION |
|---|---|
| InvalidConfigError | The factory can't be found or the object returned is not a subclass of the current class. |
Source code in packages/ragbits-core/src/ragbits/core/utils/config_handling.py
preferred_subclass
classmethod
#
preferred_subclass(config: CoreConfig, factory_path_override: str | None = None, yaml_path_override: Path | None = None) -> Self
Tries to create an instance by looking at the project's component preferences, either from YAML or from the factory. Takes optional overrides for both; the overrides take higher precedence.
| PARAMETER | DESCRIPTION |
|---|---|
| config | The CoreConfig instance containing preferred factory and configuration details. |
| factory_path_override | A string representing the path to the factory function in the format of "module.submodule:factory_name". |
| yaml_path_override | A path to the YAML file containing the Ragbits instance configuration. |

| RAISES | DESCRIPTION |
|---|---|
| InvalidConfigError | If the default factory or configuration can't be found. |
Source code in packages/ragbits-core/src/ragbits/core/utils/config_handling.py
from_config
classmethod
#
Initializes the class with the provided configuration.
| PARAMETER | DESCRIPTION |
|---|---|
| config | A dictionary containing configuration details for the class. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | An instance of the class initialized with the provided configuration. |
Source code in packages/ragbits-core/src/ragbits/core/utils/config_handling.py
run
async
#
run(input: str | PromptInputT | None = None, options: AgentOptions[LLMClientOptionsT] | None = None, context: AgentRunContext | None = None, tool_choice: ToolChoice | None = None) -> AgentResult[PromptOutputT]
Run the agent. This method is experimental; inputs and outputs may change in the future.
| PARAMETER | DESCRIPTION |
|---|---|
| input | The input for the agent run. Can be: str (used as the user message); PromptInputT (structured input for use with structured prompt classes); or None (no input; only valid when a string prompt was provided during initialization). |
| options | The options for the agent run. |
| context | The context for the agent run. |
| tool_choice | Controls which tool is used on the first call. Can be: "auto" (let the model decide whether a tool call is needed); "none" (do not call a tool); "required" (enforce tool usage; the model decides which one); or a Callable (one of the provided tools). |

| RETURNS | DESCRIPTION |
|---|---|
| AgentResult[PromptOutputT] | The result of the agent run. |

| RAISES | DESCRIPTION |
|---|---|
| AgentToolDuplicateError | If the tool names are duplicated. |
| AgentToolNotSupportedError | If the selected tool type is not supported. |
| AgentToolNotAvailableError | If the selected tool is not available. |
| AgentInvalidPromptInputError | If the prompt/input combination is invalid. |
| AgentMaxTurnsExceededError | If the maximum number of turns is exceeded. |
Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
run_streaming
#
run_streaming(input: str | PromptInputT | None = None, options: AgentOptions[LLMClientOptionsT] | None = None, context: AgentRunContext | None = None, tool_choice: ToolChoice | None = None) -> AgentResultStreaming
This method returns an AgentResultStreaming object that can be iterated over asynchronously. After the loop completes, all items are available under the same names as in the AgentResult class.
| PARAMETER | DESCRIPTION |
|---|---|
| input | The input for the agent run. |
| options | The options for the agent run. |
| context | The context for the agent run. |
| tool_choice | Controls which tool is used on the first call. Can be: "auto" (let the model decide whether a tool call is needed); "none" (do not call a tool); "required" (enforce tool usage; the model decides which one); or a Callable (one of the provided tools). |

| RETURNS | DESCRIPTION |
|---|---|
| AgentResultStreaming | An AgentResultStreaming instance that can be iterated over asynchronously. |

| RAISES | DESCRIPTION |
|---|---|
| AgentToolDuplicateError | If the tool names are duplicated. |
| AgentToolNotSupportedError | If the selected tool type is not supported. |
| AgentToolNotAvailableError | If the selected tool is not available. |
| AgentInvalidPromptInputError | If the prompt/input combination is invalid. |
| AgentMaxTurnsExceededError | If the maximum number of turns is exceeded. |
Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
get_agent_card
async
#
get_agent_card(name: str, description: str, version: str = '0.0.0', host: str = '127.0.0.1', port: int = 8000, protocol: str = 'http', default_input_modes: list[str] | None = None, default_output_modes: list[str] | None = None, capabilities: AgentCapabilities | None = None, skills: list[AgentSkill] | None = None) -> AgentCard
Create an AgentCard that encapsulates metadata about the agent, such as its name, version, description, network location, supported input/output modes, capabilities, and skills.
| PARAMETER | DESCRIPTION |
|---|---|
| name | Human-readable name of the agent. |
| description | A brief description of the agent. |
| version | Version string of the agent. Defaults to "0.0.0". |
| host | Hostname or IP where the agent will be served. Defaults to "127.0.0.1". |
| port | Port number on which the agent listens. Defaults to 8000. |
| protocol | URL scheme (e.g. "http" or "https"). Defaults to "http". |
| default_input_modes | List of input content modes supported by the agent. Defaults to ["text"]. |
| default_output_modes | List of output content modes supported. Defaults to ["text"]. |
| capabilities | Agent capabilities; if None, defaults to empty capabilities. |
| skills | List of AgentSkill objects representing the agent's skills. If None, attempts to extract skills from the agent's registered tools. |

| RETURNS | DESCRIPTION |
|---|---|
| AgentCard | An A2A-compliant agent descriptor including URL and capabilities. |
Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
to_pydantic_ai
#
Convert ragbits agent instance into a pydantic_ai.Agent representation.
| RETURNS | DESCRIPTION |
|---|---|
| PydanticAIAgent | The equivalent Pydantic-based agent configuration. |
| RAISES | DESCRIPTION |
|---|---|
| ValueError | If the |
Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
from_pydantic_ai
classmethod
#
Construct an agent instance from a pydantic_ai.Agent representation.
| PARAMETER | DESCRIPTION |
|---|---|
| pydantic_ai_agent | A Pydantic-based agent configuration. |

| RETURNS | DESCRIPTION |
|---|---|
| Self | An instance of the agent class initialized from the Pydantic representation. |
Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
to_tool
#
Convert the agent into a Tool instance.
| PARAMETER | DESCRIPTION |
|---|---|
| name | Optional override for the tool name. |
| description | Optional override for the tool description. |

| RETURNS | DESCRIPTION |
|---|---|
| Tool | Tool instance representing the agent. |
Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
prompt_config
staticmethod
#
prompt_config(input_model: type[_Input], output_model: type[_Output] | type[NotGiven] = NotGiven) -> Callable[[type[Any]], type[Agent[LLMOptions, _Input, _Output]]]
Decorator to bind both input and output types of an Agent subclass, with runtime checks.
| RAISES | DESCRIPTION |
|---|---|
| TypeError | If the decorated class is not a subclass of Agent, or if input_model is not a Pydantic BaseModel. |
Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
ragbits.agents.AgentResult
dataclass
#
AgentResult(content: PromptOutputT, metadata: dict, history: ChatFormat, tool_calls: list[ToolCallResult] | None = None, usage: Usage = Field(default_factory=Usage), reasoning_traces: list[str] | None = None)
Bases: Generic[PromptOutputT]
Result of the agent run.
ragbits.agents.AgentResultStreaming
#
AgentResultStreaming(generator: AsyncGenerator[str | ToolCall | ToolCallResult | ToolEvent | DownstreamAgentResult | SimpleNamespace | BasePrompt | Usage | ConfirmationRequest, None])
Bases: AsyncIterator[str | ToolCall | ToolCallResult | ToolEvent | BasePrompt | Usage | SimpleNamespace | DownstreamAgentResult | ConfirmationRequest]
An async iterator that collects all items yielded by LLM.generate_streaming(). This object is returned
by run_streaming. It can be used in an async for loop to process items as they arrive. After the loop completes,
all items are available under the same names as in the AgentResult class.
Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
downstream
instance-attribute
#
ragbits.agents.AgentRunContext
#
Bases: BaseModel, Generic[DepsT]
Context for the agent run.
deps
class-attribute
instance-attribute
#
Container for external dependencies.
usage
class-attribute
instance-attribute
#
The usage of the agent.
stream_downstream_events
class-attribute
instance-attribute
#
Whether to stream events from downstream agents when tools execute other agents.
downstream_agents
class-attribute
instance-attribute
#
downstream_agents: dict[str, Agent] = Field(default_factory=dict)
Registry of all agents that participated in this run.
tool_confirmations
class-attribute
instance-attribute
#
tool_confirmations: list[dict[str, Any]] = Field(default_factory=list, description="List of confirmed/declined tool executions. Each entry has 'confirmation_id' and 'confirmed' (bool)")
register_agent
#
register_agent(agent: Agent) -> None
Register a downstream agent in this context.
| PARAMETER | DESCRIPTION |
|---|---|
| agent | The agent instance to register. |
get_agent
#
get_agent(agent_id: str) -> Agent | None
Retrieve a registered downstream agent by its ID.
| PARAMETER | DESCRIPTION |
|---|---|
| agent_id | The unique identifier of the agent. |

| RETURNS | DESCRIPTION |
|---|---|
| Agent \| None | The Agent instance if found, otherwise None. |
Source code in packages/ragbits-agents/src/ragbits/agents/_main.py
ragbits.agents.a2a.server.create_agent_server
#
create_agent_server(agent: Agent, agent_card: AgentCard, input_model: type[BaseModel]) -> Server
Create a Uvicorn server instance that serves the specified agent over HTTP.
The server's host and port are extracted from the URL in the given agent_card.
| PARAMETER | DESCRIPTION |
|---|---|
| agent | The Ragbits Agent instance to serve. |
| agent_card | Metadata for the agent, including its URL. |
| input_model | A Pydantic model class used to validate incoming request data. |

| RETURNS | DESCRIPTION |
|---|---|
| Server | A configured uvicorn.Server instance ready to be started. |

| RAISES | DESCRIPTION |
|---|---|
| ValueError | If the URL in agent_card does not contain a valid hostname or port. |
Source code in packages/ragbits-agents/src/ragbits/agents/a2a/server.py
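The host/port extraction described above ("The server's host and port are extracted from the URL in the given agent_card") can be sketched with the standard library. The `extract_host_port` helper is illustrative, not the ragbits implementation:

```python
from urllib.parse import urlparse

def extract_host_port(url: str) -> tuple[str, int]:
    """Pull hostname and port out of an agent-card-style URL,
    raising ValueError when either is missing (illustrative helper)."""
    parsed = urlparse(url)
    if not parsed.hostname or not parsed.port:
        raise ValueError(f"URL {url!r} must contain a hostname and a port")
    return parsed.hostname, parsed.port

host, port = extract_host_port("http://127.0.0.1:8000")
```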
ragbits.agents.hooks.Hook
#
Hook(event_type: EventType, callback: CallbackT, tool_names: list[str] | None = None, priority: int = 100)
Bases: Generic[CallbackT]
A hook that intercepts execution at various lifecycle points.
Hooks allow you to:
- Validate inputs before execution (pre hooks)
- Control access (pre hooks)
- Modify inputs (pre hooks)
- Deny execution (pre hooks)
- Modify outputs (post hooks)
- Handle errors (post hooks)
| ATTRIBUTE | DESCRIPTION |
|---|---|
| event_type | The type of event (e.g., PRE_TOOL, POST_TOOL). |
| callback | The async function to call when the event is triggered. |
| tool_names | List of tool names this hook applies to. If None, applies to all tools. |
| priority | Execution priority (lower numbers execute first, default: 100). |
Example

```python
from ragbits.agents.hooks import Hook, EventType
from ragbits.core.llms.base import ToolCall

async def validate_input(tool_call: ToolCall) -> ToolCall:
    if tool_call.name == "dangerous_tool":
        return tool_call.model_copy(update={"decision": "deny", "reason": "Not allowed"})
    return tool_call

hook = Hook(
    event_type=EventType.PRE_TOOL,
    callback=validate_input,
    tool_names=["dangerous_tool"],
    priority=10,
)
```
Initialize a hook.
| PARAMETER | DESCRIPTION |
|---|---|
| event_type | The type of event (e.g., PRE_TOOL, POST_TOOL). |
| callback | The async function to call when the event is triggered. |
| tool_names | List of tool names this hook applies to. If None, applies to all tools. |
| priority | Execution priority (lower numbers execute first, default: 100). |
Source code in packages/ragbits-agents/src/ragbits/agents/hooks/base.py
matches_tool
#
Check if this hook applies to the given tool name.
| PARAMETER | DESCRIPTION |
|---|---|
| tool_name | The name of the tool to check. |

| RETURNS | DESCRIPTION |
|---|---|
| bool | True if this hook should be executed for the given tool. |
Source code in packages/ragbits-agents/src/ragbits/agents/hooks/base.py
ragbits.agents.hooks.EventType
#
Bases: str, Enum
Types of events that can be hooked.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| PRE_TOOL | Triggered before a tool is invoked. |
| POST_TOOL | Triggered after a tool completes. |
| PRE_RUN | Triggered before the agent run starts. |
| POST_RUN | Triggered after the agent run completes. |
| ON_EVENT | Triggered for each streaming event. |