# Prompt

## ragbits.core.prompt.Prompt

Bases: `Generic[InputT, OutputT]`, `BasePromptWithParser[OutputT]`

Generic class for prompts. It contains the system and user prompts, and additional messages.

To create a new prompt, subclass this class and provide the system and user prompts, and optionally the input and output types. The system prompt is optional.

Source code in `packages/ragbits-core/src/ragbits/core/prompt/prompt.py`
### few_shots (class-attribute, instance-attribute)

### rendered_system_prompt (instance-attribute)

    rendered_system_prompt = _render_template(system_prompt_template, input_data) if system_prompt_template else None

### rendered_user_prompt (instance-attribute)
### chat (property)

Returns the conversation in the standard OpenAI chat format.

**Returns:**

| Type | Description |
|---|---|
| `ChatFormat` | A list of dictionaries, each containing the role and content of a message. |
### json_mode (property)

Returns whether the prompt should be sent in JSON mode.

**Returns:**

| Type | Description |
|---|---|
| `bool` | Whether the prompt should be sent in JSON mode. |
### add_few_shot

    add_few_shot(user_message: str | InputT, assistant_message: str | OutputT) -> Prompt[InputT, OutputT]

Add a few-shot example to the conversation.

**Parameters:**

| Name | Type | Description |
|---|---|---|
| `user_message` | `str \| InputT` | The raw user message or input data that will be rendered using the user prompt template. |
| `assistant_message` | `str \| OutputT` | The raw assistant response or output data that will be cast to a string or, in the case of a Pydantic model, to JSON. |

**Returns:**

| Type | Description |
|---|---|
| `Prompt[InputT, OutputT]` | The current prompt instance, to allow chaining. |

Source code in `packages/ragbits-core/src/ragbits/core/prompt/prompt.py`
### list_few_shots

Returns the few-shot examples in the standard OpenAI chat format.

**Returns:**

| Type | Description |
|---|---|
| `ChatFormat` | A list of dictionaries, each containing the role and content of a message. |

Source code in `packages/ragbits-core/src/ragbits/core/prompt/prompt.py`
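Together, `add_few_shot` and `list_few_shots` maintain and expose the example conversation. The pattern can be sketched with a self-contained stand-in (`FewShotSketch` is hypothetical; the real methods also route input/output objects through the prompt templates):

```python
import json


class FewShotSketch:
    """Stand-in illustrating add_few_shot chaining and list_few_shots output."""

    def __init__(self):
        self._few_shots = []

    def add_few_shot(self, user_message, assistant_message):
        # Non-string assistant responses are cast to JSON, mirroring the documented behaviour.
        if not isinstance(assistant_message, str):
            assistant_message = json.dumps(assistant_message)
        self._few_shots.append((str(user_message), assistant_message))
        return self  # returning self allows chaining

    def list_few_shots(self):
        # Standard OpenAI chat format: a list of {"role", "content"} dictionaries.
        chat = []
        for user_msg, assistant_msg in self._few_shots:
            chat.append({"role": "user", "content": user_msg})
            chat.append({"role": "assistant", "content": assistant_msg})
        return chat


prompt = FewShotSketch().add_few_shot("2+2?", "4").add_few_shot("3+3?", "6")
print(prompt.list_few_shots()[0])  # {'role': 'user', 'content': '2+2?'}
```

Returning `self` is what makes the chained calls above possible.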
### list_images

Returns the schema of the list of images, compatible with LLM APIs.

**Returns:** A list of dictionaries.
### output_schema

Returns the schema of the desired output. Can be used to request structured output from the LLM API or to validate the output. Can return either a Pydantic model or a JSON schema.

**Returns:**

| Type | Description |
|---|---|
| `dict \| type[BaseModel] \| None` | The schema of the desired output, or the model describing it. |

Source code in `packages/ragbits-core/src/ragbits/core/prompt/prompt.py`
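A minimal sketch of the relationship between `output_schema` and `json_mode`, using a plain JSON schema dict as a stand-in for a Pydantic model (`StructuredPrompt` is hypothetical; the assumption that JSON mode follows from having a schema is illustrative, not the library's exact rule):

```python
class StructuredPrompt:
    """Stand-in: a prompt whose output is described by a JSON schema."""

    output_type = {
        "type": "object",
        "properties": {"answer": {"type": "string"}, "confidence": {"type": "number"}},
        "required": ["answer"],
    }

    def output_schema(self):
        # Could equally return a Pydantic model class describing the output.
        return self.output_type

    @property
    def json_mode(self):
        # Sketch: request JSON mode whenever a structured output schema is defined.
        return self.output_schema() is not None


prompt = StructuredPrompt()
print(prompt.json_mode)  # True
```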
### parse_response

Parse the response from the LLM to the desired output type.

**Parameters:**

| Name | Description |
|---|---|
| `response` | The response from the LLM. |

**Returns:**

| Type | Description |
|---|---|
| `OutputT` | The parsed response. |

**Raises:**

| Type | Description |
|---|---|
| `ResponseParsingError` | If the response cannot be parsed. |

Source code in `packages/ragbits-core/src/ragbits/core/prompt/prompt.py`
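The parse-or-raise contract can be sketched as follows (a stand-in that decodes JSON into a dict; the real method parses into the prompt's `OutputT`, and the `ResponseParsingError` defined here is a local stand-in for the library's exception):

```python
import json


class ResponseParsingError(Exception):
    """Raised when the LLM response cannot be parsed into the output type."""


def parse_response(response: str) -> dict:
    """Sketch: decode a JSON response, wrapping failures in ResponseParsingError."""
    try:
        return json.loads(response)
    except json.JSONDecodeError as exc:
        raise ResponseParsingError(f"Failed to parse response: {response!r}") from exc


print(parse_response('{"answer": "42"}'))  # {'answer': '42'}
```

Wrapping the decoder's exception keeps callers handling a single, library-level error type.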
### to_promptfoo (classmethod)

Generate a prompt in the promptfoo format from a promptfoo test configuration.

**Parameters:**

| Name | Description |
|---|---|
| `config` | The promptfoo test configuration. |

**Returns:**

| Type | Description |
|---|---|
| `ChatFormat` | The prompt in the format used by promptfoo. |