Working with Prompts

Prompts are the core resource in Claro. Learn how to retrieve them, access their content, and understand their metadata.

Getting a Prompt

Retrieve a prompt using its package name:
from baytos.claro import BaytClient

client = BaytClient(api_key="your_api_key")
prompt = client.get_prompt("@workspace/my-prompt:v1")

Package Names

Prompts use a semantic naming convention:
@workspace/prompt-name:version
Components:
  • @workspace - Your workspace slug (unique identifier)
  • prompt-name - The prompt’s URL-friendly name
  • version - Version identifier (v0 for drafts, v1+ for published)
Find your prompt’s package name in the Claro dashboard under prompt settings or by clicking the “Share” button.
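For illustration, here is a small helper that splits a package name into those components (the helper and the returned field names are our own, not part of the SDK):

def parse_package_name(package: str) -> dict:
    """Split "@workspace/prompt-name:version" into its parts."""
    workspace, rest = package.lstrip("@").split("/", 1)
    name, version = rest.rsplit(":", 1)
    return {"workspace": workspace, "name": name, "version": version}

parse_package_name("@acme/support-bot:v2")
# {'workspace': 'acme', 'name': 'support-bot', 'version': 'v2'}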

Version Conventions

# Get draft version (always the latest unreleased changes)
draft = client.get_prompt("@workspace/my-prompt:v0")

# Get specific published version
v1 = client.get_prompt("@workspace/my-prompt:v1")
v2 = client.get_prompt("@workspace/my-prompt:v2")

# Get latest published version
latest = client.get_prompt("@workspace/my-prompt:latest")
Version v0 is reserved for drafts. Published versions start at v1.

Prompt Content Fields

Prompts contain three types of content:

Generator Prompt

The main prompt content:
prompt = client.get_prompt("@workspace/assistant:v1")
print(prompt.generator)  # Main prompt content
Use this with your LLM provider:
from openai import OpenAI

openai_client = OpenAI()
response = openai_client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": prompt.generator}
    ]
)

System Prompt

Optional system-level instructions:
if prompt.has_system_prompt():
    print(prompt.system)

Critique Prompt

Optional prompt for evaluating outputs:
if prompt.has_critique_prompt():
    print(prompt.critique)
    # Use this to evaluate LLM responses
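For example, you can feed a generated answer back through the critique prompt. A sketch, assuming an OpenAI client and that the critique prompt is written to act as a grader; candidate_answer stands in for whatever your generator call returned:

from openai import OpenAI

openai_client = OpenAI()
candidate_answer = "..."  # output from your generator call

if prompt.has_critique_prompt():
    review = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": prompt.critique},
            {"role": "user", "content": candidate_answer}
        ]
    )
    print(review.choices[0].message.content)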

Prompt Metadata

Access metadata about the prompt:
prompt = client.get_prompt("@workspace/my-prompt:v1")

# Basic info
print(f"Title: {prompt.title}")
print(f"Description: {prompt.description}")

# Package information
print(f"Workspace: {prompt.namespace}")
print(f"Slug: {prompt.slug}")
print(f"Version: {prompt.version}")
print(f"Full package name: {prompt.package_name}")

# Version status
if prompt.is_draft:
    print("This is a draft version")
else:
    print("This is a published version")

Available Metadata

Property       Type         Description
id             str          Unique prompt identifier
title          str          Human-readable title
description    str | None   Optional description
generator      str          Main prompt content
system         str          System prompt (may be empty)
critique       str          Critique prompt (may be empty)
namespace      str          Workspace slug
slug           str          Prompt slug
version        str          Version string (e.g., “v1”)
package_name   str          Full package name
is_draft       bool         True if version is v0
category       str          Prompt category
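The fields not shown in the earlier example are read the same way:

print(f"ID: {prompt.id}")
print(f"Category: {prompt.category}")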

Dictionary-Style Access

Prompts support both attribute and dictionary access:
prompt = client.get_prompt("@workspace/test:v1")

# Attribute access (recommended)
print(prompt.title)
print(prompt.generator)

# Dictionary access
print(prompt['title'])
print(prompt.get('description', 'No description'))

# Check if field exists
if 'system' in prompt:
    print(prompt['system'])

# Get all data as dict
data = prompt.to_dict()
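Because to_dict() returns plain data, serializing a prompt for logging or snapshots is straightforward (assuming all fields are JSON-serializable):

import json

snapshot = json.dumps(prompt.to_dict(), indent=2)
print(snapshot)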

Working with Multiple Prompts

Fetching Different Versions

client = BaytClient(api_key="your_api_key")

# Compare versions
v1 = client.get_prompt("@workspace/support:v1")
v2 = client.get_prompt("@workspace/support:v2")

if v1.generator != v2.generator:
    print("Content changed between versions")
    print(f"v1 length: {len(v1.generator)}")
    print(f"v2 length: {len(v2.generator)}")

Version Migration

def migrate_to_latest(package_base: str, current_version: str = "v1"):
    """Return the latest published version if it differs from the current one."""
    current = client.get_prompt(f"{package_base}:{current_version}")
    latest = client.get_prompt(f"{package_base}:latest")

    if current.version != latest.version:
        print(f"New version available: {latest.version}")
        print(f"Title: {latest.title}")
        return latest

    return current

# Usage
prompt = migrate_to_latest("@workspace/customer-support")

Using with Different LLM Providers

OpenAI

from baytos.claro import BaytClient
from openai import OpenAI

claro = BaytClient(api_key="...")
openai = OpenAI(api_key="...")

prompt = claro.get_prompt("@workspace/assistant:v1")

response = openai.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": prompt.generator},
        {"role": "user", "content": "Hello!"}
    ]
)

Anthropic Claude

from baytos.claro import BaytClient
import anthropic

claro = BaytClient(api_key="...")
claude = anthropic.Anthropic(api_key="...")

prompt = claro.get_prompt("@workspace/assistant:v1")

response = claude.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    system=prompt.generator,
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

Google Gemini

from baytos.claro import BaytClient
import google.generativeai as genai

claro = BaytClient(api_key="...")
genai.configure(api_key="...")

prompt = claro.get_prompt("@workspace/assistant:v1")
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content(
    f"{prompt.generator}\n\nUser: Hello!"
)

Caching Prompts

For frequently accessed prompts, consider caching:
from functools import lru_cache
from baytos.claro import BaytClient

client = BaytClient(api_key="...")

@lru_cache(maxsize=100)
def get_cached_prompt(package_name: str):
    """Cache prompts to reduce API calls"""
    return client.get_prompt(package_name)

# First call - fetches from API
prompt1 = get_cached_prompt("@workspace/support:v1")

# Second call - returns cached version
prompt2 = get_cached_prompt("@workspace/support:v1")
Be careful with caching. If a prompt is updated, your cache may return stale data. Consider cache invalidation strategies.
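One simple invalidation strategy is a time-to-live (TTL) cache: entries expire after a fixed interval, so prompt updates are picked up on the next fetch. A minimal sketch (the helper and TTL value are illustrative, not part of the SDK):

import time

_cache: dict[str, tuple[float, object]] = {}
TTL_SECONDS = 300  # refetch after five minutes

def get_prompt_with_ttl(package_name: str):
    """Return a cached prompt, refetching once it is older than TTL_SECONDS."""
    cached = _cache.get(package_name)
    if cached and time.monotonic() - cached[0] < TTL_SECONDS:
        return cached[1]
    prompt = client.get_prompt(package_name)
    _cache[package_name] = (time.monotonic(), prompt)
    return prompt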

Best Practices

Always use specific versions (v1, v2) in production code:
# ✅ Good: Pinned version
prompt = client.get_prompt("@workspace/support:v1")

# ❌ Risky: Latest version
prompt = client.get_prompt("@workspace/support:latest")
Using latest in production means your application behavior changes when prompts are updated.
Use v0 (draft) for development and testing:
import os

ENV = os.getenv("ENVIRONMENT", "production")

if ENV == "development":
    # Use draft in development
    prompt = client.get_prompt("@workspace/support:v0")
else:
    # Use specific version in production
    prompt = client.get_prompt("@workspace/support:v1")
Not all prompts have system or critique prompts:
# ✅ Good: Check before using
if prompt.has_system_prompt():
    system = prompt.system
else:
    system = "You are a helpful assistant."

# ✅ Good: Use .get() with default
critique = prompt.get('critique', '')

# ❌ Bad: Assume the prompt defines one
system = prompt.system  # May be an empty string
Validate package names before fetching:
import re
from baytos.claro import BaytClient, BaytValidationError

def is_valid_package_name(name: str) -> bool:
    # Accept both @workspace/name:vN and @workspace/name:latest
    pattern = r'^@[\w-]+/[\w-]+:(v\d+|latest)$'
    return bool(re.match(pattern, name))

package = "@workspace/support:v1"

if is_valid_package_name(package):
    prompt = client.get_prompt(package)
else:
    print(f"Invalid package name: {package}")

Next Steps