Session Management

Use BasicSession to maintain conversation state, compact long histories, and serialize sessions for audits or later continuation.

Quick Start

from abstractcore import create_llm, BasicSession

llm = create_llm("openai", model="gpt-4o-mini")
session = BasicSession(
    provider=llm,
    system_prompt="You are a helpful assistant."
)

print(session.generate("Give me 3 bakery name ideas.").content)
print(session.generate("Pick the best one and explain why.").content)

generate() vs add_message()

Use generate() for normal chat: it appends your message, calls the model, and appends the assistant reply. Use add_message() when you need manual control over the history (e.g., injecting system or tool messages).

# Normal conversation (recommended)
resp = session.generate("What is Python?", name="alice")
print(resp.content)

# Manual history edits
session.add_message("system", "You are terse.")
session.add_message("user", "Hello!", name="alice")
session.add_message("assistant", "Hi.")
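Conceptually, generate() is roughly the manual sequence above performed for you. The sketch below illustrates that flow in plain Python; MiniSession and fake_model are illustrative stand-ins, not abstractcore APIs, and the real method also handles names, tools, and metadata.

class MiniSession:
    """Illustrative stand-in showing the append / call / append flow."""

    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def generate(self, prompt, model):
        # 1. Append the user message to history.
        self.messages.append({"role": "user", "content": prompt})
        # 2. Call the model with the full history.
        reply = model(self.messages)
        # 3. Append the assistant reply so the next turn sees it.
        self.messages.append({"role": "assistant", "content": reply})
        return reply

def fake_model(messages):
    # Echoes the last user message; a real provider would call an LLM here.
    return f"You said: {messages[-1]['content']}"

s = MiniSession("You are terse.")
print(s.generate("Hello!", fake_model))  # -> You said: Hello!
print(len(s.messages))                   # -> 3 (system, user, assistant)

With add_message() you perform steps 1 and 3 yourself, which is why it suits injected or out-of-band messages.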

Compaction (Keep Context, Reduce Tokens)

Compaction summarizes older messages while preserving the most recent turns. You can trigger it manually with force_compact, or enable auto-compaction so the session compacts itself once its history grows beyond a token threshold.

# Manual compaction (in-place)
session.force_compact(preserve_recent=6, focus="key decisions")

# Automatic compaction
session.enable_auto_compact(threshold=6000)  # tokens (estimate)
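The mechanics can be illustrated in plain Python. This is a conceptual sketch only, not abstractcore's implementation: force_compact uses the LLM itself to write the summary, whereas summarize_older below is a hypothetical stub.

def summarize_older(messages):
    # A real implementation would ask the model for a focused summary
    # (e.g., honoring a focus= hint like "key decisions").
    return f"[Summary of {len(messages)} earlier messages]"

def compact(messages, preserve_recent=6):
    """Replace everything but the last preserve_recent messages with a summary."""
    if len(messages) <= preserve_recent:
        return messages  # nothing to compact
    older, recent = messages[:-preserve_recent], messages[-preserve_recent:]
    summary = {"role": "system", "content": summarize_older(older)}
    return [summary] + recent

history = [{"role": "user", "content": f"turn {i}"} for i in range(10)]
compacted = compact(history, preserve_recent=6)
print(len(compacted))  # -> 7: one summary message plus 6 recent turns

The key property is that recent turns survive verbatim while older ones collapse into a single, much cheaper message.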

Optional Analytics

Generate a summary, an assessment, or extracted facts; each result is stored on the session and included when you save it.

summary = session.generate_summary(focus="key outcomes")
assessment = session.generate_assessment(criteria={"clarity": True, "helpfulness": True})
facts = session.extract_facts(output_format="triples")  # or "jsonld"

Serialization (Save / Load)

Sessions serialize to a versioned JSON schema (session-archive/v1). Providers and tools are not serialized because they may hold non-serializable state, so you supply them again when loading.

# Save (includes analytics only if generated)
session.save("conversation.json")

# Load (provide provider + tools again)
llm = create_llm("openai", model="gpt-4o-mini")
loaded = BasicSession.load("conversation.json", provider=llm)

print(loaded.generate("Continue the conversation.").content)
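The split between saved state and re-supplied objects follows a common pattern: persist only plain data, and re-attach live handles on load. A minimal illustration (the field names here are hypothetical, not the actual session-archive/v1 schema):

import json

# Illustrative only: these keys are hypothetical, not the real
# session-archive/v1 layout. The point is that live objects (the provider)
# never touch the JSON; only plain data does.
state = {
    "schema": "session-archive/v1",
    "system_prompt": "You are a helpful assistant.",
    "messages": [
        {"role": "user", "content": "Give me 3 bakery name ideas."},
    ],
}

# Save: everything in `state` is JSON-serializable.
blob = json.dumps(state)

# Load: restore plain data, then re-attach a live provider separately
# (the stand-in below plays the role of create_llm(...) at load time).
restored = json.loads(blob)
provider = object()
assert restored == state  # the round trip preserves the data exactly

This is why BasicSession.load takes provider (and tools) as arguments rather than reading them from the file.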

Related Documentation

Getting Started: first calls and core concepts
Token Management: budgets, limits, and estimates
API Reference: full Python API
Structured Output: typed responses with Pydantic