# API Reference

## preceptron.score
Score a clinical response using an LLM judge.
```python
from preceptron import score

result = score(
    task="cpc_bond",
    response="...",
    final_diagnosis="...",
    model="gpt-4o",
    client=client,
)
```
### Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `task` | `str` | Yes | One of: `management_reasoning`, `diagnostic_reasoning`, `r_idea`, `cpc_bond`, `cpc_management` |
| `response` | `str` | Yes | The clinical response to score |
| `client` | `OpenAI` or `Anthropic` | Yes | An initialized API client |
| `model` | `str` | Yes | Model identifier (e.g. `"gpt-4o"`, `"claude-sonnet-4-20250514"`) |
| `rubric` | `dict`, `list`, or `str` | Varies | Required for `management_reasoning`; optional override for other tasks |
| `case_vignette` | `str` | Varies | Required for `management_reasoning` and `diagnostic_reasoning` |
| `question_text` | `str` | Varies | Required for `management_reasoning`, `diagnostic_reasoning`, and `r_idea` |
| `final_diagnosis` | `str` | Varies | Required for `diagnostic_reasoning` and `cpc_bond` |
| `test_plan` | `str` | Varies | Required for `cpc_management` |
### Returns

```python
{
    "score": int | None,          # numeric score
    "justification": str | None,  # LLM's explanation
    "raw": str,                   # full LLM response
}
```
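A minimal sketch of handling the returned dict; the example values below are hypothetical, and a `None` score is taken to mean the judge's output could not be parsed into a number:

```python
# Hypothetical result in the shape documented above (not real judge output).
result = {
    "score": 4,
    "justification": "Identifies the unifying diagnosis with key supporting findings.",
    "raw": "Score: 4\nJustification: Identifies the unifying diagnosis...",
}

if result["score"] is None:
    # Parsing failed: fall back to inspecting the raw judge response.
    print(result["raw"])
else:
    print(f"Score: {result['score']} ({result['justification']})")
```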
### Preset Rubrics

Tasks with a preset rubric (`cpc_bond`, `cpc_management`, `r_idea`) use it automatically. Pass `rubric=` to override.
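As an illustration, an override might be passed as a list of criterion strings; the exact schema `preceptron` accepts for `rubric` is an assumption here, not documented above:

```python
# Hypothetical rubric override; the accepted structure may differ
# (the table above also allows dict or str).
custom_rubric = [
    "Names the most likely diagnosis",
    "Cites the discriminating clinical findings",
    "Proposes an appropriate confirmatory test",
]

# Passed through on the call (client and inputs elided):
# result = score(task="cpc_bond", response="...", final_diagnosis="...",
#                model="gpt-4o", client=client, rubric=custom_rubric)
```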
### Client Compatibility
Any OpenAI-compatible client works:
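For example, the standard OpenAI client can be pointed at a self-hosted OpenAI-compatible endpoint; the `base_url`, API key, and model name below are placeholders, not values from this library:

```python
from openai import OpenAI
from preceptron import score

# Any OpenAI-compatible server works, e.g. a local vLLM or Ollama
# endpoint (URL, key, and model name here are placeholders).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

result = score(
    task="cpc_bond",
    response="...",
    final_diagnosis="...",
    model="my-local-model",
    client=client,
)
```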