Try it now — no signup required
See how QLM optimizes assessment delivery compared to traditional sequential testing.
{
  "item_bank_id": "ib_demo_001",
  "respondent_profile": {
    "skill_levels": [0.5, -0.2, 0.8]
  },
  "n_items": 15
}
Six products. One engine.
Assessment Engine
Select the right questions for each learner. Send a list of candidate questions and a learner profile. QLM returns the best questions to administer next, ranked by value. One API call, one result — no session state to manage.
WHAT YOU SEND
{
  "domain": "education",
  "learner": {
    "learner_id": "student_4821",
    "skill_level": 0.3,
    "mastery_profile": {
      "algebra": 0.6,
      "geometry": 0.1,
      "statistics": 0.4
    }
  },
  "candidate_items": [
    {"item_id": "q_101", "difficulty": 0.45, "domain": "algebra", "content_tags": ["quadratic"]},
    {"item_id": "q_102", "difficulty": 0.72, "domain": "geometry", "content_tags": ["circles"]},
    {"item_id": "q_103", "difficulty": 0.38, "domain": "statistics", "content_tags": ["regression"]},
    {"item_id": "q_104", "difficulty": 0.61, "domain": "algebra", "content_tags": ["linear"]},
    {"item_id": "q_105", "difficulty": 0.55, "domain": "geometry", "content_tags": ["triangles"]}
  ],
  "num_select": 2,
  "objective": "accuracy"
}
WHAT YOU GET BACK
{
  "job_id": "a3f1c8e2-7b4d-4e9a-b5c6-8d2f1a3e7b9c",
  "status": "completed",
  "selected_items": [
    {"item_id": "q_102", "rank": 1},
    {"item_id": "q_103", "rank": 2}
  ],
  "timestamp": "2026-03-26T14:32:00.123Z"
}
HOW IT WORKS
You describe the learner
skill_level is an overall level (0 = average). mastery_profile breaks that down by dimension — the learner is stronger in algebra (0.6) than geometry (0.1). You can send one or both.
You describe the items
Each item has a difficulty (0–1 scale), a domain it belongs to, and optional content_tags for constraint control. The engine uses these to find the most valuable combination.
QLM picks the best set
Unlike sequential methods that pick one item at a time, QLM evaluates all candidates together and returns the num_select items that will yield the most precise measurement when combined.
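The difference is easiest to see with a toy scoring function. The sketch below is an illustration only: the `item_value` and `set_value` functions are hypothetical stand-ins, not QLM's actual model. Joint selection scores whole subsets, so it can trade off individual item value against redundancy (here, two items from the same domain):

```python
from itertools import combinations

# Toy per-item value: items near the learner's skill level are treated as
# most informative. This is an illustration, not QLM's scoring model.
def item_value(item, skill):
    return 1.0 - abs(item["difficulty"] - skill)

# Toy set value: sum of item values, minus a penalty for domain overlap,
# so the *combination* matters, not just each item in isolation.
def set_value(items, skill):
    base = sum(item_value(i, skill) for i in items)
    domains = [i["domain"] for i in items]
    overlap = len(domains) - len(set(domains))
    return base - 0.5 * overlap

def select_jointly(candidates, skill, k):
    """Evaluate every size-k subset together and keep the best one."""
    return max(combinations(candidates, k), key=lambda s: set_value(s, skill))

items = [
    {"item_id": "q_101", "difficulty": 0.45, "domain": "algebra"},
    {"item_id": "q_104", "difficulty": 0.61, "domain": "algebra"},
    {"item_id": "q_105", "difficulty": 0.55, "domain": "geometry"},
]
best = select_jointly(items, skill=0.5, k=2)
print([i["item_id"] for i in best])  # ['q_101', 'q_105']
```

A greedy sequential picker would take the two highest-scoring items regardless of overlap; here the joint evaluation skips the second algebra item in favor of a geometry one.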
You get a ranked list
The response is a ranked list of item IDs. rank: 1 is the highest-value item. Use the job_id to retrieve the result later via GET /v1/optimize/{job_id}.
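Consuming the response is a one-liner. A minimal sketch that parses the sample response shown above and pulls the item IDs out in rank order:

```python
import json

response = json.loads("""
{
  "job_id": "a3f1c8e2-7b4d-4e9a-b5c6-8d2f1a3e7b9c",
  "status": "completed",
  "selected_items": [
    {"item_id": "q_102", "rank": 1},
    {"item_id": "q_103", "rank": 2}
  ],
  "timestamp": "2026-03-26T14:32:00.123Z"
}
""")

# Sort defensively by rank rather than trusting array order.
ordered = [s["item_id"]
           for s in sorted(response["selected_items"], key=lambda s: s["rank"])]
print(ordered)  # ['q_102', 'q_103']
```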
ALSO AVAILABLE
- Batch mode — POST /v1/optimize/batch accepts up to 100 requests in one call. Useful for pre-computing item sequences or running simulations.
- Objective modes — "accuracy" maximizes measurement precision, "speed" prioritizes faster completion, and "balanced" blends both.
- Job retrieval — GET /v1/optimize/{job_id} retrieves a previously submitted result by ID.
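Because every result is addressable by job_id, a client can submit work and poll for completion. A minimal sketch with the HTTP call injected as a plain callable; the retry cadence and the existence of a non-terminal status before "completed" are assumptions, not documented behavior:

```python
import time

def wait_for_result(job_id, fetch, poll_seconds=1.0, max_attempts=30):
    """Poll GET /v1/optimize/{job_id} (via `fetch`) until the job completes.

    `fetch(job_id)` should return the decoded JSON body as a dict.
    """
    for _ in range(max_attempts):
        job = fetch(job_id)
        if job["status"] == "completed":
            return job
        time.sleep(poll_seconds)
    raise TimeoutError(f"job {job_id} did not complete in time")
```

In production, `fetch` would wrap an authenticated HTTP GET against the job-retrieval endpoint; injecting it keeps the retry logic testable.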
Diagnostic Service
Converge on a confident diagnosis in as few questions as possible. A stateful, multi-step personalized session: you create a session, submit responses one at a time, and the engine selects the next best question after each response. The session ends automatically when measurement confidence is reached — or you can end it early.
STEP 1 — CREATE A SESSION
{
  "item_bank_id": "ib_clinical_ed_workup",
  "external_user_id": "patient_7732",
  "config": {
    "select_k": 1,
    "confidence_threshold": 0.90
  }
}

{
  "session_id": "sess_8f2a1b3c-4d5e-6f7a-8b9c-0d1e2f3a4b5c",
  "first_items": [
    {"item_id": "troponin_hs", "external_id": "LAB-TROP-001"}
  ],
  "state": {
    "dimensions": ["cardiac", "inflammatory", "metabolic"],
    "estimates": {"cardiac": 0.0, "inflammatory": 0.0, "metabolic": 0.0}
  },
  "config": {"select_k": 1, "confidence_threshold": 0.90}
}
STEP 2 — SUBMIT A RESPONSE, GET NEXT ITEM
{
  "item_id": "troponin_hs",
  "response": 1,
  "time_ms": 8400
}

{
  "next_items": [
    {"item_id": "d_dimer", "external_id": "LAB-DDIM-001"}
  ],
  "state": {
    "dimensions": ["cardiac", "inflammatory", "metabolic"],
    "estimates": {"cardiac": 0.31, "inflammatory": 0.0, "metabolic": 0.0}
  },
  "confidence": {
    "level": 0.42,
    "reached": false,
    "phase": "exploring",
    "questions_asked": 1
  }
}
STEP 3 — REPEAT UNTIL CONVERGED
{
  "next_items": null,
  "state": {
    "dimensions": ["cardiac", "inflammatory", "metabolic"],
    "estimates": {"cardiac": 0.72, "inflammatory": -0.15, "metabolic": 0.41}
  },
  "confidence": {
    "level": 0.94,
    "reached": true,
    "phase": "confirming",
    "questions_asked": 6
  }
}
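The three steps above amount to a short loop in client code. A sketch using the SDK's create_session (shown later on this page); the submit_response method and the administer callback are assumed names for illustration, not confirmed SDK surface:

```python
def run_diagnostic(client, item_bank_id, user_id, administer):
    """Drive one diagnostic session until the engine converges.

    `administer(item)` is the caller's function that presents an item
    and returns (response, time_ms). `client.submit_response` is an
    assumed method mirroring the step-2 endpoint above.
    """
    session = client.create_session(item_bank_id=item_bank_id,
                                    external_user_id=user_id)
    step = session
    pending = session["first_items"]
    while pending:  # the engine returns next_items = null once converged
        for item in pending:
            response, time_ms = administer(item)
            step = client.submit_response(session["session_id"],
                                          item_id=item["item_id"],
                                          response=response,
                                          time_ms=time_ms)
        pending = step["next_items"]
    return step["state"]["estimates"]
```

The inner for-loop also covers the multi-select case, where each step can return several items at once.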
HOW IT WORKS
You upload an item bank
An item bank is a collection of items (questions, tests, probes) with difficulty levels and dimension tags. Upload once via POST /v1/item-banks, then reference it by ID in every session.
Sessions track state
Each session maintains a per-dimension estimate that updates after every response. The estimates object shows the current levels — positive values indicate above-average, negative below-average.
Response quality matters
Send optional time_ms, used_hint, and answer_changes fields. The engine uses these to weight each response appropriately — rushed or uncertain responses carry less weight.
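The engine's actual weighting is internal, but the idea can be pictured with a toy heuristic. Everything below is hypothetical — the thresholds and multipliers are invented for illustration, not QLM's formula:

```python
def response_weight(time_ms, used_hint=False, answer_changes=0,
                    min_time_ms=2000):
    """Toy confidence weight in [0, 1] for a single response (illustration only)."""
    weight = 1.0
    if time_ms < min_time_ms:        # rushed: more likely a guess
        weight *= 0.5
    if used_hint:                    # hinted: weaker evidence of mastery
        weight *= 0.7
    weight *= 0.9 ** answer_changes  # hesitation compounds the discount
    return weight

print(response_weight(8400))  # 1.0 — a clean, unhurried response counts fully
print(response_weight(900, used_hint=True, answer_changes=2))  # heavily discounted
```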
Smart stopping
The confidence.level climbs with each response. When it crosses your threshold (default 0.90), the session completes automatically and next_items returns null. No more questions needed.
ALSO AVAILABLE
- Warm start — returning users start from their previous estimates, reaching the confidence threshold even faster.
- Event history — GET /v1/sessions/{id}/events returns the full sequence of items presented, responses submitted, and mode changes.
- Early termination — DELETE /v1/sessions/{id} ends a session early and returns the best estimates available so far.
- 10 domains — clinical, cybersecurity, financial, education, talent, pharmaceutical, manufacturing, fraud, compliance, and more. Each domain uses specialized item structures through /v1/diagnostics/{domain}/sessions.
- Multi-select — set select_k to 2–10 to receive multiple items per step, useful for parallel testing or panel-style assessments.
Curriculum Sequencer
Order your curriculum for best results. Send a curriculum with prerequisites — get the best sequence for each learner.
{
  "curriculum": [
    {"item_id": "mod_01", "title": "Intro to Algebra", "skills": ["algebra"], "prerequisites": [], "difficulty": 0.3, "estimated_minutes": 45}
  ],
  "learner_state": {"algebra": 0.2, "geometry": 0.5},
  "optimize_for": "efficiency"
}
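Whatever objective the sequencer optimizes, any valid output must respect the prerequisites field: a module can only appear after everything it depends on. That hard constraint is a topological ordering, sketched below with Python's standard-library graphlib (the four-module curriculum is invented for illustration):

```python
from graphlib import TopologicalSorter

curriculum = [
    {"item_id": "mod_01", "prerequisites": []},
    {"item_id": "mod_02", "prerequisites": ["mod_01"]},
    {"item_id": "mod_03", "prerequisites": ["mod_01"]},
    {"item_id": "mod_04", "prerequisites": ["mod_02", "mod_03"]},
]

# Map each module to the set of modules that must come before it.
graph = {m["item_id"]: set(m["prerequisites"]) for m in curriculum}
order = list(TopologicalSorter(graph).static_order())
print(order)  # mod_01 first, mod_04 last; mod_02/mod_03 may appear in either order
```

The sequencer's job is then choosing, among all valid topological orders, the one that best fits the learner_state and the optimize_for objective.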
Tutoring Intelligence
AI-powered explanations. Analyze student responses, generate hints, diagnose misconceptions.
{
  "item_id": "q_101",
  "correct_answer": "x = 4",
  "student_answer": "x = -4",
  "skill_id": "algebra",
  "difficulty": 0.45
}
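The payload above shows a classic sign-flip error. A toy local pre-check for that one pattern is sketched below — a hypothetical heuristic only; the actual misconception diagnosis is the AI-powered analysis the service performs server-side:

```python
def quick_diagnosis(correct_answer, student_answer):
    """Toy pre-check for one common algebra slip: a flipped sign.

    Hypothetical heuristic for illustration; not the service's analysis.
    """
    c = correct_answer.replace(" ", "")
    s = student_answer.replace(" ", "")
    if c == s:
        return "correct"
    # "x=4" vs "x=-4" in either direction looks like a sign error.
    if s == c.replace("=", "=-") or c == s.replace("=", "=-"):
        return "possible_sign_error"
    return "needs_full_analysis"

print(quick_diagnosis("x = 4", "x = -4"))  # possible_sign_error
```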
Longitudinal Analytics
See who's struggling before they fail. Track mastery over time with trajectories, predictions, at-risk detection, and cohort comparison.
{
  "trajectory": [
    {"date": "2026-03-01", "skills": {"algebra": 0.4}, "confident": false},
    {"date": "2026-03-15", "skills": {"algebra": 0.7}, "confident": true}
  ],
  "skill_trends": {"algebra": "improving"}
}
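The skill_trends label in the response above can be pictured as a comparison over the trajectory. A minimal sketch; the threshold and classification rule are assumptions, not the service's actual trend logic:

```python
def trend(points, skill, epsilon=0.05):
    """Classify a skill's trajectory from first and last estimates (illustration)."""
    values = [p["skills"][skill] for p in points if skill in p["skills"]]
    if len(values) < 2:
        return "insufficient_data"
    delta = values[-1] - values[0]
    if delta > epsilon:
        return "improving"
    if delta < -epsilon:
        return "declining"
    return "stable"

trajectory = [
    {"date": "2026-03-01", "skills": {"algebra": 0.4}},
    {"date": "2026-03-15", "skills": {"algebra": 0.7}},
]
print(trend(trajectory, "algebra"))  # improving
```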
Content Calibration
Improve your question bank automatically. Submit your question bank — get it reviewed, tagged, and calibrated by domain experts.
{
  "bank_id": "ib_my_items",
  "priority": "standard",
  "calibration_types": ["review", "tag", "bias", "difficulty"]
}
SDKs
Python and JavaScript clients that wrap both products. Install, configure your API key, and make calls with typed models instead of raw HTTP.
PYTHON
from qlm import QLMClient

client = QLMClient(api_key="qlm_sk_...")

# Assessment Engine
result = client.optimize(
    domain="education",
    items=my_items,
    learner={"learner_id": "s_01", "skill_level": 0.3},
    num_select=3,
)
for item in result.selected_items:
    print(f"{item.rank}. {item.item_id}")

# Diagnostic Service
session = client.create_session(
    item_bank_id="ib_001",
    external_user_id="student_01",
)
print(session["first_items"])
JAVASCRIPT
import { QLMClient } from '@qlm/sdk';

const client = new QLMClient({ apiKey: 'qlm_sk_...' });

// Assessment Engine
const result = await client.optimize({
  domain: 'education',
  items: myItems,
  learner: { learnerId: 's_01', skillLevel: 0.3 },
  numSelect: 3,
});
result.selectedItems.forEach(item =>
  console.log(`${item.rank}. ${item.itemId}`)
);

// Diagnostic Service
const session = await client.createSession({
  itemBankId: 'ib_001',
  externalUserId: 'student_01',
});
Ready to start?
Get your API key and make your first call in 5 minutes.
Get API Key →
Or get notified when new features and domains launch.
No spam. Product updates only.