Meridian documentation
Meridian routes tasks to relevant skills using a two-stage pipeline: Llama-3.3-70B generates candidate skills, then an open-domain orbital classifier assigns each one a celestial class (planet · moon · trojan · asteroid · comet · irregular) and ranks them against the task. The same backend powers the browser miniapp and the npm-installable MCP server.
Install
1. MCP for Claude Code, Cursor, Windsurf, etc.
npm install -g meridian-skills-mcp
claude mcp add meridian meridian-mcp
Same install works in Cursor, Windsurf, Goose, Continue, and any client that speaks stdio MCP. The package is a ~5 KB thin client that posts to the public router endpoint — no local model, no Python, internet required.
2. Browser miniapp (no install)
Open ask-meridian.uk/miniapp.
Type a task, or hit the 📷 Scan an object button to use real-time camera object detection as the input. Same backend, fancier UI.
3. Direct HTTP
curl -X POST https://ask-meridian.uk/api/orbital-route \
-H "content-type: application/json" \
-d '{"task":"set up rate limiting on a public API","limit":5}'
Public, no key required (today). See latency & quota for realistic expectations.
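The same call works from any HTTP client. A minimal sketch in JavaScript — the helper name buildRouteRequest is illustrative, not part of any published SDK:

```javascript
// Build the POST options for /api/orbital-route.
// `task` is required (≤ 800 chars); `limit` is optional (1-10, default 5).
function buildRouteRequest(task, limit = 5) {
  return {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ task, limit }),
  };
}

// Usage (requires network; Node 18+ ships a global fetch):
// const res = await fetch("https://ask-meridian.uk/api/orbital-route",
//   buildRouteRequest("set up rate limiting on a public API"));
// const { selected } = await res.json();
```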
Quickstart
Once the MCP is registered, your AI client auto-discovers the tool. Just ask:
Use meridian to find skills for setting up a public API rate limiter.
The client calls meridian.route_task({ task, limit }), the backend generates 5 LLM
candidates, classifies them orbitally, and returns ranked skills with full markdown bodies — which
your client lifts straight into its context window.
Pipeline
POST /api/orbital-route { task, limit }
│
↓
Cloudflare Workers AI
Llama-3.3-70B-fp8-fast
→ 5 candidate skills
{slug, description, keywords, body}
│
↓
Open-domain orbital classifier (JS)
│
─────────┴──────────
physics: mass · scope · independence ·
cross_domain · fragmentation · drag · dep_ratio
class: planet | moon | trojan | asteroid | comet | irregular
parent: nearest sibling by Jaccard
system: forge | signal | mind (term-set affinity)
lagrange: bridge potential between two strongest systems
│
↓
route_score = (kw·10 + desc·5 + body·1)
× diversity × class_boost × versatility
│
↓
ranked array
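The scoring step in the diagram can be read as a single formula. A simplified sketch (the real classifier runs server-side; this only mirrors the documented arithmetic):

```javascript
// route_score = (kw·10 + desc·5 + body·1) × diversity × class_boost × versatility
function routeScore({ kwHits, descHits, bodyHits }, diversity, classBoost, versatility) {
  const base = kwHits * 10 + descHits * 5 + bodyHits * 1; // weighted hit counts
  return base * diversity * classBoost * versatility;     // then the multipliers
}
```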
HTTP API
POST /api/orbital-route
The single canonical endpoint. Both miniapp and MCP call it.
Request body
{
"task": "string (≤ 800 chars, required)",
"limit": 5, // 1-10 (default 5)
"candidates": 5 // 3-8 (default 5; max LLM-generated before slicing)
}
Response (200)
{
"task": "...",
"note": "Fully dynamic: skills generated by ...",
"confidence": "strong | moderate | weak | none",
"top_score": 109.3,
"candidates_generated": 5,
"selected": [ /* see Response shape below */ ],
"timing": { "llm_ms": 38000, "classify_ms": 2, "total_ms": 38002 }
}
Errors
- 400 — task missing / too long / invalid JSON.
- 502 — Workers AI failed: { "error": "LLM call failed: …" }. Safe to retry.
- 503 — AI binding not configured (only on a misconfigured deployment).
MCP tool
v1.0.0 of meridian-skills-mcp exposes one tool:
route_task(task: string, limit?: integer)
Posts to /api/orbital-route, formats the response as agent-readable markdown,
returns it in a single text content block. Each generated skill ships with its full body so the
caller LLM can read it without an extra fetch.
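The formatting step might look roughly like this — the exact markdown layout the published package emits is an assumption, not documented behaviour:

```javascript
// Render selected[] skills as a single agent-readable markdown string.
// Layout here is illustrative; the real tool may order fields differently.
function formatSkills(selected) {
  return selected
    .map(
      (s) =>
        `## ${s.name} (${s.classification.class}, score ${s.route_score})\n` +
        `${s.description}\n\n${s.body}`
    )
    .join("\n\n---\n\n"); // one text block, skills separated by rules
}
```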
The earlier get_skill, list_skills, and search_skills tools
from the 0.3.x line were removed — they were a closed-domain abstraction that has no meaning
under dynamic LLM generation. See self-hosting if you want the curated
corpus path.
Response shape
Each entry in selected[]:
{
"slug": "redis-token-bucket",
"name": "redis-token-bucket",
"description": "Atomic Lua-driven token bucket in Redis for sub-millisecond rate limiting.",
"body": "## Use It For\n- ...\n\n## Workflow\n1. ...\n",
"keywords": ["redis", "lua", "token-bucket", "..."],
"route_score": 109.3,
"classification": {
"class": "trojan",
"class_scores": { "planet": 0.41, "moon": 0.07, "trojan": 0.62, "..." : 0 },
"physics": {
"mass": 0.58,
"scope": 0.71,
"independence": 0.34,
"cross_domain": 0.18,
"fragmentation": 0.12,
"drag": 0.21,
"dep_ratio": 0.64,
"lagrange_potential": 0.22,
"star_system": "forge",
"star_affinity": { "forge": 0.78, "signal": 0.06, "mind": 0.04 }
},
"parent": "api-rate-limiting",
"star_system": "forge",
"lagrange_systems": ["forge"],
"lagrange_potential": 0.22,
"decision_rule": "Companion at L4/L5 of api-rate-limiting — ...",
"habitable_zone": true,
"tidal_lock": true
},
"breakdown": {
"kw_hits": 3,
"desc_hits": 2,
"body_hits": 11,
"diversity_mult": 1.36,
"class_mult": 1.20,
"lagrange_mult": 1.11,
"tokens": ["rate", "limiting", "redis"]
},
"why": "3 keyword · 2 desc · 11 body · trojan×1.20 · ..."
}
Celestial classes
Each generated skill is assigned exactly one class — argmax over six per-class scores:
| Class | Decision rule | Score boost |
|---|---|---|
| Planet | Domain anchor — high mass × scope × independence. Loads as a primary skill. | ×1.30 |
| Trojan | Companion at L4/L5 of a parent — high dep_ratio, low fragmentation. Co-activates permanently with its parent. | ×1.20 |
| Irregular | Cross-domain bridge — spans multiple star systems with high fragmentation. | ×1.10 |
| Moon | Sub-skill orbiting a parent — low independence, dep_ratio drives loading. | ×1.05 |
| Asteroid | Narrow-scope niche tool — low mass but independently useful. | ×0.85 |
| Comet | Specialised / occasional — high drag, high cross_domain, low dep_ratio. Triggers rarely. | ×0.80 |
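The "exactly one class" rule is a plain argmax over the six per-class scores, as in this sketch:

```javascript
// Pick the celestial class with the highest per-class score (argmax).
// Ties resolve to whichever key comes first in the object.
function classify(classScores) {
  return Object.entries(classScores).reduce((best, cur) =>
    cur[1] > best[1] ? cur : best
  )[0];
}
```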
Physics signature
Per-skill features derived from content alone (no curated lookup tables):
- mass — log(body_length) + keyword count, normalized.
- scope — keyword diversity + cross_domain contribution.
- independence — 1 − Jaccard with siblings, modulated by mass.
- cross_domain — Shannon entropy across forge / signal / mind term sets.
- fragmentation — keyword length stddev + cross_domain.
- drag — proportion of long / hyphenated specialised terms.
- dep_ratio — relatedness to siblings in the batch: max(token-Jaccard × 1.5, keyword-Jaccard × 2.2), clamped to [0, 1]. Keyword overlap is the cleaner relatedness signal.
- lagrange_potential — boosted minimum of the two strongest system affinities (min(top₂) × 1.4, clamped to [0, 1]).
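The two formula-driven features, dep_ratio and lagrange_potential, can be sketched directly from their definitions (Jaccard inputs here are plain string sets; function names are illustrative):

```javascript
const clamp01 = (x) => Math.max(0, Math.min(1, x));

// Jaccard similarity of two sets of strings: |A ∩ B| / |A ∪ B|.
function jaccard(a, b) {
  const A = new Set(a), B = new Set(b);
  const inter = [...A].filter((x) => B.has(x)).length;
  const union = new Set([...A, ...B]).size;
  return union === 0 ? 0 : inter / union;
}

// dep_ratio: max(token-Jaccard × 1.5, keyword-Jaccard × 2.2), clamped to [0, 1].
function depRatio(tokenJaccard, keywordJaccard) {
  return clamp01(Math.max(tokenJaccard * 1.5, keywordJaccard * 2.2));
}

// lagrange_potential: min of the two strongest system affinities × 1.4, clamped.
function lagrangePotential(affinity) {
  const top2 = Object.values(affinity).sort((a, b) => b - a).slice(0, 2);
  return clamp01(Math.min(...top2) * 1.4);
}
```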
Star systems
Three system term sets (lifted verbatim from skill_orbit.py) determine which gravitational
well a skill orbits:
- Forge — devops/backend: api · docker · deploy · network · backend · nginx · ssh · ci/cd · build · server · database · redis · cache · auth · test · kubernetes · tunnel · vpn · observability
- Signal — growth/marketing: seo · serp · keyword · content · email · marketing · campaign · analytics · conversion · funnel · brand · publish · backlink · cohort · attribution · linkedin · podcast · persona
- Mind — AI/research: llm · prompt · reasoning · agent · embedding · vector · rag · evaluation · orchestration · memory · knowledge · openai · anthropic · inference · fine-tune · transcript · synthesis
Skills with strong affinity in ≥ 2 systems get a versatility boost (1 + min(0.30, lagrange_potential × 0.5)) on the route score.
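That boost is small and bounded. A direct transcription of the formula:

```javascript
// Versatility boost on the route score: 1 + min(0.30, lagrange_potential × 0.5).
// Caps out at ×1.30 no matter how high lagrange_potential gets.
function versatilityBoost(lagrangePotential) {
  return 1 + Math.min(0.3, lagrangePotential * 0.5);
}
```

For the example skill in Response shape (lagrange_potential 0.22), this yields the ×1.11 seen as lagrange_mult in its breakdown.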
Configuration
Environment variables for the MCP client:
| Variable | Default | Purpose |
|---|---|---|
| MERIDIAN_API_URL | https://ask-meridian.uk/api/orbital-route | Override the backend (e.g. a self-hosted Pages clone or a staging deployment). |
| MERIDIAN_API_KEY | (none) | Sent as Authorization: Bearer …. Currently unused server-side; reserved for future per-user gating. |
| MERIDIAN_TIMEOUT_MS | 90000 | Abort the fetch after this many ms. Increase if your client has a tool-call timeout shorter than 60 s. |
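For example, to point the MCP at a self-hosted backend with a longer timeout, export the variables before the client launches the server (the URL below is a placeholder — substitute your own deployment):

```shell
# Placeholder values — replace with your own deployment URL.
export MERIDIAN_API_URL="https://my-meridian-clone.pages.dev/api/orbital-route"
export MERIDIAN_TIMEOUT_MS=120000
```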
Latency & quota
- Per call — typical 30–50 s. Llama-3.3-70B does the bulk of the work; the JS classifier runs in < 5 ms.
- Free tier — Workers AI allows ~10 000 neurons/day account-wide. Each call costs ~150–300 neurons, so the realistic budget is ~30–60 calls/day shared across all clients before the LLM step starts returning errors.
- Failure mode — when quota is exhausted the function returns 502 LLM call failed. Retry after the daily reset.
- No silent fallback — there is no static-corpus fallback; the MCP and miniapp surface the error directly.
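Because the 502 is safe to retry, callers can wrap the request in a bounded retry with backoff. A sketch — the attempt count and backoff numbers are arbitrary choices, not recommendations from the service:

```javascript
// Retry an async request fn up to `tries` times while it reports a 502.
// `fn` must resolve to an object with a numeric `status` property.
async function withRetry(fn, tries = 3, backoffMs = 2000) {
  let last;
  for (let i = 0; i < tries; i++) {
    last = await fn();
    if (last.status !== 502) return last;           // success or non-retryable
    await new Promise((r) => setTimeout(r, backoffMs * (i + 1)));
  }
  return last; // still 502 after all attempts — surface the error to the caller
}
```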
Self-hosting (closed-domain, offline)
The 1.0.0 release is a thin client that requires the public backend. If you need the older local model + curated corpus path, pin to:
npm install -g meridian-skills-mcp@0.3.2
That ships:
- The skill_orbit.py Python orbital classifier.
- An @xenova/transformers MiniLM-L6-v2 embedding pre-filter (~23 MB ONNX model, downloaded on first call).
- An 88-skill curated corpus of SKILL.md files.
- Four tools: route_task, get_skill, list_skills, search_skills.
Tradeoff: 0.3.2 is closed-domain (only the curated 88 skills classify properly), but needs no internet and routes in < 3 s. 1.0.0 made the opposite tradeoff — open-domain (any task), but HTTP-only at 30–50 s per call.