What this is
Lookout is an invented BI product (Tableau, scoped down) used as a take-home challenge to design a
realistic MCP surface. The mock holds 8 datasources, 51 workbooks, and ~390 views in SQLite — about
118k rows total, 100k of which sit in the Product Usage datasource so query-time and result-size limits
actually bite. The contract is read-only; the only side effects are mock-internal — writing a render to
fs/renders/ or a CSV to fs/exports/.
This page is the operator surface for the live deployment. The full design lives in the project's
SPEC.md (see source repo).
30-second quickstart
- Grab a demo API token: click to mint a token. Public demo credential — same privileges for everyone. Edge rate-limit: 60 req/min on /mcp, 30 req/min on /token.
- Drop the JSON snippet for your agent into its config (below).
- Ask your agent "using the lookout MCP, list the datasources and tell me which one is offline."
If you'd rather hit it raw:
curl -sS https://lookermcp.georgeplaton.com/mcp \
-H "Authorization: Bearer $LOOKOUT_TOKEN" \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
Configure your agent
Lookout speaks MCP JSON-RPC over HTTP. Most desktop agent UIs assume MCP servers are
local subprocesses speaking stdio — for those, the canonical bridge is the npm package
mcp-remote:
it spawns a tiny stdio→HTTP proxy that forwards every MCP frame to a remote URL. Agents that natively
speak remote MCP (Claude Code's --transport http, the Anthropic Messages API, OpenAI Agents
SDK's MCPServerStreamableHttp) skip the bridge.
Claude Desktop
Edit ~/Library/Application Support/Claude/claude_desktop_config.json on macOS
(or %APPDATA%\Claude\claude_desktop_config.json on Windows):
{
"mcpServers": {
"lookout": {
"command": "npx",
"args": [
"-y",
"mcp-remote",
"https://lookermcp.georgeplaton.com/mcp",
"--header", "Authorization: Bearer ${LOOKOUT_TOKEN}"
],
"env": { "LOOKOUT_TOKEN": "paste-your-token-here" }
}
}
}
Restart Claude Desktop. The 🔌 icon in the prompt should list lookout with 12 tools.
Claude Code (CLI)
claude mcp add --transport http lookout https://lookermcp.georgeplaton.com/mcp \
--header "Authorization: Bearer $LOOKOUT_TOKEN"
/mcp inside a session shows the connection state.
Cursor
Settings → MCP → "Add new MCP Server":
{
"lookout": {
"url": "https://lookermcp.georgeplaton.com/mcp",
"headers": { "Authorization": "Bearer ${LOOKOUT_TOKEN}" }
}
}
VS Code (Continue, Cline, RooCode)
{
"mcpServers": {
"lookout": {
"transport": "http",
"url": "https://lookermcp.georgeplaton.com/mcp",
"headers": { "Authorization": "Bearer YOUR_TOKEN" }
}
}
}
Anthropic Messages API
curl https://api.anthropic.com/v1/messages \
-H "x-api-key: $ANTHROPIC_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-H "anthropic-beta: mcp-client-2025-04-04" \
-H "content-type: application/json" \
-d '{
"model": "claude-sonnet-4-6",
"max_tokens": 1024,
"mcp_servers": [{
"type": "url",
"url": "https://lookermcp.georgeplaton.com/mcp",
"name": "lookout",
"authorization_token": "'"$LOOKOUT_TOKEN"'"
}],
"messages": [{"role":"user","content":"List Lookout datasources and flag offline ones."}]
}'
OpenAI Agents SDK (Python)
import os

from agents.mcp import MCPServerStreamableHttp
lookout = MCPServerStreamableHttp(
name="lookout",
params={
"url": "https://lookermcp.georgeplaton.com/mcp",
"headers": {"Authorization": f"Bearer {os.environ['LOOKOUT_TOKEN']}"},
},
)
# Agent(..., mcp_servers=[lookout])
LangChain
import os

from langchain_mcp_adapters.client import MultiServerMCPClient
client = MultiServerMCPClient({
"lookout": {
"url": "https://lookermcp.georgeplaton.com/mcp",
"transport": "streamable_http",
"headers": {"Authorization": f"Bearer {os.environ['LOOKOUT_TOKEN']}"},
}
})
tools = await client.get_tools()
Activities the agent can drive (W1–W8)
The brief defines eight analyst workflows. Twelve tools cover all of them. The mapping below is the agent-facing playbook — copy a sample call into a prompt, or fire the JSON-RPC directly.
| # | Workflow | Tools used |
|---|---|---|
| W1 | Discover datasources & their schemas | list_datasources → get_datasource |
| W2 | Browse workbooks & views | list_workbooks → get_workbook → list_views |
| W3 | Apply filters to an existing view | get_view → query_view_data with filterOverrides |
| W4 | Run a raw analytical SQL query | get_datasource → query_datasource |
| W5 | Compare two time periods (no compare_periods tool — by design) | two query_view_data calls with different time-window filterOverrides |
| W6 | Inspect chart spec, summary stats, preview rows | get_view → optional query_view_data |
| W7 | Render a chart as PNG/SVG with overrides | create_view_render → if status ≠ succeeded, poll get_render_job |
| W8 | Export the full data behind a chart as CSV | export_view_csv → if status ≠ completed, poll get_job |
Sample calls + response shapes
W1 — list datasources, then get full schema
// request
{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
"name":"list_datasources","arguments":{"nameContains":"sales","pageSize":20}
}}
// response (inside result.content[0].text)
{
"items":[{
"id":"ds_4hk2g8r9m1p3v7q5x6w8a2",
"name":"Sales Pipeline",
"connectionType":"postgres",
"status":"online",
"lastRefreshedAt":"2026-05-03T22:14:00Z",
"fieldCount":14
}],
"nextCursor":null,
"warnings":[]
}
{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{
"name":"get_datasource","arguments":{"datasourceId":"ds_4hk2g8r9m1p3v7q5x6w8a2"}
}}
// → fields[*].dataType in {string,integer,float,boolean,date,timestamp}
// fields[*].semanticRole in {dimension,measure,time,identifier}
// warnings may include EXTRACT_STALE if past freshnessSlaSeconds.
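The freshness rule above can be reproduced client-side. A minimal sketch, assuming the timestamps are ISO-8601 strings as in the examples and that staleness means `now` is more than `freshnessSlaSeconds` past `lastRefreshedAt`:

```python
from datetime import datetime, timedelta, timezone

# Sketch of the EXTRACT_STALE condition: a datasource is stale when now
# exceeds lastRefreshedAt by more than freshnessSlaSeconds. Field names
# follow the get_datasource response shown above.
def is_stale(last_refreshed_at: str, freshness_sla_seconds: int, now=None) -> bool:
    now = now or datetime.now(timezone.utc)
    refreshed = datetime.fromisoformat(last_refreshed_at.replace("Z", "+00:00"))
    return now - refreshed > timedelta(seconds=freshness_sla_seconds)
```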
W4 — constrained SQL aggregation
{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
"name":"query_datasource","arguments":{
"datasourceId":"ds_4hk2g8r9m1p3v7q5x6w8a2",
"sql":"SELECT stage, COUNT(*) AS n, SUM(amount_usd) AS total FROM data WHERE close_quarter='2026Q1' GROUP BY stage ORDER BY total DESC"
}
}}
// response
{
"queryId":"qry_01h8x9m3n5p7q9r2t4v6w8",
"datasourceId":"ds_4hk2g8r9m1p3v7q5x6w8a2",
"executedAt":"2026-04-22T14:32:07Z",
"columns":[
{"name":"stage","type":"string","semanticRole":"dimension","nullable":false},
{"name":"n","type":"integer","semanticRole":"measure","nullable":true},
{"name":"total","type":"float","semanticRole":"measure","nullable":true}
],
"rows":[["Negotiation",18,1840000.0],["Proposal",42,1284000.0]],
"rowCount":2,"pageSize":500,"nextCursor":null,"truncated":false,"warnings":[]
}
Rows are positional arrays ordered by the columns list (BigQuery / Snowflake REST style), not records keyed by column name — that saves roughly 40% of tokens on wide results. Re-key client-side when needed. The single allowed table reference is data; the framework rewrites it to the underlying per-datasource table.
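Re-keying is a one-liner with `zip`. A minimal sketch using the response shape from the W4 example above:

```python
# Re-key positional rows into dicts using the columns list.
# "result" mirrors the query_datasource response shape shown above.
result = {
    "columns": [
        {"name": "stage", "type": "string"},
        {"name": "n", "type": "integer"},
        {"name": "total", "type": "float"},
    ],
    "rows": [["Negotiation", 18, 1840000.0], ["Proposal", 42, 1284000.0]],
}

names = [c["name"] for c in result["columns"]]
records = [dict(zip(names, row)) for row in result["rows"]]
# records[0] == {"stage": "Negotiation", "n": 18, "total": 1840000.0}
```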
W3 + W5 — filtered view data, repeated for two periods
{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
"name":"query_view_data","arguments":{
"viewId":"vw_a1b2c3d4e5f6g7h8i9j0k1",
"filterOverrides":[
{"field":"region","op":"eq","value":"EMEA"},
{"field":"close_quarter","op":"eq","value":"2026Q1"}
],
"pageSize":200
}
}}
// response carries the resolved appliedFilters so you see what actually ran:
{
"queryId":"qry_...","viewId":"vw_...","executedAt":"...",
"appliedFilters":[
{"field":"close_quarter","op":"eq","value":"2026Q1"},
{"field":"region","op":"eq","value":"EMEA"}
],
"columns":[...],"rows":[...],"rowCount":18,"pageSize":200,
"nextCursor":null,"truncated":false,"warnings":[]
}
For W5 (compare-periods) call this twice with different time-window overrides. There is intentionally no compare_periods tool — the brief specifies the agent does the comparison client-side.
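The client-side comparison can be sketched as two calls plus a diff. `call_tool` here is a hypothetical helper wrapping the JSON-RPC `tools/call` request; argument and response field names follow the examples above:

```python
# W5 sketch: query_view_data twice with different time windows, diff client-side.
def compare_quarters(call_tool, view_id, q_a, q_b, measure="total"):
    def period_total(quarter):
        res = call_tool("query_view_data", {
            "viewId": view_id,
            "filterOverrides": [{"field": "close_quarter", "op": "eq", "value": quarter}],
        })
        cols = [c["name"] for c in res["columns"]]
        i = cols.index(measure)
        return sum(row[i] or 0 for row in res["rows"])

    a, b = period_total(q_a), period_total(q_b)
    return {"a": a, "b": b, "delta": b - a}
```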
W7 — render a chart (job-shaped, often inline)
{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
"name":"create_view_render","arguments":{
"viewId":"vw_a1b2c3d4e5f6g7h8i9j0k1","format":"svg",
"filterOverrides":[{"field":"region","op":"eq","value":"EMEA"}]
}
}}
// inline-success path (small or cached render)
{
"renderJobId":"rj_z4mq6sv8c1pkw0n3xtg2hr",
"status":"succeeded",
"filePath":"fs/renders/views/vw_.../9f2c1ab847e3.svg",
"inlineBase64":"PHN2ZyB4bWxucz0i...",
"appliedFilters":[...],
"cached":true,"createdAt":"...","completedAt":"..."
}
// deferred path (large render)
{
"renderJobId":"rj_...",
"status":"running","pollAfterMs":1000,...
}
Render hash key = sha256({viewId, viewUpdatedAt, datasourceLastRefreshedAt, format, width, height, appliedFilters}). Identical inputs → identical filePath & bytes. Distinct filter sets get distinct hashes — W5 renders the same view at two windows with no collision. RENDER_TIMEOUT is reported as status: "failed" + error.code, never as an envelope error — branching is always status === "succeeded" ? use(filePath) : recover().
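The hash-key behavior can be sketched as follows. The exact canonicalization (JSON key order, filter sorting) is an assumption — the spec only states which fields participate — but the properties it guarantees are the ones described above: identical inputs hash identically, distinct filter sets do not collide:

```python
import hashlib
import json

# Sketch of the render cache key: sha256 over the participating fields.
# Canonicalization details (sort order) are assumptions for illustration.
def render_hash(view_id, view_updated_at, ds_refreshed_at, fmt, width, height, applied_filters):
    payload = {
        "viewId": view_id,
        "viewUpdatedAt": view_updated_at,
        "datasourceLastRefreshedAt": ds_refreshed_at,
        "format": fmt,
        "width": width,
        "height": height,
        "appliedFilters": sorted(
            applied_filters, key=lambda f: (f["field"], f["op"], str(f["value"]))
        ),
    }
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()[:12]
```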
W8 — export full CSV (shape-stable envelope)
{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{
"name":"export_view_csv","arguments":{
"viewId":"vw_a1b2c3d4e5f6g7h8i9j0k1",
"filterOverrides":[{"field":"region","op":"eq","value":"EMEA"}]
}
}}
// every call returns the same envelope; inlineCsv is non-null iff < 256 KB
{
"jobId":"job_8a2x6w5q1y3z2c5v7n9b3p",
"status":"completed",
"resultPath":"fs/exports/views/job_.../data.csv",
"manifestPath":"fs/exports/views/job_.../manifest.json",
"format":"csv","rowCount":2189,
"inlineCsv":"stage,amount_usd\nDiscovery,84000.0\n...",
"appliedFilters":[...],"createdAt":"...","completedAt":"..."
}
Branching is inlineCsv != null ? useInline() : readFile(resultPath), never "if jobId then poll else inline". manifestPath restores the column types CSV strips, so an agent reading resultPath still gets float vs string typing without re-running query_view_data.
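The branch plus manifest read can be sketched like this. `read_file` stands in for whatever file access the agent host provides, and the manifest's internal structure is an assumption — the spec only says it restores column types:

```python
import json

# W8 sketch: inline iff inlineCsv is non-null, otherwise read resultPath
# and pair it with the manifest that restores column types.
def consume_export(envelope, read_file):
    """Return (csv_text, manifest). manifest is None on the inline path."""
    if envelope["inlineCsv"] is not None:
        return envelope["inlineCsv"], None
    csv_text = read_file(envelope["resultPath"])
    manifest = json.loads(read_file(envelope["manifestPath"]))
    return csv_text, manifest
```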
Failure modes worth seeing first-hand
// DATASOURCE_OFFLINE — seeded on one source so the agent meets it organically
{"name":"query_datasource","arguments":{
"datasourceId":"ds_<the_offline_one>","sql":"SELECT 1 FROM data"}}
// → { "error": { "code": "DATASOURCE_OFFLINE", "message": "...", "details": {...} } }
// SQL_FORBIDDEN_CONSTRUCT — caught pre-parse
{"name":"query_datasource","arguments":{
"datasourceId":"ds_...","sql":"DELETE FROM data; SELECT 1 FROM data"}}
// → { "error": { "code": "SQL_FORBIDDEN_CONSTRUCT", "message": "Only SELECT statements are permitted." } }
// EXTRACT_STALE — warning, not error; one source is deliberately past its SLA
{"name":"get_datasource","arguments":{"datasourceId":"ds_<the_stale_one>"}}
// → { "id":"...", "fields":[...], "warnings":[{"code":"EXTRACT_STALE","message":"..."}] }
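The two failure channels above arrive differently, and a client should branch on that: a top-level `error` object is fatal for the call, while `warnings` ride alongside an otherwise-usable result. A minimal sketch:

```python
# Classify a tool result payload: fatal error envelope vs. usable result
# carrying warnings. Shapes follow the examples above.
def classify(payload):
    if "error" in payload:
        return ("error", payload["error"]["code"])
    return ("ok", [w["code"] for w in payload.get("warnings", [])])
```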
Tool reference
Tool names on the wire are snake_case. Input keys are camelCase (pageSize, viewId, filterOverrides). Result data column names are snake_case (warehouse-style identifiers). Live machine-readable manifest with full ajv schemas: /manifest.json (ETag + max-age=300; well-behaved clients fetch it once per deploy).
| Tool | Purpose | Key inputs |
|---|---|---|
| Datasource (3) | | |
| list_datasources | Discover datasources before formulating a query. | q (fuzzy), nameContains, connectionType, pageSize |
| get_datasource | Full schema + freshness for one datasource. | datasourceId |
| query_datasource | Constrained SELECT against a datasource. Single-statement, alias data only. Wall time 5 s, row cap 5000. | datasourceId, sql, format, pageSize, cursor |
| Workbook (2) | | |
| list_workbooks | Browse workbook collections. | q, nameContains, updatedAfter, pageSize |
| get_workbook | Workbook + its inline view summaries (no N+1). | workbookId |
| View (3) | | |
| list_views | Find views by name, chart type, or workbook. Embeds workbook + datasource. | workbookId, chartType, q, nameContains, pageSize |
| get_view | Chart spec + encoding + saved filters + summary stats + ≤50-row preview. | viewId |
| query_view_data | Tabular data behind a view, with filterOverrides. Default 100 rows, max 1000. | viewId, filterOverrides, format, pageSize |
| Render (2) | | |
| create_view_render | Render PNG/SVG. Job-shaped envelope; immediate success on small/cached, queued/running on heavy. | viewId, format, width, height, filterOverrides |
| get_render_job | Poll a render job. Terminal failures live in status: "failed" + error. | renderJobId (rj_*) |
| Export (2) | | |
| export_view_csv | Full CSV export. Shape-stable envelope; inlineCsv populated when < 256 KB. | viewId, filterOverrides |
| get_job | Poll an export job. | jobId (job_*) |
Filter shape
{ "field": "region", "op": "in", "value": ["EMEA","NA"] }
// ops: eq neq in not_in gt gte lt lte between like is_null not_null
// override-merge: filterOverrides REPLACE saved filters on the same field;
// other saved filters stay applied.
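The override-merge rule above can be sketched in a few lines — an override replaces any saved filter on the same field, while saved filters on other fields survive:

```python
# Sketch of the override-merge rule: filterOverrides REPLACE saved filters
# on the same field; other saved filters stay applied.
def merge_filters(saved, overrides):
    overridden = {f["field"] for f in overrides}
    return [f for f in saved if f["field"] not in overridden] + list(overrides)
```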
Pagination & truncation — four distinct signals
| Signal | Meaning | Recovery |
|---|---|---|
| nextCursor set, truncated:false | More pages exist for this query. | Re-call with cursor: nextCursor. |
| nextCursor:null, truncated:false | Last page; query fully consumed. | Done. |
| nextCursor:null, truncated:true | Result hit the row cap or wall-time. Paging cannot recover the rest. | Filter, aggregate, or use export_view_csv. |
| warnings: [...RESPONSE_TRUNCATED] | Page fit row-wise but the envelope exceeded 256 KB; rows trimmed post-hoc. | Project narrower columns or aggregate. Distinct from truncated:true. |
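A paging loop that honors these signals can be sketched as follows. `call_tool` is a hypothetical JSON-RPC wrapper; field names match the response envelopes above:

```python
# Paging loop: follow nextCursor until null; stop early on truncated:true,
# which paging cannot recover — the caller must narrow the query instead.
def fetch_all(call_tool, args):
    rows, cursor = [], None
    while True:
        page_args = {**args, **({"cursor": cursor} if cursor else {})}
        page = call_tool("query_datasource", page_args)
        rows.extend(page["rows"])
        if page["truncated"]:
            return rows, "truncated"
        cursor = page["nextCursor"]
        if cursor is None:
            return rows, "complete"
```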
Error envelope
{ "error": { "code": "QUERY_TIMEOUT", "message": "...", "details": { ... } } }
Stable codes: DATASOURCE_OFFLINE, DATASOURCE_NOT_FOUND,
WORKBOOK_NOT_FOUND, VIEW_NOT_FOUND, JOB_NOT_FOUND, RENDER_JOB_NOT_FOUND,
EXTRACT_STALE (warning), QUERY_TIMEOUT, RESULT_TOO_LARGE,
INVALID_FILTER, INVALID_QUERY, SQL_FORBIDDEN_CONSTRUCT,
SQL_PARSE_ERROR, CHART_TYPE_UNSUPPORTED, SCHEMA_DRIFT,
INVALID_INPUT, INVALID_PAGE_SIZE, INVALID_CURSOR,
RESPONSE_TRUNCATED (warning), ENVELOPE_TOO_LARGE, RATE_LIMITED,
TOOL_NOT_FOUND, INTERNAL_ERROR.
Recovery semantics: operator runbook.
Limits, caching, & rate-limit shape
- Response envelope hard cap: 256 KB. Row-shaped responses trim with RESPONSE_TRUNCATED; non-row responses fail closed with ENVELOPE_TOO_LARGE.
- Row caps: query_datasource 5000, query_view_data 1000. Page sizes: list_* max 100.
- Wall-time: query_* 5 s → QUERY_TIMEOUT; create_view_render 5 s → status:"failed" + RENDER_TIMEOUT.
- SQL pre-parse byte cap: 8 KB. INSERT / UPDATE / DELETE / multi-statement / foreign-table refs rejected before SQLite sees the query.
- Render cache key includes resolved appliedFilters + viewUpdatedAt + datasourceLastRefreshedAt. Cache invalidates correctly when the view or its data refreshes.
- Manifest: ETag + Cache-Control: public, max-age=300, must-revalidate. Long-lived clients honor If-None-Match → 304.
- Edge rate-limit (nginx): POST /mcp 60 r/min burst 20 per IP, GET /token 30 r/min burst 5. Overflow returns HTTP 429 — agents treat as RATE_LIMITED and back off.
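A backoff loop for the 429 case can be sketched like this. `send` is a hypothetical callable returning `(status_code, payload)` — a real client would wrap an HTTP POST to /mcp — and the exponential schedule is an illustrative choice, not something the spec mandates:

```python
import time

# Treat HTTP 429 from the edge like RATE_LIMITED: wait 1s, 2s, 4s, ...
# then retry; give up after a bounded number of attempts.
def call_with_backoff(send, body, retries=4, sleep=time.sleep):
    for attempt in range(retries):
        status, payload = send(body)
        if status != 429:
            return payload
        sleep(2 ** attempt)
    raise RuntimeError("RATE_LIMITED: retries exhausted")
```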