Integration Guide
This is a portfolio reference build — there's no SDK to install. Integration is “call the REST API over HTTPS from your own code.” These examples walk through the common client flows and show how to run the full stack locally.
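The snippets below all show the happy path. Real client code should check the HTTP status and retry transient failures. Here is a minimal sketch of that pattern in plain JavaScript; the endpoint URL and the `{ data }` response envelope come from the examples in this guide, while the retry policy (three attempts with exponential backoff, retrying only on 429 and 5xx) is an assumption, not something the API prescribes:

```javascript
// Hypothetical helper: retry transient failures when calling the search API.
// Endpoint and { data } envelope match the examples below; the retry policy
// is an assumption -- tune it to your needs.
const API = "https://code-sensei-search-api.vercel.app/api/search/hybrid";

async function searchWithRetry(query, { limit = 10, attempts = 3 } = {}) {
  for (let i = 0; i < attempts; i++) {
    const res = await fetch(API, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ query, options: { limit } }),
    });
    if (res.ok) return (await res.json()).data;
    // Retry only on rate limiting / server errors; fail fast on client errors.
    if (res.status < 500 && res.status !== 429) {
      throw new Error(`search failed: HTTP ${res.status}`);
    }
    // Exponential backoff: 0.5s, 1s, 2s, ...
    await new Promise((r) => setTimeout(r, 500 * 2 ** i));
  }
  throw new Error("search failed: retries exhausted");
}
```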
Calling the deployed API
Browser / fetch
Works from the browser or any fetch-compatible runtime, as long as the request comes from the same origin as the deployed web app or from an origin allow-listed via the API's FRONTEND_URL env var.
```javascript
const res = await fetch(
  "https://code-sensei-search-api.vercel.app/api/search/hybrid",
  {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      query: "useEffect cleanup",
      options: { limit: 10 },
    }),
  },
);
const { data } = await res.json();
console.log(data.results);
```

Node.js (without next/react)
Pure Node script. No SDK — just native fetch (Node 18+).
```javascript
// Native fetch is available in Node 18+; no extra dependencies needed.
async function search(q) {
  const res = await fetch("https://code-sensei-search-api.vercel.app/api/search/hybrid", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query: q, options: { limit: 5 } }),
  });
  if (!res.ok) throw new Error(`search failed: HTTP ${res.status}`);
  return (await res.json()).data;
}

const { results } = await search("pgvector HNSW");
results.forEach((r) => console.log(r.title, r.similarity));
```

Python
Python 3.10+ with `requests` or `httpx`.
```python
import requests

r = requests.post(
    "https://code-sensei-search-api.vercel.app/api/search/hybrid",
    json={"query": "async await vs promises", "options": {"limit": 5}},
    timeout=15,
)
r.raise_for_status()
for hit in r.json()["data"]["results"]:
    print(hit["title"], hit.get("similarity"))
```

curl
For one-off sanity checks or shell pipelines.
```bash
curl -X POST https://code-sensei-search-api.vercel.app/api/search/hybrid \
  -H "content-type: application/json" \
  -d '{"query":"JWT refresh token rotation","options":{"limit":5}}' \
  | jq '.data.results[] | {title, similarity}'
```

Running the stack locally
You can clone the repo and run everything end-to-end on your machine. You'll need Docker, Node 20+, and pnpm 10.
1. Clone and install
The monorepo uses pnpm workspaces.
```bash
git clone https://github.com/Shailesh93602/CodeSenseiSearch.git
cd CodeSenseiSearch
pnpm install
```

2. Start Postgres (pgvector) + Redis
The docker-compose.yml at the repo root brings up both; ports map to the defaults (5432 for Postgres, 6379 for Redis).
```bash
docker compose up -d
# Check they're running
docker compose ps
```

3. Configure env vars for the API
Copy the example file and fill in your Gemini API key. Leave the other defaults alone unless you changed the docker-compose ports.
```bash
cd apps/api
cp .env.example .env
# Edit .env:
# GEMINI_API_KEY=...   (required for embeddings)
# DATABASE_URL=...     (defaults match docker-compose)
# DIRECT_URL=...       (same host as DATABASE_URL)
# REDIS_HOST=localhost
# REDIS_PORT=6379
```

4. Generate Prisma client + apply migrations
db:generate writes the Prisma client to apps/api/src/generated/prisma; db:migrate creates the tables and enables the pgvector extension.
```bash
pnpm --filter api db:generate
pnpm --filter api db:migrate
```

5. Run both apps in parallel
The web app assumes the API is on http://localhost:3001/api (set via NEXT_PUBLIC_API_URL). If you changed ports, edit apps/web/.env.local.
```bash
# From the repo root
pnpm dev

# Or run them individually
pnpm --filter api dev   # http://localhost:3001
pnpm --filter web dev   # http://localhost:3000
```

What this project intentionally doesn't ship
Keeping the scope honest: there's no VS Code extension, no JetBrains plugin, no Slack bot, no hosted SDK package. Those would be separate projects. The contract is the REST API documented at /docs/api; anything you'd want to build on top lives in your own code.
If you need autocomplete while building against the API, hit the live Swagger UI at https://code-sensei-search-api.vercel.app/api/docs and generate a client from the OpenAPI spec with whatever tool you prefer (openapi-typescript, openapi-generator, etc.).
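As a concrete starting point, here's a small JavaScript sketch for inspecting an OpenAPI document before feeding it to a generator. The raw-spec URL in the usage comment is an assumption (Swagger UIs commonly serve the JSON document at a sibling path of the UI); confirm the real URL in the Swagger UI before relying on it:

```javascript
// List the operations declared in a parsed OpenAPI document -- a quick
// sanity check before generating a typed client from it.
function listOperations(spec) {
  const methods = ["get", "post", "put", "patch", "delete"];
  const ops = [];
  for (const [path, item] of Object.entries(spec.paths ?? {})) {
    for (const m of methods) {
      if (item[m]) ops.push(`${m.toUpperCase()} ${path}`);
    }
  }
  return ops;
}

// Usage (network call left commented out; the spec URL is an ASSUMPTION --
// check the Swagger UI at /api/docs for the actual location):
// const spec = await (await fetch("https://code-sensei-search-api.vercel.app/api/docs-json")).json();
// console.log(listOperations(spec));
```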