Lightweight local intelligence layer
Local Cabinet Brain
A private, static-first knowledge assistant for the 13-Cabinet Office. It searches the included resumes, cabinet roster, governance charter, AE positioning, and company doctrine without needing a GPU, database, or paid model provider.
Brain mode
Loading knowledge base...
The default mode is retrieval-only and runs entirely in the browser. An optional local LLM mode can call Ollama or another OpenAI-compatible endpoint if you enable it locally.
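The retrieval-only mode can be sketched as simple keyword scoring over the bundled documents. This is a minimal illustration, not the site's actual index format: the `Doc` shape, `search` function, and scoring scheme are assumptions.

```typescript
// Hypothetical document shape; the real knowledge base format is not specified here.
type Doc = { title: string; text: string };

// Score each document by how many query terms it contains, then return the
// top matches. Runs entirely in the browser with no model, GPU, or database.
function search(docs: Doc[], query: string, topK = 3): Doc[] {
  const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
  return docs
    .map((d) => {
      const body = (d.title + " " + d.text).toLowerCase();
      const score = terms.reduce((s, t) => s + (body.includes(t) ? 1 : 0), 0);
      return { d, score };
    })
    .filter((s) => s.score > 0)          // drop documents with no matching terms
    .sort((a, b) => b.score - a.score)   // best match first
    .slice(0, topK)
    .map((s) => s.d);
}
```

A real deployment would likely pre-tokenize the corpus and weight rarer terms, but the shape of the answer path is the same: query in, ranked document snippets out.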
Cabinet question console
Ask about roles, cabinet duties, resumes, governance, AE positioning, and filing cautions.
Optional local model bridge
Keep it light: Ollama or llama.cpp can be plugged in later.
This site does not require a model to work. The optional bridge is for when you want answers rewritten by a small local model such as a 1B–3B instruction model through an OpenAI-compatible endpoint.
Ollama exposes an OpenAI-compatible API, and llama.cpp and llama-cpp-python can also run OpenAI-compatible local servers, so the brain can later point at localhost instead of a paid cloud provider.
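Pointing the brain at localhost amounts to building a standard chat-completions request against the local base URL. A hedged sketch follows: Ollama's OpenAI-compatible API lives under `/v1` on port 11434 by default, while the model name and prompt wording here are assumptions.

```typescript
type ChatMessage = { role: "system" | "user"; content: string };
type ChatRequest = {
  url: string;
  body: { model: string; messages: ChatMessage[] };
};

// Build a /v1/chat/completions request for any OpenAI-compatible local server.
// `model` is whatever the local runtime has loaded, e.g. a 1B-3B instruct model.
function buildChatRequest(
  base: string,
  model: string,
  question: string,
  retrievedContext: string
): ChatRequest {
  return {
    url: base.replace(/\/+$/, "") + "/v1/chat/completions",
    body: {
      model,
      messages: [
        {
          role: "system",
          // Constrain the model to the documents the retrieval step found.
          content: "Answer only from the provided cabinet documents:\n" + retrievedContext,
        },
        { role: "user", content: question },
      ],
    },
  };
}
```

Sending it is then a single `fetch(req.url, { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify(req.body) })`; only the base URL changes between Ollama and a llama.cpp server.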
Endpoint test
Not tested.
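The endpoint test above can be sketched as a short health probe: list the server's models and report reachable or not. Assumptions are flagged in the comments; `/v1/models` is a standard OpenAI-compatible listing route, and the timeout value is arbitrary.

```typescript
// Normalize a base URL into the OpenAI-compatible model-listing route.
function modelsUrl(base: string): string {
  return base.replace(/\/+$/, "") + "/v1/models";
}

// Probe the endpoint with a bounded timeout so the UI can move from
// "Not tested" to a pass/fail status without hanging the page.
async function probeEndpoint(base: string, timeoutMs = 2000): Promise<boolean> {
  const ctrl = new AbortController();
  const timer = setTimeout(() => ctrl.abort(), timeoutMs);
  try {
    const res = await fetch(modelsUrl(base), { signal: ctrl.signal });
    return res.ok;
  } catch {
    return false; // unreachable, refused, or timed out
  } finally {
    clearTimeout(timer);
  }
}
```

Because the probe is a plain GET, it works the same against Ollama, a llama.cpp server, or any other OpenAI-compatible localhost endpoint.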
Internal limits
This is a company knowledge brain, not an unauthorized legal filing engine.
The brain can explain the demonstrative cabinet structure, resumes, public profiles, governance language, AE positioning, and deployment steps. It should not claim fictional people are real officers, submit incorporation documents, invent licenses, or provide legal advice.
