Getting Started¶
This guide covers prerequisites, installation, and verification for MegaBrain.
Prerequisites¶
Backend Prerequisites¶
- Java 22 or higher
- Maven 3.8 or higher
- PostgreSQL 12+ (optional, for vector search)
- Neo4j 5.x (optional, for graph database)
- Ollama (optional, for local LLM)
For offline operation: run Ollama locally and pull models before disconnecting (e.g. `ollama pull codellama`). Inference uses only the configured endpoint; no internet access is required at runtime.
Frontend Prerequisites¶
- Node.js 18+ and npm
- Angular CLI 20 (install globally)
Installation Steps¶
1. Clone the Repository¶
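The clone command was not included above. As a sketch, with a placeholder URL (the actual remote is not stated in this guide; substitute your own):

```shell
# Placeholder URL -- replace with the project's actual remote
git clone https://github.com/your-org/megabrain.git
cd megabrain
```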
2. Backend Setup¶
```shell
cd backend

# Verify Java and Maven
java -version
mvn -version

# Compile the project
mvn clean compile

# Run tests
mvn test

# Start in development mode
mvn quarkus:dev
```
The backend will start on http://localhost:8080.
3. Frontend Setup¶
```shell
cd frontend

# Install dependencies
npm install

# Start development server (with API proxy)
npm start
# or
ng serve
```
The frontend will start on http://localhost:4200 and proxy API requests to http://localhost:8080.
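An Angular dev-server proxy is typically configured in a `proxy.conf.json` wired into the `serve` target. The exact file and path prefix in this repo may differ; a common sketch (the `/api` prefix is an assumption) looks like:

```json
{
  "/api": {
    "target": "http://localhost:8080",
    "secure": false,
    "changeOrigin": true
  }
}
```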
4. Verify Installation¶
Backend Health Check:
Expected response:
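The health-check command and response were omitted above. As a sketch, assuming the backend exposes the Quarkus default SmallRye Health endpoint at `/q/health` (verify the actual path for this project):

```shell
# Query the backend health endpoint (path is the Quarkus default)
curl -s http://localhost:8080/q/health
```

A healthy instance typically responds with JSON whose top-level `status` field is `"UP"`.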
Frontend:
Open http://localhost:4200 in your browser. You should see the MegaBrain dashboard.
5. CLI (optional)¶
When the backend is built for CLI mode, you can run the MegaBrain CLI. The `ingest` and `search` commands are available; use `megabrain ingest --help` or `megabrain search --help` to see usage and options.

- The `search` command supports filter options (`--language`, `--repo`, `--type`, `--limit`) and output options (`--json`, `--quiet`, `--no-color`); see the CLI Reference for details. Use `--json` for scripting (e.g. `megabrain search "query" --json`, or `--json --quiet` for the results array only).
- When you run an ingest (e.g. `megabrain ingest --source github --repo owner/repo`), progress is streamed in the terminal. Use `--verbose` for detailed progress and stack traces on errors.
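For scripting against `--json` output, the real field names are defined in the CLI Reference; the result array below is hypothetical, shown only to illustrate post-processing with standard tools:

```shell
# Hypothetical output shape from `megabrain search "query" --json --quiet`
# (field names are illustrative -- check the CLI Reference)
results='[{"path":"src/Main.java","score":0.91},{"path":"src/Util.java","score":0.87}]'

# Extract just the file paths (jq works equally well)
echo "$results" | python3 -c 'import sys, json
for r in json.load(sys.stdin):
    print(r["path"])'
```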
6. Local LLM (Ollama) – offline operation¶
To use the local LLM without internet connectivity (AC3):
- Install and start Ollama on the same machine or a reachable host.
- Pull the model you will use while online: `ollama pull codellama` (or `mistral`, `llama2`, etc.).
- Configure MegaBrain to use that endpoint (default is `http://localhost:11434`) and the same model name in `application.properties` (see Configuration Reference).
- At runtime, all LLM requests go only to the configured Ollama endpoint; no external API calls are made. You can disconnect the network and continue using the local LLM.
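The property keys below are assumptions for illustration only; check the Configuration Reference for the exact keys MegaBrain uses:

```properties
# Hypothetical keys -- verify against the Configuration Reference
megabrain.llm.ollama.endpoint=http://localhost:11434
megabrain.llm.ollama.model=codellama
```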
Next Steps¶
- Configure the application for your environment
- Read the API Reference to start making requests
- Explore the architecture to understand the system