# LLM Analysis
This document explains the end-to-end flow of LLM-powered analysis in DeepLint, including how context is built, how the LLM is invoked, how results are parsed and displayed, and how errors are handled.
## Overview
DeepLint's LLM analysis feature leverages Large Language Models (LLMs) to provide advanced linting, code review, and suggestions. The flow is designed to be modular, extensible, and robust.
## Analysis Flow
### 1. Command Execution

- The user runs `deeplint check [options] [files...]`.
- The CLI parses arguments and resolves the LLM configuration with the precedence CLI > env > config file > defaults.
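As a minimal sketch of that precedence rule, configuration sources can be merged so that later sources only win where earlier ones are undefined. The names `LLMConfig` and `resolveLLMConfig` below are illustrative, not DeepLint's actual API:

```typescript
// Illustrative config-precedence sketch; names, fields, and defaults are assumptions.
interface LLMConfig {
  apiKey?: string;
  model?: string;
  timeoutMs?: number;
}

const DEFAULTS: LLMConfig = { model: "gpt-4o", timeoutMs: 60_000 };

// Drop undefined entries so a missing CLI flag never shadows an env value.
function defined<T extends object>(o: T): Partial<T> {
  return Object.fromEntries(
    Object.entries(o).filter(([, v]) => v !== undefined)
  ) as Partial<T>;
}

// Later spreads win: defaults < config file < env < CLI.
function resolveLLMConfig(cli: LLMConfig, env: LLMConfig, file: LLMConfig): LLMConfig {
  return { ...DEFAULTS, ...defined(file), ...defined(env), ...defined(cli) };
}
```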
### 2. Context Building

- The `ContextBuilder` scans the repository and collects changed files, related files, and metadata.
- The context is assembled with code, diffs, structure, and statistics.
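The exact context shape is internal to DeepLint; as a rough sketch, the assembled context might look like this (all field names here are assumptions):

```typescript
// Hypothetical shape of the assembled analysis context.
interface ContextFile {
  path: string;
  content: string;
  diff?: string; // present for changed files
}

interface AnalysisContext {
  files: ContextFile[];  // changed files plus related files
  structure: string;     // e.g., a directory-tree summary
  stats: { fileCount: number; totalBytes: number };
}
```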
### 3. Prompt Preparation

- The LLM provider (e.g., `OpenAIProvider`) formats the prompt using the template and the context.
- Custom instructions (if any) are appended.
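Prompt assembly might reduce to a template substitution plus an optional instructions suffix. This is a sketch, not the code in `src/llm/prompt-template.ts`; the `{{context}}` placeholder is an assumption:

```typescript
// Sketch of prompt assembly: substitute the context, then append custom instructions.
function buildPrompt(template: string, context: string, instructions?: string): string {
  let prompt = template.replace("{{context}}", context);
  if (instructions) {
    prompt += `\n\nAdditional instructions:\n${instructions}`;
  }
  return prompt;
}
```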
### 4. LLM API Call

- The provider sends the prompt to the LLM API (e.g., OpenAI).
- The model, API key, and other options are resolved from config/env/CLI.
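Using the official `openai` npm package, the call might look like the following; DeepLint's actual provider wiring may differ:

```typescript
import OpenAI from "openai";

// Minimal sketch of the provider's API call via the official openai package.
async function callLLM(prompt: string, model: string, apiKey: string): Promise<string> {
  const client = new OpenAI({ apiKey });
  const response = await client.chat.completions.create({
    model,
    messages: [{ role: "user", content: prompt }],
  });
  return response.choices[0]?.message?.content ?? "";
}
```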
### 5. Response Parsing

- The provider parses the LLM response into a `LintResult` object.
- Metadata is added (timestamp, provider, model, context size).
- Summary statistics are calculated if the response does not provide them.
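Assuming the model is asked for JSON output, parsing might look like this sketch. The `LintResult` fields shown are assumptions based on the metadata listed above:

```typescript
// Hypothetical LintResult shape and parsing step.
interface LintIssue {
  file: string;
  line: number;
  severity: "error" | "warning" | "info";
  message: string;
}

interface LintResult {
  issues: LintIssue[];
  summary: { errors: number; warnings: number; info: number };
  metadata: { timestamp: string; provider: string; model: string; contextSize: number };
}

function parseResponse(raw: string, metadata: LintResult["metadata"]): LintResult {
  const parsed = JSON.parse(raw) as { issues: LintIssue[]; summary?: LintResult["summary"] };
  // Calculate summary statistics if the model did not provide them.
  const summary = parsed.summary ?? {
    errors: parsed.issues.filter((i) => i.severity === "error").length,
    warnings: parsed.issues.filter((i) => i.severity === "warning").length,
    info: parsed.issues.filter((i) => i.severity === "info").length,
  };
  return { issues: parsed.issues, summary, metadata };
}
```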
### 6. Result Display

- The CLI displays results using UI components (tables, banners, JSON).
- Issues are grouped by file, with severity coloring and explanations.
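Grouping by file is a straightforward bucketing step; a sketch, reusing the hypothetical `LintIssue` shape from the parsing sketch above:

```typescript
// Sketch: bucket issues by file path for per-file display.
function groupByFile(issues: LintIssue[]): Map<string, LintIssue[]> {
  const groups = new Map<string, LintIssue[]>();
  for (const issue of issues) {
    const bucket = groups.get(issue.file) ?? [];
    bucket.push(issue);
    groups.set(issue.file, bucket);
  }
  return groups;
}
```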
## Error Handling

All LLM errors are wrapped in the `LLMError` class for consistent handling (see the sketch after this list). Common error cases:

- Missing API key
- API rate limits or quotas
- Context too large for the model
- Network errors
- Invalid or empty responses
- Timeouts

Errors are logged and surfaced to the user with actionable messages. Fallbacks are provided (e.g., simple text output if table rendering fails).
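A wrapper in this spirit might look like the following sketch; the error codes mirror the list above, and the real class may carry different fields:

```typescript
// Illustrative LLMError wrapper mapping the common failure cases to codes.
type LLMErrorCode =
  | "MISSING_API_KEY"
  | "RATE_LIMITED"
  | "CONTEXT_TOO_LARGE"
  | "NETWORK"
  | "INVALID_RESPONSE"
  | "TIMEOUT";

class LLMError extends Error {
  constructor(public readonly code: LLMErrorCode, message: string) {
    super(message);
    this.name = "LLMError";
  }
}

// Usage: wrap raw provider failures so callers handle one error type.
// throw new LLMError("MISSING_API_KEY", "Set an API key in config, env, or via the CLI.");
```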
Prompt Customization
The prompt template is defined in
src/llm/prompt-template.ts
Users can add custom instructions via the
--instructions
CLI flagThe prompt includes:
Identity and instructions for the LLM
Output format/schema
Full code context
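In the spirit of `src/llm/prompt-template.ts`, a template covering those three parts might look like this; the wording and the `{{context}}` placeholder are assumptions, not the shipped template:

```typescript
// Sketch of a prompt template: identity, output schema, then the code context.
export const PROMPT_TEMPLATE = `You are DeepLint, an expert code reviewer.
Analyze the code context below and report problems.

Respond with JSON matching this schema:
{ "issues": [{ "file": string, "line": number, "severity": "error" | "warning" | "info", "message": string }] }

Code context:
{{context}}`;
```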
## Extensibility

- New providers can be added by implementing the `LLMProvider` interface (a sketch follows this list).
- Prompt templates and schemas can be updated as requirements evolve.
- The analysis flow is designed to support additional features (e.g., custom rules and multiple providers).
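The interface itself is not shown in this document; a plausible shape, with method names as assumptions and reusing the hypothetical `AnalysisContext` and `LintResult` types from the sketches above, is:

```typescript
// Hypothetical LLMProvider contract; a new provider plugs in by implementing it.
interface LLMProvider {
  readonly name: string;
  analyze(context: AnalysisContext, instructions?: string): Promise<LintResult>;
}

// Example skeleton for an additional provider.
class AnthropicProvider implements LLMProvider {
  readonly name = "anthropic";
  async analyze(context: AnalysisContext, instructions?: string): Promise<LintResult> {
    // Format the prompt, call the Anthropic API, parse into a LintResult...
    throw new Error("not implemented");
  }
}
```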
## Developer Tips

- Always validate and sanitize the context before sending it to the LLM (see the size-check sketch below).
- Use try/catch blocks around all LLM API calls.
- Log errors with enough context for troubleshooting.
- Keep prompt templates and schemas up to date.
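For the first tip, a cheap pre-flight size check can catch "context too large" before the API does. This sketch uses a rough 4-characters-per-token heuristic rather than a real tokenizer, and reuses the `LLMError` sketch from the error-handling section:

```typescript
// Rough pre-flight guard; ~4 chars/token is a heuristic, not an exact count.
function assertContextFits(context: string, maxTokens: number): void {
  const approxTokens = Math.ceil(context.length / 4);
  if (approxTokens > maxTokens) {
    throw new LLMError(
      "CONTEXT_TOO_LARGE",
      `Context is ~${approxTokens} tokens; model limit is ${maxTokens}.`
    );
  }
}
```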