# LLM Module
This document provides an overview of the LLM (Large Language Model) integration in DeepLint: its architecture, file structure, extension points, and best practices for extending and maintaining the module.
## Overview
The LLM module powers DeepLint's advanced code analysis using AI models such as OpenAI's GPT-4o. It is designed to be modular, extensible, and provider-agnostic, allowing for future support of additional LLM providers.
## File Structure
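A sketch of the module layout, inferred from the components described below; the `src/llm/` path is an assumption, but the five files are the ones this document covers.

```
src/llm/                 # assumed directory path
├── types.ts             # Shared types: LintSeverity, LintIssue, LintResult, LLMAnalysisOptions
├── llm-provider.ts      # LLMProvider interface
├── openai-provider.ts   # OpenAI implementation of LLMProvider
├── prompt-template.ts   # Prompt template and JSON result schema
└── index.ts             # Re-exports all LLM components
```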
## Key Components
### 1. Types (`types.ts`)

Defines all shared types for LLM integration:

- `LintSeverity`
- `LintIssue`
- `LintResult`
- `LLMAnalysisOptions`
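A minimal sketch of what these types might look like; only the type names come from this document, so every field below is an assumption rather than DeepLint's actual definition.

```typescript
// types.ts (sketch; field names and shapes are assumptions)
export type LintSeverity = "error" | "warning" | "info";

export interface LintIssue {
  severity: LintSeverity;
  message: string;
  file: string;        // assumed field
  line?: number;       // assumed field
  suggestion?: string; // assumed field
}

export interface LintResult {
  issues: LintIssue[];
  summary?: string;                   // a summary is mentioned in the OpenAI provider section
  metadata?: Record<string, unknown>; // metadata is mentioned in the OpenAI provider section
}

export interface LLMAnalysisOptions {
  model?: string;     // assumed option
  apiKey?: string;    // assumed option
  maxTokens?: number; // assumed option
}
```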
### 2. Provider Interface (`llm-provider.ts`)

Defines the `LLMProvider` interface:
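A sketch of what this interface might look like; the method name `analyze` and its signature are assumptions based on the analysis flow described later in this document.

```typescript
// llm-provider.ts (sketch; method name and signature are assumptions)
import { LintResult, LLMAnalysisOptions } from "./types";

export interface LLMProvider {
  /** Human-readable provider name, e.g. "openai". */
  name: string;

  /** Formats the prompt, calls the model, and parses the response into a LintResult. */
  analyze(context: string, options?: LLMAnalysisOptions): Promise<LintResult>;
}
```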
All providers must implement this interface.
### 3. OpenAI Provider (`openai-provider.ts`)

Implements the `LLMProvider` interface for OpenAI:

- Handles API key, model selection, and prompt formatting
- Sends requests to OpenAI and parses responses
- Adds metadata and summary to results
- Handles errors with the `LLMError` class
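A condensed sketch of such a provider using the official `openai` Node SDK; DeepLint's actual implementation is not shown in this document, and the `./errors` import path and template export name are assumptions.

```typescript
// openai-provider.ts (sketch; structure and helper locations are assumptions)
import OpenAI from "openai";
import { LLMProvider } from "./llm-provider";
import { LintResult, LLMAnalysisOptions } from "./types";
import { LINT_PROMPT_TEMPLATE } from "./prompt-template"; // assumed export name
import { LLMError } from "./errors";                      // assumed location of LLMError

export class OpenAIProvider implements LLMProvider {
  name = "openai";
  private client: OpenAI;

  constructor(apiKey = process.env.OPENAI_API_KEY) {
    if (!apiKey) throw new LLMError("Missing OpenAI API key");
    this.client = new OpenAI({ apiKey });
  }

  async analyze(context: string, options?: LLMAnalysisOptions): Promise<LintResult> {
    try {
      const completion = await this.client.chat.completions.create({
        model: options?.model ?? "gpt-4o",
        messages: [
          { role: "user", content: LINT_PROMPT_TEMPLATE.replace("{context}", context) },
        ],
        response_format: { type: "json_object" },
      });
      // Parse the JSON response into a LintResult; schema validation is omitted here.
      return JSON.parse(completion.choices[0].message.content ?? "{}") as LintResult;
    } catch (err) {
      throw new LLMError(`OpenAI request failed: ${(err as Error).message}`);
    }
  }
}
```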
### 4. Prompt Templates (`prompt-template.ts`)

- Defines the main prompt template for linting
- Exports the JSON schema for LLM results
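A sketch of the shape such a file might take; the export names, prompt wording, and schema details are illustrative assumptions, not DeepLint's actual template.

```typescript
// prompt-template.ts (sketch; names and wording are assumptions)
export const LINT_PROMPT_TEMPLATE = `You are a code reviewer. Analyze the following changes
and report issues as JSON matching the provided schema.

{context}`;

// JSON schema describing the expected LLM output (a LintResult).
export const LINT_RESULT_SCHEMA = {
  type: "object",
  properties: {
    issues: {
      type: "array",
      items: {
        type: "object",
        properties: {
          severity: { type: "string", enum: ["error", "warning", "info"] },
          message: { type: "string" },
        },
        required: ["severity", "message"],
      },
    },
    summary: { type: "string" },
  },
  required: ["issues"],
} as const;
```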
### 5. Index (`index.ts`)

Exports all LLM components for easy import.
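A barrel file along these lines (the exact re-exports are assumed):

```typescript
// index.ts (sketch)
export * from "./types";
export * from "./llm-provider";
export * from "./openai-provider";
export * from "./prompt-template";
```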
## Extending the LLM Module

To add a new provider (e.g., Anthropic, Gemini):

1. Create a new provider file (e.g., `anthropic-provider.ts`)
2. Implement the `LLMProvider` interface (see the skeleton sketch after this list)
3. Add provider-specific configuration and error handling
4. Register the provider in the CLI/config system (future)
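A skeleton for such a provider, sketched against the public `@anthropic-ai/sdk`; verify the SDK calls against the version you install, and note that the model name, option fields, and `./errors` path are assumptions.

```typescript
// anthropic-provider.ts (skeleton; SDK usage sketched from Anthropic's public docs)
import Anthropic from "@anthropic-ai/sdk";
import { LLMProvider } from "./llm-provider";
import { LintResult, LLMAnalysisOptions } from "./types";
import { LLMError } from "./errors"; // assumed location

export class AnthropicProvider implements LLMProvider {
  name = "anthropic";
  private client: Anthropic;

  constructor(apiKey = process.env.ANTHROPIC_API_KEY) {
    if (!apiKey) throw new LLMError("Missing Anthropic API key");
    this.client = new Anthropic({ apiKey });
  }

  async analyze(context: string, options?: LLMAnalysisOptions): Promise<LintResult> {
    const message = await this.client.messages.create({
      model: options?.model ?? "claude-3-5-sonnet-latest", // assumed default model
      max_tokens: options?.maxTokens ?? 4096,
      messages: [{ role: "user", content: context }],
    });
    const block = message.content[0];
    if (block.type !== "text") throw new LLMError("Unexpected response type from Anthropic");
    return JSON.parse(block.text) as LintResult;
  }
}
```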
## LLM Analysis Flow

1. **Context Building:** The context builder assembles code, diffs, and metadata.
2. **Prompt Preparation:** The provider formats the prompt using the template and context.
3. **API Call:** The provider sends the prompt to the LLM API.
4. **Response Parsing:** The provider parses the LLM response into a `LintResult`.
5. **Result Display:** The CLI displays results using UI components (tables, banners, JSON).
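Put together, the flow might be wired up like this; `buildContext` and `displayResults` are hypothetical stand-ins for DeepLint's context builder and CLI display layer.

```typescript
import { OpenAIProvider } from "./openai-provider";
import { LintResult } from "./types";

// Hypothetical stand-ins for the context builder and the CLI output layer.
declare function buildContext(files: string[]): Promise<string>;
declare function displayResults(result: LintResult): void;

async function runAnalysis(files: string[]): Promise<void> {
  const context = await buildContext(files);      // 1. Context Building
  const provider = new OpenAIProvider();          // 2–3. Prompt Preparation and the API Call happen inside analyze()
  const result = await provider.analyze(context); // 4. Response Parsing into a LintResult
  displayResults(result);                         // 5. Result Display
}
```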
## Error Handling

- All LLM errors are wrapped in the `LLMError` class for consistent handling.
- Common errors: missing API key, rate limits, invalid responses, network issues.
- Errors are logged and surfaced to the user with actionable messages.
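A minimal version of such an error class might look like the following; the constructor shape and the optional `code` field are assumptions.

```typescript
// errors.ts (sketch; fields are assumptions)
export class LLMError extends Error {
  constructor(
    message: string,
    /** Optional machine-readable cause, e.g. "missing_api_key" or "rate_limit". */
    public readonly code?: string,
  ) {
    super(message);
    this.name = "LLMError";
  }
}
```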
## Best Practices

- Use the provider interface for all LLM interactions.
- Validate and sanitize all user input before sending to the LLM.
- Handle all error cases gracefully and provide clear feedback.
- Keep prompt templates and schemas up to date with the latest requirements.
## See Also