DeepLint is still in the MVP development phase and not yet available for use.

LLM Module

This document provides an overview of the LLM (Large Language Model) integration in DeepLint, covering its architecture, file structure, key components, extension points, and best practices for maintaining or extending the module.


Overview

The LLM module powers DeepLint's advanced code analysis using AI models such as OpenAI's GPT-4o. It is designed to be modular, extensible, and provider-agnostic, allowing for future support of additional LLM providers.


File Structure

src/llm/
β”œβ”€β”€ index.ts              # Exports all LLM components
β”œβ”€β”€ types.ts              # Shared types for LLM integration
β”œβ”€β”€ llm-provider.ts       # Provider interface
β”œβ”€β”€ openai-provider.ts    # OpenAI provider implementation
β”œβ”€β”€ prompt-template.ts    # Prompt templates and schemas
└── parsers/              # (Future) Response parsers for different providers

Key Components

1. Types (types.ts)

Defines all shared types for LLM integration:

  • LintSeverity, LintIssue, LintResult

  • LLMAnalysisOptions
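
The exact definitions live in types.ts; the sketch below shows roughly what these types might look like (field names are illustrative, not the authoritative shapes):

export type LintSeverity = 'error' | 'warning' | 'info';

export interface LintIssue {
  severity: LintSeverity;   // How serious the finding is
  message: string;          // Human-readable description of the issue
  file?: string;            // File the issue was found in, if known
  line?: number;            // Line number within the file, if applicable
}

export interface LintResult {
  issues: LintIssue[];      // All issues reported by the LLM
  summary?: string;         // Optional high-level summary of the analysis
}

export interface LLMAnalysisOptions {
  model?: string;           // Override the default model (e.g. 'gpt-4o')
  maxTokens?: number;       // Upper bound on the size of the response
}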

2. Provider Interface (llm-provider.ts)

Defines the LLMProvider interface:

export interface LLMProvider {
  // Analyze the given context and return structured lint findings as a LintResult.
  analyze(context: LLMContext, options?: LLMAnalysisOptions): Promise<LintResult>;
}

All providers must implement this interface.
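
Because everything else depends only on this interface, providers can be swapped freely or stubbed in tests. A minimal (hypothetical) stub, assuming LLMContext is also exported from the module's types:

import { LLMProvider } from './llm-provider';
import { LLMContext, LLMAnalysisOptions, LintResult } from './types';

// A no-op provider: returns an empty result without calling any API.
// Useful for unit tests and dry runs.
export class StubProvider implements LLMProvider {
  async analyze(_context: LLMContext, _options?: LLMAnalysisOptions): Promise<LintResult> {
    return { issues: [], summary: 'Stub provider: no analysis performed.' };
  }
}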

3. OpenAI Provider (openai-provider.ts)

Implements the LLMProvider interface for OpenAI:

  • Handles API key, model selection, and prompt formatting

  • Sends requests to OpenAI and parses responses

  • Adds metadata and summary to results

  • Handles errors with the LLMError class
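
A condensed sketch of how such a provider can be wired up with the official openai SDK. The real openai-provider.ts handles validation, result metadata, and error wrapping in LLMError; the constructor signature and the LINT_PROMPT_TEMPLATE import below are assumptions for illustration:

import OpenAI from 'openai';
import { LLMProvider } from './llm-provider';
import { LLMContext, LLMAnalysisOptions, LintResult } from './types';
import { LINT_PROMPT_TEMPLATE } from './prompt-template';

export class OpenAIProvider implements LLMProvider {
  private client: OpenAI;

  constructor(apiKey: string, private model = 'gpt-4o') {
    this.client = new OpenAI({ apiKey });
  }

  async analyze(context: LLMContext, options?: LLMAnalysisOptions): Promise<LintResult> {
    // Format the prompt from the shared template plus the assembled context.
    const prompt = `${LINT_PROMPT_TEMPLATE}\n\n${JSON.stringify(context)}`;

    // Ask the model for a JSON response so it can be parsed into a LintResult.
    const response = await this.client.chat.completions.create({
      model: options?.model ?? this.model,
      messages: [{ role: 'user', content: prompt }],
      response_format: { type: 'json_object' },
    });

    // The real implementation validates this against the result schema
    // and wraps parse failures in LLMError.
    return JSON.parse(response.choices[0]?.message?.content ?? '{"issues":[]}') as LintResult;
  }
}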

4. Prompt Templates (prompt-template.ts)

  • Defines the main prompt template for linting

  • Exports the JSON schema for LLM results
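
Roughly, the template and schema follow this pattern (the real wording and schema fields in prompt-template.ts will differ; the export names here are assumptions):

// Instructions sent to the model; the provider appends the serialized context.
export const LINT_PROMPT_TEMPLATE = `
You are a careful code reviewer. Analyze the provided code and diff,
and report concrete issues. Respond only with JSON matching the schema.
`;

// JSON schema describing the shape of a valid LintResult response.
export const LINT_RESULT_SCHEMA = {
  type: 'object',
  properties: {
    issues: {
      type: 'array',
      items: {
        type: 'object',
        properties: {
          severity: { type: 'string', enum: ['error', 'warning', 'info'] },
          message: { type: 'string' },
          file: { type: 'string' },
          line: { type: 'number' },
        },
        required: ['severity', 'message'],
      },
    },
    summary: { type: 'string' },
  },
  required: ['issues'],
} as const;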

5. Index (index.ts)

  • Exports all LLM components for easy import
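
In practice this is just a set of re-exports, along the lines of:

// Re-export every LLM component so callers can import from a single entry point,
// e.g. import { OpenAIProvider, LintResult } from '../llm';
export * from './types';
export * from './llm-provider';
export * from './openai-provider';
export * from './prompt-template';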


Extending the LLM Module

To add a new provider (e.g., Anthropic, Gemini):

  1. Create a new provider file (e.g., anthropic-provider.ts)

  2. Implement the LLMProvider interface (a skeleton is sketched after this list)

  3. Add provider-specific configuration and error handling

  4. Register the provider in the CLI/config system (future)
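
A skeleton for steps 1 and 2, with the actual API call left as a placeholder (the file, class, and default model name here are assumptions; no such provider exists in DeepLint yet):

import { LLMProvider } from './llm-provider';
import { LLMContext, LLMAnalysisOptions, LintResult } from './types';

export class AnthropicProvider implements LLMProvider {
  constructor(private apiKey: string, private model = 'claude-3-5-sonnet-latest') {}

  async analyze(_context: LLMContext, options?: LLMAnalysisOptions): Promise<LintResult> {
    // 1. Format the prompt from the shared template and context (as the OpenAI provider does).
    // 2. Call the Anthropic API with this.apiKey and the selected model.
    // 3. Parse the JSON response into a LintResult, wrapping failures in LLMError.
    throw new Error(
      `AnthropicProvider not implemented yet (model: ${options?.model ?? this.model})`,
    );
  }
}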


LLM Analysis Flow

  1. Context Building: The context builder assembles code, diffs, and metadata.

  2. Prompt Preparation: The provider formats the prompt using the template and context.

  3. API Call: The provider sends the prompt to the LLM API.

  4. Response Parsing: The provider parses the LLM response into a LintResult.

  5. Result Display: The CLI displays results using UI components (tables, banners, JSON).
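
Put together, a run of the analysis looks roughly like this. buildContext and displayResults stand in for the actual context-builder and CLI UI entry points, whose real module paths and names may differ:

import { OpenAIProvider, LintResult } from './llm';
// Hypothetical imports: the real context-builder and UI modules may be named differently.
import { buildContext } from './context';
import { displayResults } from './ui';

async function runLlmAnalysis(): Promise<LintResult> {
  // 1. Assemble code, diffs, and metadata into an LLM context.
  const context = await buildContext();

  // 2-4. The provider formats the prompt, calls the API, and parses the response.
  const provider = new OpenAIProvider(process.env.OPENAI_API_KEY ?? '');
  const result = await provider.analyze(context, { model: 'gpt-4o' });

  // 5. Render the result as tables, banners, or JSON in the CLI.
  displayResults(result);
  return result;
}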


Error Handling

  • All LLM errors are wrapped in the LLMError class for consistent handling.

  • Common errors: missing API key, rate limits, invalid responses, network issues.

  • Errors are logged and surfaced to the user with actionable messages.
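
The wrapper typically looks something like the following (the actual LLMError in the codebase may carry different fields; this only illustrates the pattern):

export class LLMError extends Error {
  constructor(
    message: string,
    // Machine-readable category, e.g. 'missing_api_key', 'rate_limit', 'invalid_response', 'network'.
    public readonly code: string,
  ) {
    super(message);
    this.name = 'LLMError';
  }
}

// Example: surfacing a missing API key with an actionable message.
// throw new LLMError('OPENAI_API_KEY is not set; export it or add it to your config.', 'missing_api_key');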


Best Practices

  • Use the provider interface for all LLM interactions.

  • Validate and sanitize all user input before sending to the LLM.

  • Handle all error cases gracefully and provide clear feedback.

  • Keep prompt templates and schemas up to date with the latest requirements.

