# LLM Module

This document provides an overview of the LLM (Large Language Model) integration in DeepLint, including architecture, file structure, extension points, and best practices for extending or maintaining the LLM module.

***

## Overview

The LLM module powers DeepLint's advanced code analysis using AI models such as OpenAI's GPT-4o. It is designed to be modular, extensible, and provider-agnostic, allowing for future support of additional LLM providers.

***

## File Structure

```
src/llm/
├── index.ts              # Exports all LLM components
├── types.ts              # Shared types for LLM integration
├── llm-provider.ts       # Provider interface
├── openai-provider.ts    # OpenAI provider implementation
├── prompt-template.ts    # Prompt templates and schemas
└── parsers/              # (Future) Response parsers for different providers
```

***

## Key Components

### 1. Types (`types.ts`)

Defines all shared types for LLM integration:

* `LintSeverity`, `LintIssue`, `LintResult`
* `LLMAnalysisOptions`
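
A minimal sketch of what these types might look like — the field names here are illustrative, not the exact definitions in `types.ts`:

```typescript
// Illustrative sketch of the shared LLM types (actual fields may differ).
export type LintSeverity = "error" | "warning" | "info";

export interface LintIssue {
  severity: LintSeverity;
  message: string;
  file: string;
  line?: number;   // optional source location
  rule?: string;   // optional rule identifier
}

export interface LintResult {
  issues: LintIssue[];
  summary?: string;
  metadata?: Record<string, unknown>;
}

export interface LLMAnalysisOptions {
  model?: string;
  temperature?: number;
  maxTokens?: number;
}
```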

### 2. Provider Interface (`llm-provider.ts`)

Defines the `LLMProvider` interface:

```typescript
export interface LLMProvider {
  analyze(context: LLMContext, options?: LLMAnalysisOptions): Promise<LintResult>;
}
```

All providers must implement this interface.

### 3. OpenAI Provider (`openai-provider.ts`)

Implements the `LLMProvider` interface for OpenAI:

* Handles API key, model selection, and prompt formatting
* Sends requests to OpenAI and parses responses
* Adds metadata and summary to results
* Handles errors with the `LLMError` class
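
The responsibilities above can be sketched as follows. This is a simplified, self-contained version: the HTTP transport is injected as a function so the example runs without the OpenAI SDK, and the context/result shapes are assumptions, not the real definitions:

```typescript
// Simplified sketch of an LLMProvider implementation (transport injected
// for testability; the real openai-provider.ts talks to the OpenAI API).
type LintSeverity = "error" | "warning" | "info";
interface LintIssue { severity: LintSeverity; message: string; file: string; }
interface LintResult { issues: LintIssue[]; metadata?: Record<string, unknown>; }
interface LLMContext { files: { path: string; content: string }[]; }
interface LLMAnalysisOptions { model?: string; }

class LLMError extends Error {}

type ChatFn = (prompt: string, model: string) => Promise<string>;

class OpenAIProvider {
  constructor(private apiKey: string, private chat: ChatFn) {
    if (!this.apiKey) throw new LLMError("Missing OpenAI API key");
  }

  async analyze(context: LLMContext, options: LLMAnalysisOptions = {}): Promise<LintResult> {
    const model = options.model ?? "gpt-4o";
    // Prompt formatting: concatenate file contents (real template is richer).
    const prompt = context.files.map((f) => `// ${f.path}\n${f.content}`).join("\n");
    let raw: string;
    try {
      raw = await this.chat(prompt, model);
    } catch (err) {
      throw new LLMError(`OpenAI request failed: ${(err as Error).message}`);
    }
    // Response parsing: expect JSON matching the result schema.
    try {
      const parsed = JSON.parse(raw) as { issues: LintIssue[] };
      return { issues: parsed.issues, metadata: { model } };
    } catch {
      throw new LLMError("Invalid JSON response from model");
    }
  }
}
```

Injecting the transport keeps the provider testable with a fake `ChatFn` and isolates API-key handling, prompt formatting, and parsing in one place.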

### 4. Prompt Templates (`prompt-template.ts`)

* Defines the main prompt template for linting
* Exports the JSON schema for LLM results
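
A hedged sketch of what a lint prompt template and its result schema could look like — the actual wording and schema in `prompt-template.ts` may differ:

```typescript
// Illustrative JSON schema for LLM lint results (not the exact schema).
const LINT_RESULT_SCHEMA = {
  type: "object",
  properties: {
    issues: {
      type: "array",
      items: {
        type: "object",
        properties: {
          severity: { enum: ["error", "warning", "info"] },
          message: { type: "string" },
          file: { type: "string" },
          line: { type: "number" },
        },
        required: ["severity", "message", "file"],
      },
    },
  },
  required: ["issues"],
} as const;

// Illustrative prompt builder: instructions first, then the code to review.
function buildLintPrompt(code: string): string {
  return [
    "You are a code reviewer. Analyze the following code for bugs,",
    "style issues, and potential improvements.",
    "Respond only with JSON matching the provided schema.",
    "",
    code,
  ].join("\n");
}
```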

### 5. Index (`index.ts`)

* Exports all LLM components for easy import

***

## Extending the LLM Module

To add a new provider (e.g., Anthropic, Gemini):

1. Create a new provider file (e.g., `anthropic-provider.ts`)
2. Implement the `LLMProvider` interface
3. Add provider-specific configuration and error handling
4. Register the provider in the CLI/config system (future)
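
Steps 1-3 can be sketched as a skeleton for a hypothetical `anthropic-provider.ts`. The interface and type shapes below are restated assumptions so the example is self-contained:

```typescript
// Skeleton for a hypothetical new provider; type shapes are illustrative.
interface LintIssue { severity: string; message: string; file: string; }
interface LintResult { issues: LintIssue[]; }
interface LLMContext { files: { path: string; content: string }[]; }
interface LLMAnalysisOptions { model?: string; }
interface LLMProvider {
  analyze(context: LLMContext, options?: LLMAnalysisOptions): Promise<LintResult>;
}

class LLMError extends Error {}

class AnthropicProvider implements LLMProvider {
  // Step 3: provider-specific configuration and error handling.
  constructor(private apiKey: string) {
    if (!this.apiKey) throw new LLMError("Missing Anthropic API key");
  }

  // Step 2: implement the LLMProvider interface.
  async analyze(context: LLMContext, options?: LLMAnalysisOptions): Promise<LintResult> {
    // Provider-specific request and response parsing would go here.
    throw new LLMError("not implemented");
  }
}
```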

***

## LLM Analysis Flow

1. **Context Building**: The context builder assembles code, diffs, and metadata.
2. **Prompt Preparation**: The provider formats the prompt using the template and context.
3. **API Call**: The provider sends the prompt to the LLM API.
4. **Response Parsing**: The provider parses the LLM response into a `LintResult`.
5. **Result Display**: The CLI displays results using UI components (tables, banners, JSON).
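
The five steps above can be sketched end-to-end with a stub provider. All names here are illustrative; the real wiring lives in the check command and UI components:

```typescript
// End-to-end sketch of the analysis flow with a stub provider.
interface LintIssue { severity: string; message: string; file: string; }
interface LintResult { issues: LintIssue[]; }
interface LLMContext { files: { path: string; content: string }[]; }
interface LLMProvider {
  analyze(context: LLMContext): Promise<LintResult>;
}

// Step 5: format results for display (the real CLI renders tables/banners).
function displayResults(result: LintResult): string[] {
  return result.issues.map((i) => `[${i.severity}] ${i.file}: ${i.message}`);
}

// Steps 2-4 happen inside provider.analyze; step 1 builds the context first.
async function runCheck(provider: LLMProvider, context: LLMContext): Promise<string[]> {
  const result = await provider.analyze(context);
  return displayResults(result);
}
```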

***

## Error Handling

* All LLM errors are wrapped in the `LLMError` class for consistent handling.
* Common errors: missing API key, rate limits, invalid responses, network issues.
* Errors are logged and surfaced to the user with actionable messages.
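
A minimal sketch of such a wrapper class, assuming it carries a machine-readable code so the CLI can map each failure to an actionable message (the codes and messages below are illustrative):

```typescript
// Illustrative error wrapper with machine-readable codes.
type LLMErrorCode = "missing_api_key" | "rate_limited" | "invalid_response" | "network";

class LLMError extends Error {
  constructor(public code: LLMErrorCode, message: string) {
    super(message);
    this.name = "LLMError";
  }
}

// Map each error code to an actionable user-facing message.
function userMessage(err: unknown): string {
  if (err instanceof LLMError) {
    switch (err.code) {
      case "missing_api_key":
        return "No API key found. Set OPENAI_API_KEY or add it to your config.";
      case "rate_limited":
        return "Rate limit hit. Wait a moment and retry.";
      case "invalid_response":
        return "The model returned an unparseable response. Retry the analysis.";
      case "network":
        return "Network error contacting the LLM API. Check your connection.";
    }
  }
  return `Unexpected error: ${String(err)}`;
}
```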

***

## Best Practices

* Use the provider interface for all LLM interactions.
* Validate and sanitize all user input before sending to the LLM.
* Handle all error cases gracefully and provide clear feedback.
* Keep prompt templates and schemas up to date with the latest requirements.
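
One way to apply the "validate and sanitize" practice is to cap context size and redact likely secrets before code reaches the prompt. The patterns and limits below are illustrative, not DeepLint's actual rules:

```typescript
// Illustrative sanitization pass before prompt construction.
const MAX_PROMPT_CHARS = 100_000;

function sanitizeForPrompt(source: string): string {
  // Redact string values assigned to names that look like credentials.
  const redacted = source.replace(
    /((?:api[_-]?key|secret|token|password)\s*[:=]\s*)["'][^"']+["']/gi,
    '$1"<redacted>"',
  );
  // Truncate oversized input to keep prompts within model limits.
  if (redacted.length > MAX_PROMPT_CHARS) {
    return redacted.slice(0, MAX_PROMPT_CHARS) + "\n// [truncated]";
  }
  return redacted;
}
```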

***

## See Also

* [LLM Types](https://github.com/deeplint-dev/deeplint-cli/blob/main/src/llm/types.ts)
* [OpenAI Provider](https://github.com/deeplint-dev/deeplint-cli/blob/main/src/llm/openai-provider.ts)
* [Prompt Template](https://github.com/deeplint-dev/deeplint-cli/blob/main/src/llm/prompt-template.ts)
* [Check Command Implementation](https://github.com/deeplint-dev/deeplint-cli/blob/main/src/commands/check-command.ts)
* [Configuration Guide](https://docs.deeplint.com/getting-started/configuration)
