DeepLint is still in the MVP development phase and is not yet available for use.

LLM Analysis

This document explains the end-to-end flow of LLM-powered analysis in DeepLint, including how context is built, how the LLM is invoked, how results are parsed and displayed, and how errors are handled.


Overview

DeepLint's LLM analysis feature leverages Large Language Models (LLMs) to provide advanced linting, code review, and suggestions. The flow is designed to be modular, extensible, and robust.


Analysis Flow

  1. Command Execution

    • User runs deeplint check [options] [files...]

    • CLI parses arguments and resolves the LLM configuration (precedence: CLI flags > environment variables > config file > defaults)

  2. Context Building

    • The ContextBuilder scans the repository and collects changed files, related files, and metadata

    • Context is assembled with code, diffs, structure, and statistics

  3. Prompt Preparation

    • The LLM provider (e.g., OpenAIProvider) formats the prompt using the template and context

    • Custom instructions (if any) are appended

  4. LLM API Call

    • The provider sends the prompt to the LLM API (e.g., OpenAI)

    • Model, API key, and other options are resolved from config/env/CLI

  5. Response Parsing

    • The provider parses the LLM response into a LintResult object (see the parsing sketch after this list)

    • Adds metadata (timestamp, provider, model, context size)

    • Calculates summary statistics if not provided

  6. Result Display

    • The CLI displays results using UI components (tables, banners, JSON)

    • Issues are grouped by file, with severity coloring and explanations
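
The sketch below illustrates the parsing step in TypeScript. The LintResult shape, the field names, and the parseLlmResponse helper are assumptions for illustration; DeepLint's actual types are defined in the LLM module and may differ.

```typescript
// A minimal sketch of response parsing (step 5), assuming the LLM returns
// JSON that matches the output schema. Names and shapes are illustrative.

interface LintIssue {
  file: string;
  line?: number;
  severity: "error" | "warning" | "info";
  message: string;
  explanation?: string;
}

interface LintResult {
  issues: LintIssue[];
  summary: { errors: number; warnings: number; info: number };
  metadata: {
    timestamp: string;
    provider: string;
    model: string;
    contextSize: number;
  };
}

function parseLlmResponse(
  raw: string,
  meta: { provider: string; model: string; contextSize: number }
): LintResult {
  // Parse the raw model output; a real implementation would validate it
  // against the output schema before trusting it.
  const parsed = JSON.parse(raw) as {
    issues?: LintIssue[];
    summary?: LintResult["summary"];
  };
  const issues = parsed.issues ?? [];

  // Calculate summary statistics if the response does not include them
  const summary = parsed.summary ?? {
    errors: issues.filter((i) => i.severity === "error").length,
    warnings: issues.filter((i) => i.severity === "warning").length,
    info: issues.filter((i) => i.severity === "info").length,
  };

  // Attach metadata (timestamp, provider, model, context size)
  return {
    issues,
    summary,
    metadata: { timestamp: new Date().toISOString(), ...meta },
  };
}
```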


Error Handling

  • All LLM errors are wrapped in the LLMError class for consistent handling (see the sketch after this list)

  • Common error cases:

    • Missing API key

    • API rate limits or quotas

    • Context too large for model

    • Network errors

    • Invalid or empty responses

    • Timeouts

  • Errors are logged and surfaced to the user with actionable messages

  • Fallbacks are provided (e.g., simple text output if table rendering fails)
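
As a sketch of that wrapping pattern: the LLMError name comes from the list above, but the error codes, fields, and the withLlmErrorHandling helper are assumptions for illustration.

```typescript
// A minimal sketch of consistent error wrapping; the real LLMError class
// in DeepLint may carry different fields and codes.

type LLMErrorCode =
  | "missing_api_key"
  | "rate_limited"
  | "context_too_large"
  | "network"
  | "invalid_response"
  | "timeout";

class LLMError extends Error {
  constructor(
    message: string,
    public readonly code: LLMErrorCode,
    public readonly originalError?: unknown
  ) {
    super(message);
    this.name = "LLMError";
  }
}

// Wrap a provider call so callers only ever see LLMError with an
// actionable message; the original error is kept for troubleshooting.
async function withLlmErrorHandling<T>(operation: () => Promise<T>): Promise<T> {
  try {
    return await operation();
  } catch (err) {
    if (err instanceof LLMError) throw err;
    // A real implementation would inspect the error to pick a specific code
    const message = err instanceof Error ? err.message : String(err);
    throw new LLMError(`LLM request failed: ${message}`, "network", err);
  }
}
```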


Prompt Customization

  • The prompt template is defined in src/llm/prompt-template.ts

  • Users can add custom instructions via the --instructions CLI flag (see the sketch after this list)

  • The prompt includes:

    • Identity and instructions for the LLM

    • Output format/schema

    • Full code context
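
A minimal sketch of how those pieces might be assembled, assuming a buildPrompt helper; the actual template in src/llm/prompt-template.ts may be structured differently.

```typescript
// A minimal sketch of prompt assembly; illustrative only.

const OUTPUT_SCHEMA = `Respond with JSON only, in the form:
{ "issues": [{ "file": string, "line": number, "severity": "error" | "warning" | "info",
  "message": string, "explanation": string }] }`;

function buildPrompt(codeContext: string, customInstructions?: string): string {
  const sections = [
    // Identity and instructions for the LLM
    "You are DeepLint, an expert code reviewer. Analyze the code context below and report issues.",
    // Output format/schema
    OUTPUT_SCHEMA,
    // Custom instructions from --instructions, appended when present
    customInstructions ? `Additional instructions:\n${customInstructions}` : "",
    // Full code context (files, diffs, structure, statistics)
    `Code context:\n${codeContext}`,
  ];
  return sections.filter(Boolean).join("\n\n");
}
```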


Extensibility

  • New providers can be added by implementing the LLMProvider interface (see the sketch after this list)

  • Prompt templates and schemas can be updated as requirements evolve

  • The analysis flow is designed to support additional features (e.g., custom rules, multiple providers)
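
As an illustration, the sketch below assumes an LLMProvider interface with a single analyze method and shows a hypothetical provider backed by the Anthropic Messages API; DeepLint's real interface may declare different methods and options.

```typescript
// A hypothetical new provider; the real LLMProvider interface and the
// options it receives may differ.

interface LLMProvider {
  readonly name: string;
  analyze(prompt: string, options: { model: string; apiKey: string }): Promise<string>;
}

class AnthropicProvider implements LLMProvider {
  readonly name = "anthropic";

  async analyze(prompt: string, options: { model: string; apiKey: string }): Promise<string> {
    // Send the prompt to the Anthropic Messages API and return the raw text;
    // the rest of the flow (parsing, display) stays unchanged.
    const response = await fetch("https://api.anthropic.com/v1/messages", {
      method: "POST",
      headers: {
        "content-type": "application/json",
        "x-api-key": options.apiKey,
        "anthropic-version": "2023-06-01",
      },
      body: JSON.stringify({
        model: options.model,
        max_tokens: 4096,
        messages: [{ role: "user", content: prompt }],
      }),
    });
    if (!response.ok) {
      throw new Error(`Anthropic API error: ${response.status}`);
    }
    const data = (await response.json()) as { content?: Array<{ text?: string }> };
    return data.content?.[0]?.text ?? "";
  }
}
```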


Developer Tips

  • Always validate and sanitize the context before sending it to the LLM (see the sketch after this list)

  • Use try/catch blocks around all LLM API calls

  • Log errors with enough context for troubleshooting

  • Keep prompt templates and schemas up to date
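
For the first tip, a minimal sketch of a pre-flight context check; the 4-characters-per-token heuristic and the limit are assumptions, since the real budget depends on the configured model.

```typescript
// A minimal sketch of validating context size before the LLM call.
// The heuristic and limit are rough assumptions; a real implementation
// would use the model's tokenizer and configured context window.

const MAX_CONTEXT_TOKENS = 100_000;

// Rough estimate: ~4 characters per token for typical source code
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function validateContext(context: string): void {
  if (context.trim().length === 0) {
    throw new Error("Context is empty: nothing to analyze");
  }
  const tokens = estimateTokens(context);
  if (tokens > MAX_CONTEXT_TOKENS) {
    throw new Error(
      `Context too large (~${tokens} tokens, limit ${MAX_CONTEXT_TOKENS}); ` +
        "exclude files or reduce the diff before retrying"
    );
  }
}
```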


See Also

For more detailed technical information about the LLM module implementation, see the LLM Module Overview page in the developer documentation.
