Note: DeepLint is still in the MVP development phase and not yet available for use.

Check Command

Note: The check command has been merged with the default command. Both deeplint and deeplint check now provide the same functionality. This documentation is kept for reference, but we recommend using the default command (deeplint) directly.

The check command runs DeepLint's LLM-powered analysis on your codebase, providing advanced linting, code review, and suggestions using AI.


Usage

deeplint [options]

or (for backward compatibility):

deeplint check [options]

DeepLint analyzes staged changes by default.


Options

| Option | Type | Description | Default / Source |
| --- | --- | --- | --- |
| `--provider` | string | LLM provider to use | `"openai"` |
| `--model` | string | LLM model to use | `process.env.OPENAI_MODEL` or `"gpt-4o"` |
| `--api-key` | string | API key for the LLM provider | `process.env.OPENAI_API_KEY` |
| `--instructions` | string | Additional instructions for the LLM | none |
| `--json` | boolean | Output results in JSON format | `false` |
| `--context` | string | Context depth for analysis (`light` or `deep`) | `"light"` |
| `--unstaged` | boolean | Include unstaged changes in the analysis | `false` |
| `--debug` | boolean | Enable debug output | `false` |
| `--verbose` | boolean | Enable verbose output | `false` |
| `--temperature` | number | Temperature for the LLM (0–1) | `0` |
| `--max-tokens` | number | Maximum tokens for the LLM response | `16384` |
| `--dump` | string | Dump context to a file (specify filename) | none |


Examples

Basic LLM Analysis
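
Run the default analysis on your staged changes:

deeplint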

Analyze with a Custom Model
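
Use --model to select any model your OpenAI account can access (gpt-4-turbo here is just an example):

deeplint --model gpt-4-turbo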

Add Custom Instructions
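
Pass extra guidance to the LLM with --instructions (the instruction text below is illustrative):

deeplint --instructions "Focus on security issues"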

Output as JSON
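
Emit machine-readable results instead of the formatted table:

deeplint --json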

Analyze Unstaged Changes
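
Include unstaged changes alongside staged ones:

deeplint --unstaged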

Set Temperature for More Creative Suggestions
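
Raise the temperature (0–1, default 0) for more varied suggestions; 0.7 is an arbitrary example value:

deeplint --temperature 0.7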

Dump Context to a File
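
Write the assembled context to a file of your choosing (the filename is just an example):

deeplint --dump context.txt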


Output

By default, DeepLint displays results in a formatted table, grouped by file, with severity coloring and detailed explanations.

If you use --json, the output will be a machine-readable JSON object matching the LLM result schema.


LLM Options and Precedence

LLM options can be set via CLI flags, environment variables, or a config file. Precedence, from highest to lowest, is:

  1. CLI arguments

  2. Environment variables

  3. Config file (llm section)

  4. Built-in defaults
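
For example, if OPENAI_MODEL is set in your environment but you also pass --model on the command line, the CLI value wins:

deeplint --model gpt-4-turbo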

See the Configuration Guide for details.


Troubleshooting

  • Missing API Key: Set the OPENAI_API_KEY environment variable (see the example below) or pass --api-key.

  • Model Not Supported: Check your OpenAI account/model access.

  • Rate Limits/Quotas: See your OpenAI dashboard for usage.

  • Network Errors: Ensure you have an internet connection.
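
To set the API key for the current shell session (macOS/Linux; the key value shown is a placeholder):

export OPENAI_API_KEY="sk-..."
deeplint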


See Also

  • Configuration Guide
