DeepLint is still in the MVP development phase and not yet available for use.

Commands

This guide explains how to use the DeepLint default command to analyze your code.

Overview

The default command is the main command in DeepLint that runs when you don't specify a command name. It analyzes your code by:

  1. Building context from your codebase

  2. Analyzing the context for issues using an LLM

  3. Displaying the results with detailed feedback

This is the command you'll use most frequently when working with DeepLint.

Basic Usage

To run the default command, simply use the deeplint command without specifying a command name:

deeplint

This will analyze the staged changes in your Git repository and display the results.

The default command requires a Git repository to work. If you're not in a Git repository, the command will display an error message; if there are no staged changes, it will warn you that there is nothing to analyze.
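
For example, a typical run stages the changes you want reviewed and then invokes DeepLint (the file path here is only a placeholder):

git add src/example.ts
deeplint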

Command Options

The default command supports the following options:

OPTIONS
  --debug               Enable debug output              [default: false]
  --dump=<file>         Dump context to a file
  --unstaged            Include unstaged changes         [default: false]
  --verbose, -v         Enable verbose output            [default: false]
  --json                Output results in JSON format    [default: false]
  --provider=<provider> LLM provider to use              [default: openai]
  --model=<model>       LLM model to use                 [default: gpt-4o]
  --api-key=<key>       API key for the LLM provider
  --temperature=<temp>  Temperature for the LLM (0-1)
  --max-tokens=<num>    Maximum tokens for LLM response
  --instructions=<text> Additional instructions for LLM
  --help, -h            Display help for this command

Context Options

--unstaged

The --unstaged option includes unstaged changes in the analysis:

deeplint --unstaged

By default, DeepLint only analyzes staged changes. This option allows you to analyze unstaged changes as well.

LLM Options

--provider=<provider>

The --provider option allows you to specify the LLM provider to use:

deeplint --provider=openai

Currently, only openai is supported.

--model=<model>

The --model option allows you to specify the LLM model to use:

deeplint --model=gpt-4

The default is gpt-4o, or the value of the OPENAI_MODEL environment variable if it is set.
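
For example, you can set the model once through the environment (shown here for a POSIX shell) instead of repeating the flag on every run:

export OPENAI_MODEL=gpt-4o
deeplint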

--api-key=<key>

The --api-key option allows you to specify the API key for the LLM provider:

deeplint --api-key=sk-...

If not provided, DeepLint will use the OPENAI_API_KEY environment variable.
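
For example, you can export the key in your shell session rather than passing it on the command line:

export OPENAI_API_KEY=sk-...
deeplint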

--temperature=<temp>

The --temperature option allows you to specify the temperature for the LLM (0-1):

deeplint --temperature=0.7

Higher values make the output more random; lower values make it more deterministic.
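
For example, if you want more consistent results between runs of the same analysis, use a low temperature:

deeplint --temperature=0.2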

--max-tokens=<num>

The --max-tokens option allows you to specify the maximum number of tokens for the LLM response:

deeplint --max-tokens=8192

--instructions=<text>

The --instructions option allows you to provide additional instructions for the LLM:

deeplint --instructions="Focus on security issues and performance optimizations."
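
The LLM options can be combined on a single invocation, for example pairing custom instructions with a specific model and a low temperature:

deeplint --model=gpt-4o --temperature=0.2 --instructions="Focus on security issues."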

Output Options

--debug

The --debug option enables debug output:

deeplint --debug

This displays detailed technical information about the command execution, including:

  • Configuration values

  • Context building details

  • Token usage

  • Error details

--dump=<file>

The --dump option allows you to dump the context to a file:

deeplint --dump=context.json

This is useful for debugging and for understanding what information DeepLint is using for analysis.
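
For example, assuming the dumped context is a JSON object (as the .json extension suggests), you can inspect its top-level keys with jq:

deeplint --dump=context.json
jq 'keys' context.json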

--verbose, -v

The --verbose option enables verbose output:

deeplint --verbose

This displays additional information about what DeepLint is doing, such as:

  • Configuration loading and validation

  • Context building steps

  • Analysis progress

--json

The --json option outputs the results in JSON format:

deeplint --json

This is useful for integrating DeepLint with other tools or for parsing the results programmatically.
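
For example, you can save the report for another tool or pretty-print it with jq; the exact JSON schema is not documented here, so inspect the actual output before writing filters against specific fields:

deeplint --json > deeplint-report.json
jq '.' deeplint-report.json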

Command Aliases

The default command has the following aliases:

  • Full-Word Aliases:

    • run: Run DeepLint on staged changes

    • check: Analyze code for issues (for backward compatibility)

    • lint: Analyze code for issues (for backward compatibility)

    • analyze: Analyze code for issues (for backward compatibility)

  • Short Aliases:

    • r: Run DeepLint on staged changes

    • c: Analyze code for issues (for backward compatibility)

You can use any of these aliases instead of the default command:

deeplint run
deeplint check
deeplint r
deeplint c

Examples

Here are some examples of using the default command:

deeplint

Analyzes staged changes with default options.
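
A few more combinations of the options documented above, assuming they can be freely combined on a single invocation:

deeplint --unstaged

Analyzes unstaged changes in addition to staged changes.

deeplint --verbose --dump=context.json

Analyzes staged changes with verbose output and writes the context used for analysis to context.json.

deeplint --json --temperature=0.2

Analyzes staged changes at a low temperature and prints the results as JSON.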

Understanding the Output

The default command output includes:

Analysis Summary

A summary of the analysis results, including:

  • Number of files analyzed

  • Number of issues found by severity (error, warning, info, hint)

  • Number of affected files

Example:

✅ Analysis complete: found 5 issues in 2 files
Errors: 1 | Warnings: 2 | Info: 1 | Hints: 1

Issue Details

Details about each issue found, including:

  • Issue severity (error, warning, info, hint)

  • Issue location (file, line, column)

  • Issue message

  • Code snippet (if available)

  • Explanation

  • Suggestion (if available)

Example:

File: src/utils.ts

✖ error | Line 42 | Potential null reference
Code snippet: const result = user.profile.name;
Explanation: The 'profile' property might be null or undefined, which would cause a runtime error.
Suggestion: Add a null check before accessing the property: const result = user.profile?.name;

Troubleshooting

No Staged Changes

If there are no staged changes, the default command will display a warning:

⚠️ No changes detected. Nothing to analyze.

To analyze unstaged changes, use the --unstaged option:

deeplint --unstaged
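
Alternatively, stage the changes you want DeepLint to analyze and run the command again:

git add <files>
deeplint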

Missing API Key

If the OpenAI API key is missing, the command will display an error:

❌ OpenAI API key is required. Set it in .env file or pass it in the config.

Make sure to set the OPENAI_API_KEY environment variable or use the --api-key option.
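
For example, a minimal .env file in the project root (assuming the standard KEY=value dotenv format) looks like this:

OPENAI_API_KEY=sk-...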

Token Limit Exceeded

If the context is too large for the token limit, the command will display a warning:

⚠️ Token limit exceeded. Some files may be truncated or excluded.

To stay within the token limit, you can:

  • Reduce the number of files being analyzed

  • Increase the token limit in the configuration

  • Reduce the token limit per file in the configuration

Next Steps

Now that you understand how to use the default command, you can:

  1. Configure DeepLint to customize the analysis

  2. Set up Git integration to run DeepLint automatically

  3. Learn about verbose mode for more detailed output

For more information about DeepLint's commands, use the help command:

deeplint help
