Commands
This guide explains how to use the DeepLint default command to analyze your code.
Overview
The default command is the main command in DeepLint that runs when you don't specify a command name. It analyzes your code by:
Building context from your codebase
Analyzing the context for issues using an LLM
Displaying the results with detailed feedback
This is the command you'll use most frequently when working with DeepLint.
Basic Usage
To run the default command, simply use the deeplint command without specifying a command name:
deeplint
This will analyze the staged changes in your Git repository and display the results.
Command Options
The default command supports the following options:
OPTIONS
--debug Enable debug output [default: false]
--dump=<file> Dump context to a file
--unstaged Include unstaged changes [default: false]
--verbose, -v Enable verbose output [default: false]
--json Output results in JSON format [default: false]
--provider=<provider> LLM provider to use [default: openai]
--model=<model> LLM model to use [default: gpt-4o]
--api-key=<key> API key for the LLM provider
--temperature=<temp> Temperature for the LLM (0-1)
--max-tokens=<num> Maximum tokens for LLM response
--instructions=<text> Additional instructions for LLM
--help, -h Display help for this command
Context Options
--unstaged
The --unstaged option includes unstaged changes in the analysis:
deeplint --unstaged
By default, DeepLint only analyzes staged changes. This option lets you analyze unstaged changes as well.
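The staged/unstaged distinction comes straight from Git. A quick sketch of which changes fall into each scope, using plain Git commands in a throwaway repository (no DeepLint required):

```shell
# Demo repo showing the two scopes: "staged" (DeepLint's default input)
# vs "unstaged" (included only with --unstaged). Uses only Git, not DeepLint.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

echo "first edit" > file.txt
git add file.txt                 # staged: analyzed by default
echo "second edit" >> file.txt   # left unstaged: needs --unstaged

echo "--- staged (default scope) ---"
git diff --cached --name-only    # lists file.txt (the staged version)
echo "--- unstaged (--unstaged scope) ---"
git diff --name-only             # lists file.txt again (the extra edit)
```

The same file can appear in both scopes at once: the staged copy is what sits in the index, while the unstaged copy is whatever differs between the index and your working tree.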
LLM Options
--provider=<provider>
The --provider option allows you to specify the LLM provider to use:
deeplint --provider=openai
Currently, only openai is supported.
--model=<model>
The --model option allows you to specify the LLM model to use:
deeplint --model=gpt-4
The default is gpt-4o, or the value of the OPENAI_MODEL environment variable.
--api-key=<key>
The --api-key option allows you to specify the API key for the LLM provider:
deeplint --api-key=sk-...
If not provided, DeepLint will use the OPENAI_API_KEY environment variable.
--temperature=<temp>
The --temperature option allows you to specify the temperature for the LLM (0-1):
deeplint --temperature=0.7
Higher values make the output more random; lower values make it more deterministic.
--max-tokens=<num>
The --max-tokens option allows you to specify the maximum number of tokens for the LLM response:
deeplint --max-tokens=8192
--instructions=<text>
The --instructions option allows you to provide additional instructions for the LLM:
deeplint --instructions="Focus on security issues and performance optimizations."
Output Options
--debug
The --debug option enables debug output:
deeplint --debug
This displays detailed technical information about the command execution, including:
Configuration values
Context building details
Token usage
Error details
--dump=<file>
The --dump option allows you to dump the context to a file:
deeplint --dump=context.json
This is useful for debugging and for understanding what information DeepLint is using for analysis.
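The dump is a JSON file, so standard tools can inspect it. A sketch using a stand-in file, since the real schema produced by --dump is not documented in this guide:

```shell
# Stand-in for a real context dump; the actual structure written by
# `deeplint --dump=context.json` may differ from this hypothetical shape.
cat > context.json <<'EOF'
{"files": [{"path": "src/utils.ts", "status": "staged"}]}
EOF

# Pretty-print any JSON dump for inspection (Python stdlib, no extra tools):
python3 -m json.tool context.json
```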
--verbose, -v
The --verbose option enables verbose output:
deeplint --verbose
This displays additional information about what DeepLint is doing, such as:
Configuration loading and validation
Context building steps
Analysis progress
--json
The --json option outputs the results in JSON format:
deeplint --json
This is useful for integrating DeepLint with other tools or for parsing the results programmatically.
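JSON output is convenient to post-process in scripts. The exact schema is not specified in this guide, so the example below assumes a hypothetical shape with a top-level issues array (field names may differ) and counts errors from a saved sample:

```shell
# Hypothetical sample of `deeplint --json > results.json` output; the real
# field names are not confirmed by this guide.
cat > results.json <<'EOF'
{"issues": [
  {"severity": "error",   "file": "src/utils.ts", "line": 42,
   "message": "Potential null reference"},
  {"severity": "warning", "file": "src/utils.ts", "line": 50,
   "message": "Unused variable"}
]}
EOF

# Count the errors with Python's stdlib json module:
python3 - <<'EOF'
import json

with open("results.json") as f:
    issues = json.load(f)["issues"]

errors = sum(1 for i in issues if i["severity"] == "error")
print(f"errors: {errors}")
EOF
```

The same pattern works for gating CI jobs: exit non-zero from the script when the error count is above zero.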
Command Aliases
The default command has the following aliases:
Full-Word Aliases:
run: Run DeepLint on staged changes
check: Analyze code for issues (for backward compatibility)
lint: Analyze code for issues (for backward compatibility)
analyze: Analyze code for issues (for backward compatibility)
Short Aliases:
r: Run DeepLint on staged changes
c: Analyze code for issues (for backward compatibility)
You can use any of these aliases instead of the default command:
deeplint run
deeplint check
deeplint r
deeplint c
Examples
Here are some examples of using the default command:
deeplint
Analyzes staged changes with default options.
deeplint --unstaged
Analyzes unstaged changes in addition to staged changes.
deeplint --model=gpt-4
Analyzes staged changes using the GPT-4 model.
deeplint --temperature=0.7
Analyzes staged changes with a higher temperature for more creative suggestions.
deeplint --instructions="Focus on security issues."
Analyzes staged changes with custom instructions for the LLM.
deeplint --json
Analyzes staged changes and outputs the results in JSON format.
deeplint --debug
Analyzes staged changes with debug output enabled.
deeplint --dump=context.json
Analyzes staged changes and dumps the context to a file.
deeplint --verbose
Analyzes staged changes with verbose output enabled.
deeplint --unstaged --verbose --model=gpt-4 --temperature=0.7
Analyzes both staged and unstaged changes with verbose output, the GPT-4 model, and a higher temperature.
Understanding the Output
The default command output includes:
Analysis Summary
A summary of the analysis results, including:
Number of files analyzed
Number of issues found by severity (error, warning, info, hint)
Number of affected files
Example:
✅ Analysis complete: found 5 issues in 2 files
Errors: 1 | Warnings: 2 | Info: 1 | Hints: 1
Issue Details
Details about each issue found, including:
Issue severity (error, warning, info, hint)
Issue location (file, line, column)
Issue message
Code snippet (if available)
Explanation
Suggestion (if available)
Example:
File: src/utils.ts
✖ error | Line 42 | Potential null reference
Code snippet: const result = user.profile.name;
Explanation: The 'profile' property might be null or undefined, which would cause a runtime error.
Suggestion: Add a null check before accessing the property: const result = user.profile?.name;
Troubleshooting
No Staged Changes
If there are no staged changes, the default command will display a warning:
⚠️ No changes detected. Nothing to analyze.
To analyze unstaged changes, use the --unstaged option:
deeplint --unstaged
Missing API Key
If the OpenAI API key is missing, the command will display an error:
❌ OpenAI API key is required. Set it in .env file or pass it in the config.
Make sure to set the OPENAI_API_KEY environment variable or use the --api-key option.
Token Limit Exceeded
If the context is too large for the token limit, the command will display a warning:
⚠️ Token limit exceeded. Some files may be truncated or excluded.
To reduce token usage, you can:
Reduce the number of files being analyzed
Increase the token limit in the configuration
Reduce the token limit per file in the configuration
Next Steps
Now that you understand how to use the default command, you can:
Configure DeepLint to customize the analysis
Set up Git integration to run DeepLint automatically
Learn about verbose mode for more detailed output
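For the Git-integration step, one common pattern is a pre-commit hook that runs DeepLint on the staged changes before every commit. This is only a sketch: it assumes DeepLint exits non-zero when issues are found, which this guide does not confirm, so check the Git integration documentation for the supported setup.

```shell
#!/bin/sh
# .git/hooks/pre-commit  (make it executable: chmod +x .git/hooks/pre-commit)
# Sketch: block the commit if DeepLint reports problems. Assumes a non-zero
# exit code on failure, which is not confirmed by this guide.
if ! deeplint; then
  echo "DeepLint found issues; commit aborted." >&2
  echo "Fix the issues, or bypass the hook with: git commit --no-verify" >&2
  exit 1
fi
```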
For more information about DeepLint's commands, use the help command:
deeplint help