Installation & Setup

There are three ways to install the plugin:

- JetBrains Marketplace
- Custom Repository
- GitHub Releases

JetBrains Marketplace

Install directly from JetBrains Marketplace: AutoDev Sketch

Note: The plugin is named AutoDev Sketch in JetBrains Marketplace, with the ID com.unitmesh.autodev. This differs from the custom repository and GitHub release versions. The Marketplace version works with IDE 2024.1 and newer (241+).
Custom Repository

- Go to Settings → Plugins → Marketplace → Manage Plugin Repositories
- Add the following URL: https://plugin.unitmesh.cc/updatePlugins.xml
GitHub Releases

- Download the appropriate version from GitHub Releases:
  - AutoDev-*-222.zip — for IDE versions 2022.2 to 2023.2
  - AutoDev-*-233.zip — for IDE version 2023.3
  - AutoDev-*-241.zip — for IDE versions 2024.1 and newer
- Install the plugin from disk in your JetBrains IDE
Configuration

After installation, configure the plugin in Settings → Tools → AutoDev.

Default LLM

Supported providers: Deepseek (#96), Moonshot AI, ChatGLM (#90)
- Basic Configuration
- Advanced Configuration

Basic Configuration

- Open the AutoDev configuration in Settings → Tools → AutoDev
- Configure the LLM Server URL, e.g.:
  - Deepseek: https://api.deepseek.com/chat/completions
  - OpenAI: https://api.openai.com/v1/chat/completions
- Enter your LLM Key (API Key)
- Set Custom Response Format using JsonPath, e.g. $.choices[0].delta.content (see the sample response chunk below)
- Configure Custom Request Format, e.g. { "customFields": {"model": "deepseek-chat", "stream": true }}
For more detailed configuration options, see Custom LLM Server.
Additional Models

Available model types:

- Plan: For reasoning and planning (Recommended: DeepSeek R1)
- Completion: For code completion
- FastApply: For fix patch generation (e.g., Kortix/FastApply-1.5B-v1.0)
- Others: General placeholder
- Act: (Not ready) For action execution (e.g., DeepSeek V3, Qwen 72B)
- Embedding: (Not ready) For embedding functions (e.g., sentence-transformers/all-MiniLM-L6-v2)
Example configurations:

DeepSeek R1

{
  "name": "DeepSeek R1",
  "url": "https://api.deepseek.com/chat/completions",
  "auth": {
    "type": "Bearer",
    "token": "sk-ii"
  },
  "requestFormat": "{ \"customFields\": {\"model\": \"deepseek-reasoner\", \"stream\": true}}",
  "responseFormat": "$.choices[0].delta.content",
  "modelType": "Plan"
}

GLM4-Plus

{
  "name": "GLM4-Plus",
  "url": "https://open.bigmodel.cn/api/paas/v4/chat/completions",
  "auth": {
    "type": "Bearer",
    "token": "sk-ii"
  },
  "requestFormat": "{ \"customFields\": {\"model\": \"glm-4-plus\", \"stream\": true}}",
  "responseFormat": "$.choices[0].delta.content",
  "modelType": "FastApply"
}

DifyAI

{
  "name": "DifyAI",
  "description": "Dify Example",
  "url": "https://api.dify.ai/v1/completion-messages",
  "auth": {
    "type": "Bearer",
    "token": "app-abcd"
  },
  "requestFormat": "{\"fields\": {\"inputs\": {\"feature\": \"$content\"}, \"response_mode\": \"streaming\", \"user\": \"phodal\" }}",
  "responseFormat": "$.answer",
  "modelType": "Others"
}
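The Dify endpoint above streams a different response shape, which is why its responseFormat is $.answer rather than a choices path. A streamed Dify message chunk looks roughly like the following sketch (abridged and illustrative, not captured from a real session):

```json
{
  "event": "message",
  "task_id": "9da23599-e713-473b-982c-4328d4f5c78a",
  "message_id": "a8bdc41c-13b2-4c18-bfd9-054b9803038c",
  "answer": "Hello",
  "created_at": 1705395332
}
```

In both cases, the JsonPath in responseFormat is what turns a provider-specific chunk into plain text.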
Configuration fields:

- URL: the LLM server address, including the endpoint path
- Auth: authentication information (currently only Bearer tokens are supported)
- RequestFormat: the JSON structure used for API requests
- ResponseFormat: the JsonPath expression used to extract content from the response
- ModelType: the model type (see the list above)
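To make the request side concrete: with the DeepSeek R1 entry above, the body sent to the server would presumably look like the sketch below. Treat this as an assumption for illustration, not documented AutoDev behavior: only model and stream come from the entry's customFields; the messages array is assumed here because the URL is an OpenAI-style chat-completions endpoint, and the prompt text is a placeholder.

```json
{
  "model": "deepseek-reasoner",
  "stream": true,
  "messages": [
    { "role": "user", "content": "<prompt built by the IDE>" }
  ]
}
```

Each chunk of the streamed reply is then reduced to plain text with the entry's responseFormat JsonPath.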