
AutoDev 1.0


In April, through the article "AutoDev: AI Breaks Through R&D Efficiency, Exploring New Opportunities in Platform Engineering", we outlined the initial impact of AI on software development. We established several fundamental assumptions:

  • Large and medium-sized enterprises will possess at least one proprietary large language model.
  • Only end-to-end tools can achieve quality and efficiency improvements through AI.

Based on these assumptions, we began building AutoDev and open-sourced it. I've also documented all of the development insights on my blog, hoping to help enterprises in China build their own AI-assisted programming capabilities.

As an open-source project, let's start with the GitHub address: https://github.com/unit-mesh/auto-dev.

Designing Three Assistance Modes Around Developer Experience

Initially, I didn't have a clear development blueprint. As a daily code-writing "expert-level" programmer, I built features based on my immediate needs.

Subsequently, I categorized all features into three assistance modes:

  • Auto Mode (completion)
  • Companion Mode (copilot)
  • Chat Mode

Auto Mode: Standardized Code Generation

Trigger method: All auto modes are under Context Actions, activated using the universal shortcut: ⌥⏎ (macOS) or Alt+Enter (Windows/Linux).

Design philosophy: Similar to the one-click pattern we designed in ClickPrompt. Generated code shouldn't look like the flashy demos circulating online; it has to respect a team's existing conventions, otherwise it stays unusable. Focusing on configurability and implicit-knowledge scenarios, we implemented three Auto scenarios:

  1. Auto CRUD: Reads requirements from issue systems, uses a manually coded agent for continuous interaction to find suitable controllers, modify methods, add new methods, etc. Currently supports Kotlin and JavaScript.
  2. Auto Test Generation: Generates and automatically runs tests for selected classes/methods (when RunConfiguration is appropriate). Supports JavaScript, Kotlin, and Java.
  3. Auto Code Completion: Context-aware code filling. Capabilities vary by language due to limited resources:
    • Java: Incorporates code specifications
    • Kotlin/Java: Adds parameter/return type classes as context
    • Other languages: Uses a token-similarity approach (don't ask where the inspiration came from) comparable to GitHub Copilot and JetBrains AI Assistant; see the sketch after this list
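
For the similarity-based languages, the idea is roughly what GitHub Copilot popularized: chunk recently opened files into windows, tokenize them, and rank chunks by Jaccard overlap with the code around the cursor. A minimal sketch of that idea (illustrative only, not AutoDev's actual code; the chunking and tokenization are simplified):

```python
import re

def tokens(code: str) -> set[str]:
    """Split code into a bag of identifier-like tokens."""
    return set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", code))

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def rank_snippets(cursor_context: str, neighbour_files: list[str],
                  window: int = 20, top_k: int = 3) -> list[str]:
    """Slide a fixed-size line window over recently opened files and keep the
    chunks whose tokens overlap most with the code around the cursor."""
    query = tokens(cursor_context)
    scored = []
    for text in neighbour_files:
        lines = text.splitlines()
        step = max(window // 2, 1)
        for start in range(0, max(len(lines) - window, 0) + 1, step):
            chunk = "\n".join(lines[start:start + window])
            scored.append((jaccard(query, tokens(chunk)), chunk))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [chunk for score, chunk in scored[:top_k] if score > 0]
```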

Each auto mode includes automated context preparation. The following image shows visible context for code completion:

(Figure: the visible context collected for code completion)

This context combines configured specifications with BlogController-related fields, parameters, return types (e.g., BlogService), etc.

Additionally, hidden contexts exist, such as language declarations in AutoDev configurations:

You MUST Use 中文 to return your answer !

Interestingly, even with "中文" stated twice, the model ignores the instruction roughly half the time; we're considering repeating it three times.
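
Putting the pieces together, the final prompt is roughly a concatenation of the configured specification, the collected code context, and the hidden instructions. A hypothetical sketch of that assembly follows; the function and template structure are illustrative, not AutoDev's actual prompt builder, but they show why the repetition trick above is a one-line tweak:

```python
def build_prompt(spec: str, code_context: str, user_request: str,
                 language_rule: str = "You MUST Use 中文 to return your answer !",
                 repeats: int = 2) -> str:
    """Combine the visible context (team spec + related code) with the hidden
    instructions; `repeats` controls how often the language rule is stated."""
    hidden = "\n".join([language_rule] * repeats)
    return (
        f"{hidden}\n\n"
        f"// Team specification:\n{spec}\n\n"
        f"// Related code:\n{code_context}\n\n"
        f"{user_request}"
    )
```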

Companion Mode: Daily Workflow Integration

When designing companion mode, we referenced existing tools like AI Commit while addressing personal needs.

Since companion modes require waiting for LLM responses, they're grouped under AutoDev Chat.

However, JetBrains AI Assistant has become AutoDev's main competitor (and reference) since its release. Features like "Explain with AI" and "Explain error message with AI" demonstrate excellent UX - areas where we still have room for improvement.

In AutoDev, you can:

  • Generate commit messages
  • Create release notes
  • Explain code
  • Refactor code
  • ...and even generate DDL directly

Chat Mode: A Peripheral Feature

After redesigning the UI (borrowing from JetBrains' approach, since their AI Assistant offers limited support in China), we implemented one-click chat via Context Actions (see Figure 1).

You can chat freely there.

Reflections on LLM as Copilot

For now, LLMs serve as Copilots. They won't replace software engineering expertise; instead, they extend professional capabilities through AI-assisted tooling, changing how individual engineers work.

They should address "tasks I avoid" and "repetitive tasks" - writing tests, coding, resolving issues, commits, etc. As programmers, we should focus on creative design.

AutoDev focuses on: How can AI better assist human work while keeping engineers within their IDEs?

The LLM as Copilot concept will see increasing tool refinement. As seasoned AI application engineers, we're contemplating how LLM as Co-Integrator can truly boost efficiency.

FAQ

How to Integrate Domestic/Private LLMs?

We provide a Custom LLM Server Python interface example in the source code. Due to limited resources, we've only tested with internally deployed ChatGLM2. For other needs, please discuss via GitHub issues.
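For orientation, here is a deliberately minimal sketch of what such a server amounts to: a single chat endpoint in front of an internally deployed model. The route path, JSON fields, and the placeholder model call below are assumptions for illustration; the authoritative request/response schema is the Python example shipped with AutoDev.

```python
# Minimal sketch of a self-hosted "custom LLM server" (illustrative only).
# The endpoint path and JSON fields are assumptions - follow the Python
# example bundled with AutoDev for the real interface.
from flask import Flask, jsonify, request

app = Flask(__name__)

def call_local_model(prompt: str) -> str:
    """Placeholder for an internally deployed model such as ChatGLM2."""
    return f"echo: {prompt}"  # replace with the real inference call

@app.post("/api/chat")  # hypothetical route; point AutoDev's custom server URL here
def chat():
    payload = request.get_json(force=True) or {}
    messages = payload.get("messages") or [{}]
    prompt = messages[-1].get("content", "")
    return jsonify({"response": call_local_model(prompt)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```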

Why Only an IntelliJ Version?

As someone who has built new-language plugins, contributed to the IntelliJ Community/Android Studio source code, and optimized the HarmonyOS IDE architecture, I specialize in JetBrains IDE development.

When Will VS Code Version Arrive?

Short answer: Not soon.

Though I've studied the VS Code/X Editor source code, several obstacles remain:

  1. VS Code lacks critical IDE interfaces
  2. Implementation challenges:
    • TextMate-based tokenization (Oniguruma regexes don't yield reliable structure)
    • Limited LSP implementation references
  3. No quality reference implementations

The ideal approach would be GitHub Copilot-style IDE-agnostic Agent mechanisms with TreeSitter for language processing.
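
To make that concrete, here is a rough sketch of the TreeSitter half of the idea: parse a source file outside any IDE and pull out method declarations to use as prompt context. It relies on the py-tree-sitter bindings (the grammar-loading API differs between versions) and is only an illustration of the approach, not AutoDev or Copilot code:

```python
# Sketch only: uses the classic py-tree-sitter (< 0.22) loading API; newer
# releases ship prebuilt grammar packages and load them differently.
from tree_sitter import Language, Parser

Language.build_library("build/languages.so", ["vendor/tree-sitter-java"])
JAVA = Language("build/languages.so", "java")

parser = Parser()
parser.set_language(JAVA)

def method_signatures(source: bytes) -> list[str]:
    """Collect the signature line of every Java method declaration: a cheap,
    editor-agnostic way to build prompt context outside any IDE."""
    tree = parser.parse(source)
    signatures = []
    stack = [tree.root_node]
    while stack:
        node = stack.pop()
        if node.type == "method_declaration":
            text = source[node.start_byte:node.end_byte].decode("utf-8", "ignore")
            signatures.append(text.split("{", 1)[0].strip())
        stack.extend(node.children)
    return signatures
```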

Additional Notes

AutoDev positions LLMs as developer Copilots, providing assistance tools to handle tedious tasks, enabling engineers to focus on creative design and problem-solving.