Cloud or local? JetBrains now offers both options for AI-assisted code completion.
The first has been available since late 2023. It requires an additional subscription on top of commercial licenses. JetBrains recommends it "when you have a goal but aren't quite sure how to implement it."
The local option has just made its debut in version 2024.1 of the JetBrains products. It is currently included at no extra cost in IntelliJ IDEA Ultimate, PyCharm Professional, WebStorm, PhpStorm, GoLand, and RubyMine. Integration into Rider, RustRover, and CLion is promised "in the coming months."
A 100M-parameter model for JetBrains IDEs
Recommended usage context: "when you know exactly what you want to write and want to save about 20% of the necessary typing on the keyboard." This is therefore not the same level of assistance as the cloud option. It must be said that the underlying language model is smaller: 100 million parameters, with a context window of 1536 tokens, or about 170 lines of code. JetBrains says it trained the model on open-source code released under permissive licenses.
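As a rough sanity check of those figures, a 1536-token window mapping to about 170 lines implies an average of roughly nine tokens per line of code. A minimal sketch of the arithmetic (the tokens-per-line average is an inference from the article's own numbers, not a figure JetBrains publishes):

```python
# JetBrains' stated figures for the local model's context window.
context_window_tokens = 1536
approx_lines_of_code = 170

# Implied average tokens per line of code (assumed uniform for illustration).
tokens_per_line = context_window_tokens / approx_lines_of_code
print(f"~{tokens_per_line:.1f} tokens per line")  # ~9.0 tokens per line
```

By comparison, cloud models typically offer context windows of thousands to hundreds of thousands of tokens, which is consistent with the gap in assistance level the article describes.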
The cloud option is convenient for generating whole code blocks. The local option is limited to single-line completion, based mostly on what precedes the caret, but sometimes on the contents of related files. Inference runs in its own dedicated process, optimized for the target architecture (CPU on x86-64, GPU on ARM64, etc.).
The cloud assistant takes priority if both are enabled. As it stands, it is essentially based on OpenAI models. There is talk of integrations with, among others, Google's Codey and Vertex AI.
Further reading:
GitHub Copilot & co.: creators of tech debt?
Stable Code, a lightweight among LLM coders
Which LLM for developing in COBOL? A dedicated benchmark appears
Local LLMs: the Opera One browser experiment