Podcast

Why the Top 1% of Devs Love IntelliJ

With

Anton Arhipov

3 Jun 2025

IntelliJ: More than just an IDE

Episode Description

From early IntelliJ user to JetBrains advocate, Anton Arhipov continues his conversation with Simon Maple and Baptiste Fernandez on AI Native Dev, tracing his journey through tools, trends, and the changing face of developer experience. On the docket:

• How IntelliJ responds to new tools in the market
• Understanding the pros and cons of tab-driven development for devs
• Why tools will need to get better in PR workflows
• ZeroTurnaround’s Java reloading that ended JVM restarts
• The four must-know IntelliJ plugins for every developer

Overview

Anton’s Journey from ZeroTurnaround to JetBrains

Anton Arhipov recounted his career evolution, highlighting his transition from ZeroTurnaround—famous for its JRebel tool, enabling Java class reloading without JVM restarts—to JetBrains. Initially joining JetBrains on the TeamCity project, he later specialized in Kotlin advocacy, speaking extensively about the language and its ecosystem.

AI’s Impact on IntelliJ and Developer Experience

Anton discussed the significant shift brought by AI tools like Copilot and Cursor, noting their transformative impact on developer workflows. These tools introduced capabilities like chat-based code generation and multi-file integration directly into editors, fundamentally changing user expectations around developer assistance and productivity.

Essential IntelliJ AI Plugins

Anton shared that IntelliJ currently hosts four critical AI-driven plugins: the AI Assistant, Junie (an agentic tool), Full Line Completion (FLCC), and Grazie (a spell checker foundational to IntelliJ’s AI functionalities). He elaborated on their distinct functionalities, ranging from inline completions and snippet generation to comprehensive project-wide code interactions.

Tab-driven Development and AI’s UX Challenge

The conversation delved into tab-driven development, a UX paradigm popularized by AI tools where developers accept suggested edits with minimal effort. Anton highlighted this approach’s productivity benefits but also mentioned the cognitive overload caused by constant suggestions. He stressed the balance needed in AI assistance to enhance rather than overwhelm developer workflows.

Future of PR Workflows and Debugging with AI

Anton envisioned a future where AI-driven development significantly affects pull request (PR) workflows and debugging practices. He underscored the necessity for improved PR visualization tools to manage AI-generated code and emphasized the increasing importance of robust testing frameworks, given the challenge of verifying AI-generated tests and code quality effectively.

Chapters

00:00 Trailer
00:48 Intro
05:28 IntelliJ and AI Tools
14:45 Live Demo
32:29 Q&A
35:05 Outro

Full Script

[00:00:54] Simon: Hello and welcome to another episode of the AI Native Dev. I am still here at, uh, DevOxx. And joining me is actually an old friend from many years ago, Anton Arhipov. Anton, welcome to the AI Native Dev.

Anton: Happy to be here. 

Simon: We go back many years. Um, yes, we first met each other. I don't think we met when I was back in the IBM days.

[00:01:16] Simon: But I joined in 2012.

Anton: 2012. 

Simon: Yeah, that's, yeah, that's exactly right because I joined ZeroTurnaround in 2012. ZeroTurnaround, for those who don't know, it's kinda like a, it was a productivity company that helped Java developers.

[00:01:32] Anton: It was a very interesting project. 

[00:01:34] Simon: Very interesting. Yeah.

[00:01:34] Anton: We built instrumenting agents for fast reloading of Java classes on the fly, without bringing down the JVM. It was like a PHP feel when you were using this tool. So you don't have to restart. You make the change, you instantly see it in the, in the app.

[00:01:50] Simon: And 'cause back in the day as well, when app servers were pretty sizable.

[00:01:54] Simon: Yeah.

[00:01:57] Anton: I remember we did some research and the average startup time was three minutes. 

Simon: Wow. 

Anton: Three minutes. It's the average. 

Simon: Yeah, that's right. 

Anton: The longest I have seen was over 30 minutes. 

[00:02:07] Simon: I had, I heard 90 at one stage from a customer. 

Anton: Oh my God. 

Simon: Just, just to reload their server. So Yeah. But lucky we're not in those days anymore.

[00:02:13] Simon: Yeah. And um, so yeah, people will probably remember the tool as JRebel, for the Java devs there. Exactly. You moved from there to JetBrains.

Anton: Yes, I did. 

Simon: And, um, you were a massive power user of IntelliJ. Back in the day? 

Anton:  Yes. 

Simon: Before you, before you worked for JetBrains. I think it was a really natural move.

[00:02:30] Anton: It was a logical move for me to go to JetBrains, but my first, uh, project wasn't in IntelliJ because I thought I want to learn new stuff.

[00:02:37] Anton: And then, uh, I actually, uh, joined TeamCity.

[00:02:41] Simon: Oh, yes, you did. That's right. 

[00:02:43] Anton: Yeah. For a couple of years I did advocacy for TeamCity, providing feedback to the product, talking to the users, doing the webinars. Learning about new technologies because I was always in Java. Yeah. And CI is not only about Java, it's also about containerization, uh, delivery, different technologies.

[00:03:01] Anton: Even PHP, you know, like Ruby, Python, whatever else, you, you still need to run the tests. In CI, you still need to run the build, whatever the jobs are.

[00:03:10] Simon: It's a little bit similar to me. When I moved into Snyk. Again, it was more of an ecosystem-agnostic kind of move. Well, obviously there were languages that were supported, but you care less about any specific ecosystem.

[00:03:20] Simon: And now, you moved into Kotlin for a little while as well, right?

[00:03:24] Anton: Yes. Uh, for five years. I’ve been five years

Simon: Five years!

Anton: in Kotlin. Mm-hmm. Uh, so I was like focused on Kotlin 100%, mostly on the server side of it. Yep. Uh, so if you've been following the news about Kotlin, you probably have seen my face speaking about new language features and so on.

[00:03:41] Anton: I still do that. Mm-hmm. Uh, so at this conference I actually did a talk about the new stuff coming into the two and to the four. Well, IntelliJ is always there. Yeah. I'm just naturally always speaking about IntelliJ, because if you use Java, if you use Kotlin, chances are you still end up in IntelliJ. And, uh, since AI is all the hype.

[00:04:02] Anton: And it's so unpredictable, like literally two years ago, you give the same prompt two times, uh, within a five-minute gap, you get a different answer.

 Simon: Yeah. 

Anton: So somehow showing the demos is very stressful with that. Yeah, yeah, yeah. So my colleagues

[00:04:17] Simon: It takes live coding to another level, right?

[00:04:18] Anton: Yeah. So, my colleagues were all like, okay, we need a demo,

[00:04:22] Anton: let Anton do that. I'm like, okay, I'm feeling adventurous. And somehow I got into this more and more, and now I was told, okay, Anton, you probably just enjoy this.

 Simon: Yeah, yeah. 

Anton: Like, and let's do it so that you cover all the AI in IntelliJ.

[00:04:39] Simon: Yeah. Yeah.

[00:04:40] Anton: Right. So we have multiple, multiple plugins, actually.

[00:04:43] Anton: Mm-hmm. Actually there's four. 

Simon: Four? So I know of two.

Anton: I know you probably don't realize, but there's four.

[00:04:49] Simon: I know of AI assistant. I know of, uh, Junie. Yeah. What are the other two?

[00:04:54] Anton: There are two embedded plugins in IntelliJ. One is called, uh, the full line completion, FLCC. 

Simon: Okay. Yep. 

[00:05:03] Anton: That provides you the, uh, full line completion. Now, it's not only the full line completion, it actually can provide you multiple lines as well. Mm-hmm. But those are short snippets. Mm-hmm. And the other one where it all started is Grazie, which is a spell checker. Ah, so the platform behind AI assistant and, uh, partially Junie as well. And all the completions that we have are based on the project that was called Grazie.

[00:05:25] Anton: Yep. For spell checking. 

Simon: Interesting.

[00:05:27] Simon: So let's talk about, actually, let's, let's take a step back and talk about, obviously with, with JetBrains, like JetBrains is one of those kind of companies that has a huge amount of loyalty from its users around the IDEs that they have. Yeah, like from my background and your background in Java, in, in the JVM, IntelliJ is a hugely, hugely used and loved tool for developers.

[00:05:56] Anton: I hope it's loved, like we receive a lot of critical feedback as well, because like the more users you have, the more different use cases you have, the more, uh,

[00:06:05] Anton: unexpected ways of using the product you will actually discover. Mm-hmm. And of course, not all the edge cases are covered. Sometimes, like everybody has bugs. Our bug tracker is full of different reports and we are receiving new ones every day. So, yes, we have feedback from side to side, like very positive and also some negative.

[00:06:31] Anton: Mm-hmm. Uh, but this is about like any product out there. I think like popularity comes with a cost. 

[00:06:41] Simon: Yeah. Yeah. Absolutely. So you've got four products today. I think it's fair to say in the early, in the very early stages, when the likes of Cursor and Copilot and a number of AI tools came out, it rocked the world in terms of, you know, developers being able to do all these amazing, wonderful things.

[00:07:00] Simon: A lot of the tools though, if you look at it, I think Roo Code does it as well, Cursor and a number of others, they fork VS Code. Yeah. And effectively, if you want to use the AI tool, you go over to VS Code and you, and you use that fork.

Anton: Exactly. 

Simon: Um, how, how did that kinda like, not affect JetBrains, but what was the reaction from, from, I guess, JetBrains, uh, IntelliJ users, for example, to potentially think, oh, maybe I could look at that because I want to use Cursor.

[00:07:28] Simon: It's a, it's a bit of a tricky one.

[00:07:30] Anton: Well, today, like among my friends, I see this pattern being used, where they use some VS Code fork for AI.

[00:07:37] Simon: Yeah.

[00:07:37] Anton: And they keep IntelliJ open for inspecting the code, navigating, doing the real development with their hands as well. Of course. Well, it’s a twofold situation. Like if I take a step back and look at what happened, like,

[00:07:52] Anton: like two, three years ago, the first one was actually Copilot, right? 

Simon: Yeah. 

Anton: With the completions. Mm-hmm. And, uh, I, I was hosting some live streams at that time, and I've got some guests who were using Copilot, but for live coding, especially on the streams, it seemed that this completion was getting in the way more than it actually helped.

Simon: Yeah, yeah. 

[00:08:12] Anton: It wasn't so smart, because the models weren't so smart. Now they are getting smarter, you get better quality, mm-hmm, from the product. Then those, uh, forks appeared as well that incorporated code generation.

Simon: Yeah. 

Anton: Into the tooling, like chat-based programming. Mm-hmm. Actually, Copilot X appeared at that time as well.

[00:08:31] Anton: Now you have a chat and you can ask a question from, uh, from the LLM and it gives you some answer, and then you decide yourself how you would, what you would do with the response, right? Yeah. How, how can you integrate with, uh, that into the project? Now, Cursor was the first one who figured out this UX around the chat and the code generation inside the editor.

[00:08:54] Anton: Yeah. Yeah. So that you could iterate quicker. So the first stage is to get the answer to, to generate the code. The other stage is to quickly put it back into the project and test it, right?

Simon: Yeah. Yeah. 

Anton: So Cursor got it right. 

Simon: And doing that across multiple files as well is the core thing. 

[00:09:12] Anton: Exactly. Yeah. But then, but then you are the agent. Yes. In this case, you decide what to integrate, where, how to iterate over the problem that you have at hand, what tests to run, like mm-hmm. Uh, do you have to run the whole suite or whatever. And now we have agents. Windsurf was the first to actually show, uh, how it could be integrated in the IDE.

[00:09:34] Anton: You give it a high-level task, maybe break it down into multiple tasks, and tell it to iterate over the problem. It figures, uh, generates the code, integrates it into the project, runs the tests. So you are now a supervisor. Mm-hmm. Right. Great. What was the reaction from, uh, from the users? Of course, they want the same stuff.

[00:09:54] Anton: Yeah. IntelliJ because they don't want to move away in, uh, in an unfamiliar environment.

[00:10:00] Simon: And I, and I think actually from that, IntelliJ users or JetBrains users will probably, I would say, be least affected compared to others, probably across all the IDEs, because I see that loyalty from people who have loved IntelliJ over many, many years and will actually resist a change.

[00:10:20] Simon: They'll fight, they'll fight for that license. Right?

[00:10:22] Anton: There are also some features in IntelliJ that feel like AI, but they are predictable and have been implemented inside the platform for many, many years. 

Simon: Yes. 

Anton: But even this, like, very funny situation is that we have been teaching people how to refactor for many years.

[00:10:41] Anton: Yeah. For 20 years. Mm-hmm. And teaching them that rename refactoring is just this Shift+F6 shortcut. But now people are like, oh, I want the next edit prediction. Yeah. Because it's a Tab key, and all you do is tab-driven development. You just accept the changes that the next edit prediction throws at you.

[00:11:02] Anton: This is something we don't have in the IDE yet, but I know the team is working on that. Mm-hmm. So soon we will have that as well. But Cursor and Windsurf and whoever else, they implemented that, and I was speaking, like talking to my friends who use that, and they say, this is all we need. Yeah. They just need this next edit prediction, and it actually replaces a lot of completions functionality.

[00:11:25] Anton: Yeah. So, you know, like naturally you would type a dot and get a completions list.

Simon: Right. Right. 

Anton: With AI, you get a full line or a snippet that you can accept line by line, or so. It's still local. Yeah. But completion does not have to be local to the place where your cursor is. Mm-hmm. It can be somewhere else in the file as well.

[00:11:43] Anton: Like, so if you are renaming a, uh, function name manually, your next logical step is that you rename the call sites. Mm-hmm. They can be inside the file itself, but they can also be somewhere else, like, in the project. Uh, all the call sites need to be updated. This is something that you would normally do with refactoring, but then you have to learn all those refactorings.

[00:12:06] Anton: Yes. And people are lazy. Yeah. Like it's a natural thing. Yeah. Like if you can do it in the simplest way possible, hit the tab key. This is like,

Simon: It just does the grunt work. 

Anton: Yeah. This is like, you, you immediately become productive. Yeah. So I, yeah.

Simon: Yeah, avoid mistakes. Avoid mistakes as well. 

Anton: But this is a very thin, like, very intimate UX problem.

[00:12:27] Anton: I, I spent some time, a few hours straight, in Cursor, uh, playing with this, uh, next edit prediction, and I felt like my head was exploding with all those completions streaming at me. Yeah. Like constantly, and I constantly had to like track what is the prediction, do I want to accept it or not, do I want to move on or not?

[00:12:50] Simon: It's an interesting model, because do you get more into the flow of just accepting, accepting, accepting? Yes. It actually turns into more of a vibe coding, because you just, you just agree with what it does, and then probably rely more on your tests to say, is this actually doing the right thing, versus validating it line by line.

[00:13:05] Simon: We are gonna get lazy as developers.

[00:13:06] Anton: Right. But, but see, like even there, you, you still track the code, right? Yeah. You are still watching the code. Compared to “vibe coding” with the agents, where you don't look at the code, mm-hmm, where you just tell it what you want to have and it generates, and you generally just check sometimes, like, does it look right?

[00:13:26] Anton: Cause it can generate a lot of code, right? Yeah. It's like accepting a huge PR. Mm-hmm. So I think one of the effects that we will have is that tools need to get better for the PR acceptance.

Simon: Yes. 

Anton: Like in the code reviewing functionality. 

Simon: Yes. Yes. 

Anton: Visualizing the code. So there were many attempts at visualizing and getting insight into the project, like how it's structured, what the architecture is, what the relationships between the components are, and so on.

[00:13:56] Anton: And this is now becoming critical. So we will probably see a lot of developments in that area as well. Yeah. Awesome. And debugging as well. Debugging.

Simon: Oh, very much. Yeah. 

Anton: Like when we have TDD and all these BDD practices, right? Why do you need the debugger? You write the test. The test is your debugging tool, right?

[00:14:17] Anton: Yeah. And now, yes, the LLMs can generate the tests, and we have some companies here offering exactly this functionality. But now it probably comes to the question: can you trust it? Yes. Is this test really correct? Mm-hmm. Mm-hmm. And what if there is no test, but you have the code?

[00:14:37] Simon: It's, it's almost gonna get more critical to review those tests.

[00:14:41] Simon: Than it is the, uh, than it is the code that gets generated. 

Anton: Exactly, exactly. 

Simon: Let’s jump into the code. We’ve talked a little bit about the AI assistant and Junie. Why don’t we do a quick demo? For those who are listening only, we’ll describe what we’re doing as we go.

[00:14:55] Simon: Probably best viewed on, uh, on YouTube. Um, so, so do check it out on YouTube as well. But, um, let's, let's, let's jump into the code.

[00:15:02] Bap: Hi! I’m Bap, developer advocate and AI native dev. Just a quick heads-up: the podcast went on strike right as Anton was about to demo. That said, at our latest conference, AI Native DevCon, Anton was invited to share his thoughts about agentic workflows and demo Junie, the latest AI coding agent from JetBrains.

[00:15:23] Bap: Let's go straight to that.

[00:15:24] Anton: Hello everyone. Today we are going to talk about AI tooling in IntelliJ. My talk will cover a few stages, starting from completions: inline completions, cloud completions, various AI actions, chat modes in IntelliJ, and finally reaching the agentic tool we recently announced, Junie.

[00:15:49] Anton: And I will walk you through the phases. So let me actually start with sharing the screen. We are here. I have IntelliJ, but we’ll start from the diagram. In fact, I have a little slide. That’s the only slide I have. Everything else will be inside IntelliJ. 

[00:16:14] Anton: So those are the stages of AI assistance inside the editors, just my creation. Okay, I haven’t created all the tools, but it’s something I came up with for explaining the whole subject. So on the left, we have minimal assistance from the tools, where you have very minimal completions, single line completions or single statement completions inside the editor, reaching out to the rightmost end of the spectrum.

[00:16:41] Anton: You have all this agentic mode programming where you kinda lose, stop tracking what kind of code you’re actually creating by just guiding the tool what kind of functionality you want to generate. And we start with a very basic one. So we start with minimal assistance from the tooling.

[00:17:00] Anton: Let’s see how it looks inside IntelliJ. I have a Spring Boot Application, it’s a very minimal one. It’s just one controller. There are a few implementations of the Algorithm interface. They are just geometry filters, basically. And let’s say I want to implement functionality that will select the implementation based on a configuration option.

[00:17:28] Anton: So that’s the scenario we’re trying to implement. And at the moment, I assume that I’m really well aware of what kind of code I want to write, and I don’t need the AI tool to actually write the code for me. I want to do it myself. I want to stay in control. So what kind of code would I write here? Maybe a function?

[00:17:57] Anton: A function that selects Algorithm by a property that is injected from the configuration, and a function that selects Algorithm by a property that is injected from the configuration, and that’s going to be something like this. There’s the dollar. I can get a type, a property name, algorithm type, and then property parameter name should be type of type String. String. And, well, as you see, I’m not really getting a lot of assistance by the AI yet.

[00:18:31] Anton: Uh, that's gonna be,

[00:18:36] Anton: right there and that's gonna be, uh, an algorithm. So we first see some completion happening right here at the end of the line because what is happening right now as I type is there is a local model working inside the IDE. Provided by a plugin, code full line completion that looks at the code, skipping the comments that it sees above the cursor in the same file, and tries to come up with a suggestion.

[00:19:10] Anton: In fact, I can visualize what is happening. Let me complete this line with an internal tool that I have installed for myself. So there are stats happening. This is something I installed just for this demo, to visualize what is happening when I hit the buttons and the IDE is working on providing different suggestions.

[00:19:38] Anton: So as I type here, I know exactly what code I want to type, right? I want to check for the value of the String, and if it matches some sort of value, I would return a class or a type instance. So return. And as you can see on the bottom right, the model has calculated that with a probability of 36%.

[00:20:05] Anton: The next statement that should satisfy my requirement is going to be a when with the type subject, and it also renders it in the editor. That's cool. Cool information for me to check how the AI completion works, but let's see how it works for the next statements I’m going to type. As you can see, it found some suggestion by the model that is working on my machine right now.

[00:20:34] Anton: But this suggestion doesn't seem to be relevant. It might be red code, it might be irrelevant code, maybe even whatever the model generates. So for this kind of workflow, where I am exactly aware of what kind of code I want to write, I just need very fast completions to help me with the task that I'm currently working on, and not to get in the way. That's the most annoying thing, when the tool is getting in the way. We would rather not show you any completion than show you something incorrect or something that is hallucinated.

[00:21:13] Anton: So if I continue, let’s say quick is one of the values I want to get. I can see that the model is generating something, and for the quick case I want to get an instance on the next line. It already suggests something that is relevant and the code is not red, so I can accept it, and it’s very minimal.
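For readers following along without the video, the function Anton is building up might look roughly like this. This is a hedged sketch in plain Java rather than the demo's Spring Boot setup, and the names (Algorithm, QuickFilter, selectAlgorithm) are assumptions for illustration, not the actual demo code; in the real app the property would come in via @Value("${algorithm.type}") injection.

```java
import java.util.Map;

// Hypothetical sketch of the selector typed in the demo.
// In the real Spring Boot app the property would be injected with
// @Value("${algorithm.type}"); here we read it from a plain Map instead.
interface Algorithm {
    String name();
}

class QuickFilter implements Algorithm {
    public String name() { return "quick"; }
}

class PreciseFilter implements Algorithm {
    public String name() { return "precise"; }
}

public class AlgorithmSelector {
    // Select an Algorithm implementation based on the configuration value,
    // mirroring the switch/when-style dispatch the completion suggests.
    static Algorithm selectAlgorithm(String type) {
        switch (type) {
            case "quick":   return new QuickFilter();
            case "precise": return new PreciseFilter();
            default:
                // Throw if no configured value matches, as in the demo.
                throw new IllegalArgumentException("Unknown algorithm type: " + type);
        }
    }

    public static void main(String[] args) {
        Map<String, String> config = Map.of("algorithm.type", "quick");
        Algorithm algo = selectAlgorithm(config.get("algorithm.type"));
        System.out.println(algo.name()); // prints "quick"
    }
}
```

In Kotlin, which the demo actually uses, the switch would be a `when` expression over the injected property, which is what the local model predicts with its "when with the type subject" suggestion.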

[00:21:42] Anton: That's the very start of AI-assisted programming inside IntelliJ IDEA, and it's configurable. We have options for inline completions, where you can enable or disable full line completions based on the language and download the relevant models. I currently have all of them downloaded.

[00:22:00] Anton: Therefore, they just display as checkboxes, but normally you would have a download button there. We also have another step in the story where we can say that we know what we want to get, but we are too lazy to type, and we want to get larger code snippets from the models as assistance, so we can enable the cloud completion suggestions.

[00:22:29] Anton: They can suggest full snippets of code. There are limits, of course; it won't generate like 1000 lines of code, but it'll generate sensible code snippets that you can accept partially, token by token or line by line. It has a few levels of configuration.

[00:22:57] Anton: If you want the filter that works inside the IDE to filter out more, to allow less red code or, let's say, less creative code, then you would select the focused mode for this completion; or you want to get more creative and you are fine getting some interesting code snippets from the model. I have currently enabled the balanced mode. Let's see how it works. I feel adventurous today.

[00:23:20] Anton: So I have enabled Cloud completion and let's see how it behaves with the same, uh, scenario that I just had. So previously, as I said, the full line completion would ignore the comments, so it would only look at the code snippets. But here I am, I already added a comment of a function that I want to write, uh, and the cloud completion would account for that.

[00:23:42] Anton: So if I just type it here, it already sees that there are implementations of the algorithm, there is an interface, we are in the context of, uh, a Spring Boot Application, we want to read a configuration option and so on, and it comes up with this full, complete, correct code snippet, which I could now accept either partially,

[00:24:12] Anton: let’s see, uh, I have something with my, uh, keyboard right now, or completely, uh, accepting those snippets as is. Maybe I don’t want some part of the snippet, and, uh, I would rather throw an exception if, uh, I don’t match any of the values from the configuration. And, uh, yeah, currently you can see that Mellum is the completion provider, in the diagnostics right here in the bottom left: a completion model that we recently open sourced, uh, to the community.

[00:24:56] Anton: You can download it and experiment with that as well. So as you can see, we have two levels of completions already, where either I know exactly what I want to do and I want to control every statement, or I can loosen the reins a little bit, and I’m fine with, uh, the completion generating various code snippets, like multiple lines of code.

[00:25:21] Anton: Let’s go further. Uh, the next level is in-editor generation with minimalistic prompts, meaning that you are still, uh, focused on the code that you are currently writing. Assume, um, let’s try generating a function that would split a list of values for us. Split, uh, a function, or a utility function.

[00:25:50] Anton: To split a list of elements into a list of lists of elements by an example element. So this would invoke a local code generation, and it’s kind of similar to the snippet completion that we just saw. The difference is that the completion would run or generate the code in the place of the cursor, but inline code generation would actually span, uh, across the file.

[00:26:27] Anton: It might add imports, it might change the function names, it might split functions. It might generate some additional components for this to work. And of course, it’s also double-checked for the red code, for the relevance and so on. So now we have the patch right in the code. We can do some follow-up here to correct our requirements through the generated code.
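The utility Anton prompts for, splitting a list of elements into a list of lists by a delimiter element, might come out something like the following. This is a hedged sketch; the generated code isn't visible on audio, and the name splitBy is a hypothetical choice.

```java
import java.util.ArrayList;
import java.util.List;

public class SplitUtil {
    // Hypothetical sketch of the utility prompted for in the demo:
    // split a list into sublists, using a given element as the delimiter.
    // The delimiter itself is not included in the output.
    static <T> List<List<T>> splitBy(List<T> elements, T delimiter) {
        List<List<T>> result = new ArrayList<>();
        List<T> current = new ArrayList<>();
        for (T element : elements) {
            if (element.equals(delimiter)) {
                result.add(current);      // close the current chunk
                current = new ArrayList<>();
            } else {
                current.add(element);
            }
        }
        result.add(current);              // chunk after the final delimiter
        return result;
    }

    public static void main(String[] args) {
        List<Integer> input = List.of(1, 2, 0, 3, 0, 4, 5);
        System.out.println(splitBy(input, 0)); // prints [[1, 2], [3], [4, 5]]
    }
}
```

As Anton notes next, a follow-up prompt could regenerate this in a functional or a mutable style; the shape of the result would stay the same.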

[00:26:56] Anton: Maybe what I see here is not, uh, exactly corresponding to what I want. Uh, maybe I want to add extra constraints, or I can just accept it, uh, of course. Further on, I can either regenerate this code somehow, say that I don’t want functional style or, vice versa, I want it to be mutable, uh, or start calling the AI actions.

[00:27:19] Anton: And the AI actions actually start getting you out of the current screen, uh, current file, and, uh, maybe generate some code outside of that file. So for instance, we can ask it to generate unit tests. So currently, what is happening? The AI assistant matched the name, or the suffix, of the file, uh, that we used to call the action from, and it created a relevant name for the test file and created a few test cases for me to—

[00:28:00] Anton: Uh, to satisfy the functionality that I just generated for, for that function. As you can see, we are already loosening our control, uh, a little bit more. We are not focusing on a single line of code. We are not focusing even on a local snippet of code. We are now, uh, generating the code in different places of the file and even in different files.

[00:28:25] Anton: So let's, uh, accept that. I'm not sure if the tests would fail or not; let's assume that some of them should fail, uh, some of them probably will pass. Uh, but, uh, currently our goal is not to test the code. The goal is to explain at which stage we are. So we generated the code inside the editor. In fact, we can move on and, uh, look at the chat now.

[00:28:58] Anton: So from this point on, we are getting out of the local file. We are shifting our focus out of the editor. As you can see, some of the tests failed, some of the tests, uh, didn’t fail, but, uh, we can take a different flow and start fixing them, asking the AI assistant how to fix them.

[00:29:23] Anton: That’s not in focus right now. So let's get ourselves into the chat. How can we do that? For instance, the, uh, function that we just generated, this one, we can ask the, uh, AI assistant to start doing something with this snippet of code, like, start a chat using this selection. We can either take the full file there or just the function, and ask

[00:29:50] Anton: the AI assistant to do something with this code, like transform it, or maybe test it somehow differently, suggest something. There is now a dilemma. Not really a dilemma, but there is one important thing you have to account for: by default, the chat would not look into the whole project, right? We just asked it to do something with a little snippet of code.

[00:30:17] Anton: So we already take this action to isolate it. Uh, but we can choose to tell it to look at the whole code base, please. Uh, when, whenever action. I. Uh, ask you to do, please scan the project for the relevant for the relevance of my query, and maybe populate the context with additional information from the source code.

[00:30:39] Anton: Uh, depending what, what will I ask for? So maybe I will ask to rewrite, rewrite this code, uh, in imperative style. The, uh, what will happen, the chat will retransform my query. Ask the platform if there are any relevant resources that we can add into the context. Currently, there were no additional, uh, files attached, only the selection and the current file actually.

[00:31:12] Anton: And then we just got the response. So at this point, we are, um, kind of an agent right now. We are the agent. We take the decision what kind of code we want to integrate back into the project. What, uh, tests we have to run to assure that the code is correct and so on. So there is a, an extra mode for figuring out, uh, or automating integration of the code generated, uh, model generated code back into the project.

[00:31:45] Anton: We are currently in the chat mode, but we could have automated that as well. So there is a beta feature. I'm not going to run that, uh, at the moment because we want to get to agentic workflows quicker, but at this point I can just tell, uh, the AI system to integrate the code correctly and, uh, it'll display.

[00:32:05] Anton: I. Me the diff that, uh, applies, applies the code snippet. And, uh, just to quickly demonstrate what could happen, uh, so we, we gonna switch into code base mode. Uh, we gonna tell the chats that it can observe whatever, uh, we have in the project. Yes.

[00:32:25] Bap: Sorry Anton. Um, this is super exciting and, and, and, and just wanted to flag very, very quickly that there's a lot of people who are asking you a ton of very relevant question and we are jumping in one minute with the next talk.

[00:32:40] Bap: I have tried to curate like, uh, some of these questions, so I'm just gonna. In the next one minute, if you can try to answer multiple questions at once. So at least these were the most relevant.

Anton: Okay.

Bap: First of all, a couple of people wanted to ask — and I think you mentioned it throughout your talk — what libraries were used. Could you restate what stack JetBrains' AI Assistant and Junie are based on?

[00:33:07] Bap: What languages is it based on? And a very quick follow up. Like, how does Junie understand like the legacy system and like the constraints, uh, for each, uh, enterprise or like a company or project?

[00:33:21] Anton: Right. So we have two uh, plugins. I have been showing only the part for the AI assistant, and I just executed the query for Junie to run on the screen.

[00:33:31] Anton: While I'm answering the question, the AI assistant provides completion, the model selector. You can chat with your project and, uh, you are the agent who decides what kind of code to integrate and what tests to run. Junie, in contrast, just automates all that for you, so it'll iterate on the problem that you have asked for.

[00:33:56] Anton: Like, for instance, in, in this workflow that I just, uh, started, I asked Junie to implement the persistence layer, which means that it will span the, the code changes will span across the entire project, starting from the controller to services, configuration files, build files, maybe for adding the additional.

[00:34:17] Anton: Uh, libraries and so on. So the stack, uh, you mean the implementation probably then? Uh, it's all implemented in Kotlin, of course, because most of the, uh, plugins that we built and like we, we develop for Intellij platform, they are built with Kotlin. Uh, it works for any language, but for Junie specifically, what we want to do is to be able to.

[00:34:44] Anton: Integrate with the platform specific or technology specific tools, meaning if I generate code for Java, I should be able to, uh, run the Java compiler or Java linter or some Java build tool like Maven or Gradle. And get, uh, maybe run the tests. So I need to integrate with, uh, test runner and so on. And that is technology specific, right?

[00:35:12] Anton: So, uh, for Junie, we do that. For Junie, it depends what kind of technology you use in the project. However, code generation doesn't depend on that. So even if Junie does not support some technology like running Python tests inside IntelliJ maybe because it's implemented for PyCharm, uh, Junie is still able to generate the code that you ask for, uh, even though it's not supported inside the platform.

[00:35:41] Anton: For instance, you may want to generate some C++ code in, uh, inside IntelliJ and there is no C++ tooling by default, so that's possible.

[00:35:51] Bap: Wonderful. Uh, thank you so much for, for explaining all that Anton and uh, yeah, like I think a lot of us started coding with JetBrains and we're just all super excited that Junie is out.

[00:36:01] Bap: Uh, so yeah, excited to see where all that is gonna go, but yeah, fantastic talk, super cool session. I think everybody really, really enjoyed it. So thank you so much, Anton.

Subscribe to our podcasts here

Welcome to the AI Native Dev Podcast, hosted by Guy Podjarny and Simon Maple. If you're a developer or dev leader, join us as we explore and help shape the future of software development in the AI era.


THE WEEKLY DIGEST

Subscribe

Sign up to be notified when we post.

Subscribe


JOIN US ON

Discord

Come and join the discussion.

Join
