
Use of Fine Tune in Completion and Agent

with Nick Frolov


Nick Frolov

4 min read · 13 May 2025

Head of Product, Refact.ai

In this talk

Nick Frolov, Head of Development at EPAM Netherlands, explores the evolving role of fine-tuning in AI development. In his talk, he demystifies how targeted fine-tuning enhances code completion and intelligent agent design without the cost of building full LLMs. Drawing on tools like Unsloth and TransformerLab, Frolov shares strategies, cautions against overfitting, and highlights how synthetic datasets and open-source projects like Refact.ai are making fine-tuning more practical, efficient, and accessible than ever.

The Evolving Landscape of Fine-Tuning Large Language Models

In the ever-changing world of AI and software development, fine-tuning large language models (LLMs) has become a pivotal technique. Nick Frolov, a seasoned software engineer and Head of Development at EPAM Netherlands, sheds light on this crucial aspect in his talk, “Use of Fine Tune in Completion and Agents.” Drawing from his extensive experience and involvement in the open-source project Refact.ai, Frolov offers practical insights into leveraging fine-tuning for code completion and intelligent agent orchestration.

The Power and Rationale for Fine-Tuning

Frolov opens his presentation by acknowledging the rapid advancements AI has brought to software development. He emphasizes the importance of fine-tuning as a cost-effective alternative to building massive LLMs from scratch. “We are able to adjust some part of it, customize some behavior of the model and see how we can better use the capabilities,” he explains. This approach allows developers to adapt pre-trained models to specific tasks, enhancing their functionality without the need for extensive resources.
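To make the idea concrete, here is a minimal sketch (not taken from the talk) of parameter-efficient fine-tuning with the Hugging Face peft library’s LoRA adapters, which trains a small set of adapter weights on top of a frozen pre-trained model. The base model id and dataset file are placeholders.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_id = "codellama/CodeLlama-7b-hf"            # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_id)

# LoRA trains small low-rank adapter matrices; the base weights stay frozen.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"],  # module names depend on the architecture
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()               # usually well under 1% of the base model

# Placeholder dataset: one JSON object per line with a "text" field.
data = load_dataset("json", data_files="company_snippets.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")                # saves only the adapter weights
```

The point of the sketch is the cost argument Frolov makes: only the adapter parameters are trained, so adapting a large pre-trained model to a narrow task needs a fraction of the compute of training from scratch.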

Tools and Platforms Enhancing Fine-Tuning

A significant part of Frolov’s talk is dedicated to discussing various tools and platforms that make fine-tuning accessible. He highlights Unsloth for its comprehensive data preparation capabilities and TransformerLab for its user-friendly interface that aids dataset creation. Frolov notes that major providers such as OpenAI and Anthropic now support fine-tuning with minimal cost differences, making this technology more accessible than ever before.
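For the hosted route Frolov mentions, the workflow is typically just a file upload followed by a job creation call. Below is a hedged sketch against OpenAI’s fine-tuning API; the model name and file are illustrative, and supported models and pricing change, so the provider’s documentation is the source of truth.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# train.jsonl: one chat example per line, e.g.
# {"messages": [{"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}
training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # illustrative; use a model the API currently lists as tunable
)
print(job.id, job.status)

# Poll until the job completes; the result is a new model id usable like any other model.
finished = client.fine_tuning.jobs.retrieve(job.id)
print(finished.fine_tuned_model)
```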

Addressing Challenges in Fine-Tuning

While fine-tuning presents numerous benefits, it also comes with challenges. Frolov warns about the risks of overfitting and “catastrophic forgetting,” where excessive fine-tuning can degrade a model’s general competencies. “Fine-tuned models should be used only for the particular use cases for which they are created, not as your general model,” he cautions, highlighting the need for strategic application of this technique.
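One simple, hedged way to watch for the forgetting Frolov warns about is to track quality on data outside the fine-tuning domain as well as inside it. The sketch below compares perplexity for the base and fine-tuned models using Hugging Face transformers; all model ids and corpora are placeholders.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, texts, max_length=512):
    """Average perplexity of `model` over `texts` (lower is better)."""
    model.eval()
    losses = []
    with torch.no_grad():
        for text in texts:
            enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_length)
            losses.append(model(**enc, labels=enc["input_ids"]).loss.item())
    return math.exp(sum(losses) / len(losses))

domain_texts = ["..."]   # held-out samples from the fine-tuning distribution
general_texts = ["..."]  # generic code/text the base model already handles well

tokenizer = AutoTokenizer.from_pretrained("base-model-id")            # placeholder ids
base = AutoModelForCausalLM.from_pretrained("base-model-id")
tuned = AutoModelForCausalLM.from_pretrained("path/to/fine-tuned")

for name, model in [("base", base), ("fine-tuned", tuned)]:
    print(name,
          "domain:", perplexity(model, tokenizer, domain_texts),
          "general:", perplexity(model, tokenizer, general_texts))
# If domain perplexity drops but general perplexity climbs sharply,
# the model is overfitting and losing its general competence.
```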

Real-World Applications: Code Completion and Intelligent Agents

Frolov delves into the practical applications of fine-tuning, particularly in code completion and the integration of AI agents with large databases. He explains that models tuned for code completion must be lightweight and fast, advocating for tuning on enterprise-specific boilerplate code. In the realm of intelligent agents, Frolov discusses generating performant queries by analyzing production data, leading to more efficient and relevant requests.
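Completion models are commonly trained on fill-in-the-middle (FIM) style examples, so tuning on enterprise boilerplate largely comes down to converting internal repositories into that format. Here is a rough sketch assuming StarCoder-style FIM tokens; the token names and repository path are placeholders and vary by model family.

```python
import json
import pathlib
import random

# StarCoder-style special tokens; other model families use different names.
FIM_PREFIX, FIM_SUFFIX, FIM_MIDDLE = "<fim_prefix>", "<fim_suffix>", "<fim_middle>"

def fim_example(source: str) -> str:
    """Mask a random span of a file as the completion target (fill-in-the-middle)."""
    a, b = sorted(random.sample(range(len(source)), 2))
    prefix, middle, suffix = source[:a], source[a:b], source[b:]
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}{middle}"

with open("completion_train.jsonl", "w") as out:
    for path in pathlib.Path("internal_repo").rglob("*.py"):  # hypothetical boilerplate-heavy repo
        text = path.read_text(errors="ignore")
        if len(text) > 200:
            out.write(json.dumps({"text": fim_example(text)}) + "\n")
```

Keeping the tuned completion model small, as Frolov advocates, is what makes it fast enough to call on every keystroke.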

Future Directions and Accessibility

Concluding his talk, Frolov reassures practitioners that with today’s tools and APIs, fine-tuning has become significantly more straightforward. He emphasizes the role of synthetic datasets in easing data preparation and encourages the exploration of open-source tools like Refact AI. This democratization of fine-tuning technology empowers both individual developers and enterprise teams to enhance their AI capabilities.
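As a hedged illustration of the synthetic-dataset idea, one common pattern is to have a strong general model draft task-specific training pairs, which are then reviewed and filtered before fine-tuning. The sketch below uses the OpenAI chat API; the model id, topics, and output schema are placeholders, not Frolov’s pipeline.

```python
import json
from openai import OpenAI

client = OpenAI()

TOPICS = ["pagination helper", "retry decorator", "config loader"]  # hypothetical internal patterns

with open("synthetic_train.jsonl", "w") as out:
    for topic in TOPICS:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model id
            messages=[
                {"role": "system", "content": "You write short, idiomatic Python snippets."},
                {"role": "user", "content": f"Write a minimal example of a {topic} in our house style."},
            ],
        )
        snippet = resp.choices[0].message.content
        # Human or automated review should filter these before they become training data.
        out.write(json.dumps({"text": snippet}) + "\n")
```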

About The Speaker

Nick Frolov

Head of Product, Refact.ai

20+ years of software experience; Head of Development at EPAM Netherlands; building Refact.ai with an ex-OpenAI researcher.
